Sample records for arc segmentation algorithm

  1. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
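The selection step described above can be sketched as clique enumeration followed by a greedy cover. The sketch below is illustrative only; the data and the particular greedy rule are assumptions, not the actual NASARC heuristic:

```python
from itertools import combinations

# Hypothetical sketch of the NASARC-style selection step: enumerate
# pairwise-compatible groups (cliques), then greedily choose groups so
# every administration ends up in one.

def compatible_groups(admins, compatible):
    """All subsets whose members are pairwise compatible (cliques)."""
    groups = []
    for r in range(1, len(admins) + 1):
        for combo in combinations(admins, r):
            if all(frozenset(p) in compatible for p in combinations(combo, 2)):
                groups.append(set(combo))
    return groups

def assign_pdas(admins, groups):
    """Greedy cover: repeatedly take the group covering the most
    administrations that still lack a predetermined arc (PDA)."""
    unassigned, chosen = set(admins), []
    while unassigned:
        best = max(groups, key=lambda g: len(g & unassigned))
        if not best & unassigned:      # no group helps; stop
            break
        chosen.append(best)
        unassigned -= best
    return chosen

admins = ["A", "B", "C", "D"]
compatible = {frozenset(p) for p in [("A", "B"), ("A", "C"), ("B", "C")]}
plan = assign_pdas(admins, compatible_groups(admins, compatible))
```

In this toy case the largest compatible group {A, B, C} is chosen first, and D receives its own arc segment.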

  2. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns; then, by scanning and projecting within each column, the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of projection itself, avoids the effect of arc image information on page segmentation, and can accurately segment text images with complex layouts.
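A minimal sketch of the projection idea described in this abstract, assuming a binary image where nonzero pixels are ink (illustrative, not the authors' code):

```python
import numpy as np

# Projection-based layout segmentation sketch: project the image onto an
# axis and split at whitespace valleys where the projection profile is zero.
# Applying this first column-wise, then row-wise within each column, gives
# the recursive top-down scheme the abstract describes.

def split_by_projection(img, axis):
    """Return (start, end) index ranges of content runs along `axis`."""
    profile = img.sum(axis=axis)        # projection profile
    ink = profile > 0
    runs, start = [], None
    for i, on in enumerate(ink):
        if on and start is None:
            start = i
        elif not on and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(ink)))
    return runs

# Two "columns" of ink separated by a blank gutter
page = np.zeros((6, 10), dtype=int)
page[:, 1:4] = 1
page[:, 6:9] = 1
cols = split_by_projection(page, axis=0)   # vertical projection -> columns
```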

  3. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  4. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  5. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  6. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  7. SU-F-T-273: Using a Diode Array to Explore the Weakness of TPS DoseCalculation Algorithm for VMAT and Sliding Window Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J; Lu, B; Yan, G

    Purpose: To identify the weakness of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: The VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc had the same segments and the original monitor units. Arc angles were less than ±30°. Multiple arcs were delivered through consecutive and repetitive gantry rotations clockwise and counterclockwise. A source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To characterize dose errors arising in delivery of the VMAT fields, numerous fields having the same segments as the VMAT fields were irradiated using the static and step-and-shoot delivery techniques. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of the multi-leaf collimator leaves. Doses calculated with the adaptive convolution algorithm were compared against measurements using distance-to-agreement and dose-difference criteria of 3 mm and 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial arc delivery of the VMAT fields yielded a passing rate of 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The doses calculated for the SW technique showed a dose difference over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished using the suggested QA method. This partial arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate dose estimates.
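The 3 mm/3% comparison mentioned in this abstract is commonly formalized as a gamma index. A toy one-dimensional version, assuming a global dose-difference normalization (illustrative, not the clinical QA software used in the study):

```python
import numpy as np

# 1-D gamma-index sketch with 3%/3 mm criteria: at each measured point,
# gamma is the minimum combined distance/dose-difference metric over all
# reference points; a point passes when gamma <= 1.

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0):
    """Fraction of measured points with gamma <= 1 against the reference."""
    x = np.arange(len(ref)) * spacing_mm
    dmax = ref.max()                     # global normalization
    passes = []
    for xm, dm in zip(x, meas):
        g = np.sqrt(((x - xm) / dta_mm) ** 2 +
                    ((ref - dm) / (dd * dmax)) ** 2).min()
        passes.append(g <= 1.0)
    return np.mean(passes)

ref = np.array([0.0, 50.0, 100.0, 50.0, 0.0])
meas = ref * 1.02            # a uniform 2% over-response should pass 3%/3 mm
rate = gamma_pass_rate(ref, meas, spacing_mm=2.0)
```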

  8. SpArcFiRe: Scalable automated detection of spiral galaxy arm segments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Darren R.; Hayes, Wayne B., E-mail: drdavis@uci.edu, E-mail: whayes@uci.edu

    Given an approximately centered image of a spiral galaxy, we describe an entirely automated method that finds, centers, and sizes the galaxy (possibly masking nearby stars and other objects if necessary in order to isolate the galaxy itself) and then automatically extracts structural information about the spiral arms. For each arm segment found, we list the pixels in that segment, allowing image analysis on a per-arm-segment basis. We also perform a least-squares fit of a logarithmic spiral arc to the pixels in that segment, giving per-arc parameters, such as the pitch angle, arm segment length, location, etc. The algorithm takes about one minute per galaxy, and can easily be scaled using parallelism. We have run it on all ∼644,000 Sloan objects that are larger than 40 pixels across and classified as 'galaxies'. We find a very good correlation between our quantitative description of a spiral structure and the qualitative description provided by Galaxy Zoo humans. Our objective, quantitative measures of structure demonstrate the difficulty in defining exactly what constitutes a spiral 'arm', leading us to prefer the term 'arm segment'. We find that pitch angle often varies significantly segment-to-segment in a single spiral galaxy, making it difficult to define the pitch angle for a single galaxy. We demonstrate how our new database of arm segments can be queried to find galaxies satisfying specific quantitative visual criteria. For example, even though our code does not explicitly find rings, a good surrogate is to look for galaxies having one long, low-pitch-angle arm, which is how our code views ring galaxies. SpArcFiRe is available at http://sparcfire.ics.uci.edu.
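The per-segment logarithmic-spiral fit can be sketched by linearizing r = a·exp(b·θ): fitting log r against θ gives b, and the pitch angle is arctan(b). This is an illustrative reconstruction, not SpArcFiRe's implementation:

```python
import numpy as np

# Least-squares fit of a logarithmic spiral r = a * exp(b * theta) to
# arm-segment pixels in polar form, via linear regression on log r.

def fit_log_spiral(theta, r):
    """Return (a, b, pitch_deg) for r = a * exp(b * theta)."""
    b, log_a = np.polyfit(theta, np.log(r), 1)   # slope, intercept
    return np.exp(log_a), b, np.degrees(np.arctan(b))

# Synthetic arm-segment pixels with a known pitch angle
theta = np.linspace(0.0, 2.0, 50)
r = 2.0 * np.exp(0.3 * theta)
a, b, pitch = fit_log_spiral(theta, r)
```

On noiseless synthetic data the fit recovers a = 2, b = 0.3, and a pitch angle of about 16.7 degrees.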

  9. On the role of the optimization algorithm of RapidArc(®) volumetric modulated arc therapy on plan quality and efficiency.

    PubMed

    Vanetti, Eugenio; Nicolini, Giorgia; Nord, Janne; Peltola, Jarkko; Clivio, Alessandro; Fogliata, Antonella; Cozzi, Luca

    2011-11-01

    The RapidArc volumetric modulated arc therapy (VMAT) planning process is based on a core engine, the so-called progressive resolution optimizer (PRO). This is the optimization algorithm used to determine the combination of field shapes and segment weights (with dose rate and gantry speed variations) that best approximates the desired dose distribution in the inverse planning problem. A study was performed to assess the behavior of two versions of PRO. These two versions differ mostly in the way continuous variables describing the modulated arc are sampled into discrete control points, in planning efficiency, and in the presence of some new features. The analysis aimed to assess (i) plan quality, (ii) technical delivery aspects, (iii) agreement between delivery and calculations, and (iv) planning efficiency of the two versions. RapidArc plans were generated for four groups of patients (five patients each): anal canal, advanced lung, head and neck, and multiple brain metastases, designed to test different levels of planning complexity and anatomical features. Plans from optimization with PRO2 (first generation of the RapidArc optimizer) were compared against PRO3 (second generation of the algorithm). Additional plans were optimized with PRO3 using new features: the jaw tracking, intermediate dose, and air cavity correction options. Results showed that (i) plan quality was generally improved with PRO3 and, although not for all parameters, some of the scored indices showed a macroscopic improvement with PRO3; (ii) PRO3 optimization leads to simpler patterns of the dynamic parameters, particularly for dose rate; (iii) no differences were observed between the two algorithms in terms of pretreatment quality assurance measurements; and (iv) PRO3 optimization was generally faster, with a time reduction of a factor of approximately 3.5 with respect to PRO2. These results indicate that PRO3 is either clinically beneficial or neutral in terms of dosimetric quality, while showing significant advantages in speed and technical aspects.

  10. Four-dimensional guidance algorithms for aircraft in an air traffic control environment

    NASA Technical Reports Server (NTRS)

    Pecsvaradi, T.

    1975-01-01

    Theoretical development and computer implementation of three guidance algorithms are presented. From a small set of input parameters the algorithms generate the ground track, altitude profile, and speed profile required to implement an experimental 4-D guidance system. Given a sequence of waypoints that define a nominal flight path, the first algorithm generates a realistic, flyable ground track consisting of a sequence of straight line segments and circular arcs. Each circular turn is constrained by the minimum turning radius of the aircraft. The ground track and the specified waypoint altitudes are used as inputs to the second algorithm which generates the altitude profile. The altitude profile consists of piecewise constant flight path angle segments, each segment lying within specified upper and lower bounds. The third algorithm generates a feasible speed profile subject to constraints on the rate of change in speed, permissible speed ranges, and effects of wind. Flight path parameters are then combined into a chronological sequence to form the 4-D guidance vectors. These vectors can be used to drive the autopilot/autothrottle of the aircraft so that a 4-D flight path could be tracked completely automatically; or these vectors may be used to drive the flight director and other cockpit displays, thereby enabling the pilot to track a 4-D flight path manually.
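The first algorithm's core geometry, joining straight legs with circular arcs of at least the minimum turning radius, can be sketched as a fillet computation at each interior waypoint. Function names and conventions here are assumptions for illustration, not the NASA report's:

```python
import math

# At an interior waypoint, the two straight-line legs are joined by a
# circular arc of the given radius. The tangent length tells how far
# before/after the waypoint the turn must begin/end; turns are assumed
# to be less than 180 degrees.

def fillet_turn(p0, p1, p2, radius):
    """Signed heading change at waypoint p1, the tangent length along each
    leg, and the arc length of the circular turn."""
    h_in = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    h_out = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    dpsi = math.remainder(h_out - h_in, math.tau)   # signed turn angle
    tangent = radius * math.tan(abs(dpsi) / 2.0)
    arc_len = radius * abs(dpsi)
    return dpsi, tangent, arc_len

# A 90-degree right turn flown with a 2 km minimum-radius circle
dpsi, t, s = fillet_turn((0, 0), (10, 0), (10, -10), radius=2.0)
```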

  11. Lithospheric buckling and intra-arc stresses: A mechanism for arc segmentation

    NASA Technical Reports Server (NTRS)

    Nelson, Kerri L.

    1989-01-01

    Comparison of segment development in a number of arcs has shown that consistent relationships exist between segmentation, volcanism, and variable stresses. Researchers successfully modeled these relationships using the conceptual model of lithospheric buckling of Yamaoka et al. (1986; 1987). Lithospheric buckling (deformation) provides the mechanism needed to explain segmentation phenomena: offsets in volcanic fronts, the distribution of calderas within segments, variable segment stresses, and the chemical diversity seen between segment-boundary and segment-interior magmas.

  12. Trajectory generation for an on-road autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Horst, John; Barbera, Anthony

    2006-05-01

    We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle in lane safely and efficiently within speed and acceleration maximums. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.
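The one-dimensional trajectory along path length can be sketched as a trapezoidal speed profile under speed and acceleration caps, a common simplification of this stage (illustrative; not the authors' algorithm):

```python
# Trapezoidal 1-D trajectory along path length: accelerate at a_max up to
# v_max, cruise, then brake at a_max so the vehicle stops near path_len.
# Parameters are illustrative.

def speed_profile(path_len, v_max, a_max, dt):
    """Sample (s, v) pairs along the path until the vehicle stops."""
    samples, s, v = [], 0.0, 0.0
    while s < path_len:
        brake_dist = v * v / (2.0 * a_max)   # distance needed to stop
        if path_len - s <= brake_dist:
            v = max(v - a_max * dt, 0.0)     # decelerate toward a stop
        else:
            v = min(v + a_max * dt, v_max)   # accelerate or cruise
        s += v * dt
        samples.append((s, v))
        if v == 0.0:
            break
    return samples

traj = speed_profile(path_len=100.0, v_max=10.0, a_max=2.0, dt=0.1)
```

The sampled (position, speed) pairs correspond to the uniformly time-sampled trajectory points described above; warping them onto the 2-D path yields the final guidance vectors.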

  13. Finite element analyses of thin film active grazing incidence x-ray optics

    NASA Astrophysics Data System (ADS)

    Davis, William N.; Reid, Paul B.; Schwartz, Daniel A.

    2010-09-01

    The Chandra X-ray Observatory, with its sub-arc second resolution, has revolutionized X-ray astronomy by revealing an extremely complex X-ray sky and demonstrating the power of the X-ray window in exploring fundamental astrophysical problems. Larger area telescopes of still higher angular resolution promise further advances. We are engaged in the development of a mission concept, Generation-X, a 0.1 arc second resolution x-ray telescope with tens of square meters of collecting area, 500 times that of Chandra. To achieve these two requirements of imaging and area, we are developing a grazing incidence telescope comprised of many mirror segments. Each segment is an adjustable mirror that is a section of a paraboloid or hyperboloid, aligned and figure corrected in situ on-orbit. To that end, finite element analyses of thin glass mirrors are performed to determine influence functions for each actuator on the mirrors, in order to develop algorithms for correction of mirror deformations. The effects of several mirror mounting schemes are also studied. The finite element analysis results, combined with measurements made on prototype mirrors, will be used to further refine the correction algorithms.
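Figure correction from actuator influence functions is typically posed as a linear least-squares problem: each column of an influence matrix gives one actuator's effect on the mirror figure, and the commands that best cancel a measured deformation minimize ||Ax + d||. A hedged sketch with synthetic matrices standing in for the finite element results:

```python
import numpy as np

# Influence-function correction sketch: columns of A are per-actuator
# influence functions sampled at mirror figure nodes (synthetic here, in
# place of the finite element analysis results in the abstract).

rng = np.random.default_rng(0)
n_nodes, n_actuators = 200, 12
A = rng.normal(size=(n_nodes, n_actuators))      # influence functions
true_cmd = rng.normal(size=n_actuators)
deformation = A @ true_cmd                       # "measured" figure error

# Actuator commands minimizing ||A x + deformation||
cmd, *_ = np.linalg.lstsq(A, -deformation, rcond=None)
residual = np.linalg.norm(A @ cmd + deformation)
```

Because the synthetic deformation lies in the span of the influence functions, the residual figure error is essentially zero; real mirrors leave an uncorrectable component.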

  14. WE-AB-303-08: Direct Lung Tumor Tracking Using Short Imaging Arcs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Huang, C; Keall, P

    2015-06-15

    Purpose: Most current tumor tracking technologies rely on implanted markers, which suffer from the potential toxicity of marker placement and mis-targeting due to marker migration. Several markerless tracking methods have been proposed; these are either indirect methods or have difficulty tracking lung tumors in most clinical cases due to overlapping anatomies in 2D projection images. We propose a direct lung tumor tracking algorithm robust to overlapping anatomies using short imaging arcs. Methods: The proposed algorithm tracks the tumor based on kV projections acquired within the latest six-degree imaging arc. To account for respiratory motion, an external motion surrogate is used to select projections of the same phase within the latest arc. For each arc, the pre-treatment 4D cone-beam CT (CBCT) with tumor contours is used to estimate and remove the contribution to the integral attenuation from surrounding anatomies. The position of the tumor model extracted from 4D CBCT of the same phase is then optimized to match the processed projections using the conjugate gradient method. The algorithm was retrospectively validated on two kV scans of a lung cancer patient with implanted fiducial markers. This patient was selected because the tumor is attached to the mediastinum, representing a challenging case for markerless tracking methods. The tracking results were converted to expected marker positions and compared with marker trajectories obtained via direct marker segmentation (ground truth). Results: The root-mean-squared errors of tracking were 0.8 mm and 0.9 mm in the superior-inferior direction for the two scans. Tracking error was below 2 mm and 3 mm for 90% and 98% of the time, respectively. Conclusions: A direct lung tumor tracking algorithm robust to overlapping anatomies was proposed and validated on two scans of a lung cancer patient. Sub-millimeter tracking accuracy was observed, indicating the potential of this algorithm for real-time guidance applications.

  15. Investigation of a subsonic-arc-attachment thruster using segmented anodes

    NASA Technical Reports Server (NTRS)

    Berns, Darren H.; Sankovic, John M.; Sarmiento, Charles J.

    1993-01-01

    To investigate high frequency arc instabilities observed in subsonic-arc-attachment thrusters, a 3 kW, segmented-anode arcjet was designed and tested using hydrogen as the propellant. The thruster nozzle geometry was scaled from a 30 kW design previously tested in the 1960s. By observing the current to each segment and the arc voltage, it was determined that the 75-200 kHz instabilities were results of axial movements of the arc anode attachment point. The arc attachment point was fully contained in the subsonic portion of the nozzle for nearly all flow rates. The effects of isolating selected segments were investigated. In some cases, forcing the arc downstream caused the restrike to cease. Finally, decreasing the background pressure from 18 Pa to 0.05 Pa affected the pressure distribution in the nozzle, including the pressure in the subsonic arc chamber.

  16. Petrologic, tectonic, and metallogenic evolution of the southern segment of the ancestral Cascades magmatic arc, California and Nevada

    USGS Publications Warehouse

    du Bray, Edward A.; John, David A.; Cousens, Brian L.

    2013-01-01

    Although rocks in the two arc segments have similar metal abundances, they are metallogenically distinct. Small porphyry copper deposits are characteristic of the northern segment whereas significant epithermal precious metal deposits are most commonly associated with the southern segment. These metallogenic differences are also fundamentally linked to the tectonic settings and crustal regimes within which these two arc segments evolved.

  18. Fault model of the 2014 Cephalonia seismic sequence - Evidence of spatiotemporal fault segmentation along the NW edge of Aegean Arc

    NASA Astrophysics Data System (ADS)

    Saltogianni, Vasso; Moschas, Fanis; Stiros, Stathis

    2017-04-01

    Finite fault models (FFM) are presented for the two main shocks of the 2014 Cephalonia (Ionian Sea, Greece) seismic sequence (M ~6.0), which produced extreme peak ground accelerations (~0.7 g) in the west edge of the Aegean Arc, an area in which the poor coverage by seismological and GPS/InSAR data makes FFM a real challenge. Modeling was based on co-seismic GPS data and on the recently introduced TOPological INVersion algorithm. The latter is a novel uniform grid search-based technique in n-dimensional spaces that is based on the concept of stochastic variables and can identify multiple unconstrained ("free") solutions in a specified search space. The derived FFMs for the 2014 earthquakes correspond to an essentially strike-slip fault and to part of a shallow thrust, the surface projections of which run roughly along the west coast of Cephalonia. Both faults correlate with pre-existing faults. The 2014 faults, in combination with the faults of the 2003 and 2015 Leucas earthquakes farther NE, form a string of oblique-slip, partly overlapping fault segments with variable geometric and kinematic characteristics along the NW edge of the Aegean Arc. This composite fault, usually regarded as the Cephalonia Transform Fault, accommodates shear along this part of the Arc. Because of the highly fragmented crust, dominated by major thrusts in this area, fault activity is associated with ~20 km long segments and magnitude 6.0-6.5 earthquakes recurring in intervals of a few seconds to 10 years.
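A uniform grid search of the kind described, keeping every parameter combination consistent with the observations within their uncertainties so that multiple disjoint solutions can survive, can be sketched as follows. The forward model, grid, and tolerances are invented for illustration; this is not the TOPological INVersion algorithm itself:

```python
import numpy as np

# Uniform grid-search sketch: scan a parameter grid and keep every point
# whose predicted observations fall within the stated uncertainties, so
# the result may be a set of solutions rather than a single optimum.

def grid_search(forward, obs, sigma, grids):
    """Return all grid points consistent with obs +/- sigma."""
    mesh = np.meshgrid(*grids, indexing="ij")
    pts = np.stack([m.ravel() for m in mesh], axis=-1)
    return [p for p in pts
            if np.all(np.abs(forward(p) - obs) <= sigma)]

# Toy forward model: two stations observing displacement from two parameters
forward = lambda p: np.array([2.0 * p[0], 0.5 * p[0] + p[1]])
obs = np.array([4.0, 3.0])
sols = grid_search(forward, obs, sigma=np.array([0.15, 0.15]),
                   grids=[np.linspace(0, 4, 41), np.linspace(0, 4, 41)])
```

Here the first observation pins the first parameter to one grid value, while the second admits a small band of solutions, illustrating how the method exposes under-constrained directions.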

  19. History of Satellite Orbit Determination at NSWCDD

    DTIC Science & Technology

    2018-01-31

    ...run. Segment 40 did pass editing and its use was optional after Segment 20. Segment 30 needed to be run before Segment 80. Segment 70 was run as... control cards required to run the program. These included a CHARGE card related to usage charges and various REQUEST, ATTACH, and CATALOG cards... each) could be done in a single run after the long-arc solution had converged. These short arcs used the pass matrices from the long-arc run in their...

  20. SU-E-T-508: End to End Testing of a Prototype Eclipse Module for Planning Modulated Arc Therapy On the Siemens Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, L; Sarkar, V; Spiessens, S

    2014-06-01

    Purpose: The latest clinical implementation of the Siemens Artiste linac allows for delivery of modulated arcs (mARC) using full-field flattening filter free (FFF) photon beams. The maximum dose rate of 2000 MU/min is well suited for high dose treatments such as SBRT. We tested and report on the performance of a prototype Eclipse TPS module supporting mARC capability on the Artiste platform. Method: Our spine SBRT patients originally treated with 12/13-field static-gantry IMRT (SGIMRT) were chosen for this study. These plans were designed to satisfy RTOG 0631 guidelines with a prescription of 16 Gy in a single fraction. The cases were re-planned as mARC plans in the prototype Eclipse module using the 7 MV FFF beam and required to satisfy RTOG 0631 requirements. All plans were transferred from Eclipse, delivered on a Siemens Artiste linac, and dose-validated using the Delta4 system. Results: All treatment plans were straightforwardly developed, in timely fashion, without challenge or inefficiency using the prototype module. Due to the limited number of segments in a single arc, mARC plans required 2-3 full arcs to yield plan quality comparable to SGIMRT plans containing over 250 total segments. The average (3%/3 mm) gamma pass rate for all arcs was 98.5 ± 1.1%, demonstrating both excellent dose prediction by the AAA dose algorithm and excellent delivery fidelity. Mean delivery times for the mARC plans (10.5 ± 1.7 min) were 50-70% lower than for the SGIMRT plans (26 ± 2 min), with both delivered at 2000 MU/min. Conclusion: A prototype Eclipse module capable of planning Burst Mode modulated arc delivery on the Artiste platform has been tested and found to perform efficiently and accurately for treatment plan development and delivered-dose prediction. Further investigation of more treatment sites is being carried out and data will be presented.

  1. Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Li, Yan; Shekhar, Raj; Huang, David

    2002-05-01

    Optical coherence tomography (OCT) provides a non-contact and non-invasive means to visualize the corneal anatomy at micron scale resolution. We obtained corneal images from an arc-scanning (converging) OCT system operating at a wavelength of 830nm and a fan-shaped-scanning high-speed OCT system with an operating wavelength of 1310nm. Different scan protocols (arc/fan) and data acquisition rates, as well as wavelength dependent bio-tissue backscatter contrast and optical absorption, make the images acquired using the two systems different. We developed image-processing algorithms to automatically detect the air-tear interface, epithelium-Bowman's layer interface, laser in-situ keratomileusis (LASIK) flap interface, and the cornea-aqueous interface in both kinds of images. The overall segmentation scheme for 830nm and 1310nm OCT images was similar, although different strategies were adopted for specific processing approaches. Ultrasound pachymetry measurements of the corneal thickness and Placido-ring based corneal topography measurements of the corneal curvature were made on the same day as the OCT examination. Anterior/posterior corneal surface curvature measurement with OCT was also investigated. Results showed that automated segmentation of OCT images could evaluate anatomic outcome of LASIK surgery.
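Interface detection along a single OCT A-scan is often based on strong axial intensity gradients. A simplified sketch under that assumption (illustrative, not the paper's processing pipeline):

```python
import numpy as np

# Interfaces such as the air-tear boundary appear as sharp intensity rises
# along an A-scan, so the largest positive axial gradients are natural
# boundary candidates; a minimum separation suppresses duplicate picks.

def detect_interfaces(a_scan, n_interfaces, min_sep=5):
    """Indices of the strongest positive gradients, kept `min_sep` apart."""
    grad = np.diff(a_scan.astype(float))
    order = np.argsort(grad)[::-1]        # strongest rises first
    picks = []
    for idx in order:
        if all(abs(idx - p) >= min_sep for p in picks):
            picks.append(int(idx))
        if len(picks) == n_interfaces:
            break
    return sorted(picks)

# Synthetic A-scan: air (0), epithelium (80), stroma (50), aqueous (10);
# only the air-tissue boundary is a rise, the deeper ones are falls
a_scan = np.concatenate([np.zeros(20), np.full(15, 80.0),
                         np.full(40, 50.0), np.full(25, 10.0)])
edges = detect_interfaces(a_scan, n_interfaces=1)
```

Real pipelines add denoising and wavelength-specific contrast handling, which is why the paper adopts different strategies for the 830 nm and 1310 nm images.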

  2. Some methods of encoding simple visual images for use with a sparse distributed memory, with applications to character recognition

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1989-01-01

    To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images: those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment of an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M, one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
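An "approximately continuous" bit-string encoding of the kind described can be sketched as a thermometer code, in which nearby parameter values share most of their bits. The parameter ranges and bit counts below are illustrative, and a plain thermometer code does not wrap around for circular quantities like angle:

```python
# Thermometer-code sketch: a value maps to a prefix of ones whose length
# grows with the value, so Hamming distance between codes tracks the
# numeric difference, giving similar arcs similar bit strings.

def thermometer_encode(value, lo, hi, n_bits):
    """Map value in [lo, hi] to n_bits with a ones-prefix of proportional
    length."""
    level = round((value - lo) / (hi - lo) * n_bits)
    level = min(max(level, 0), n_bits)
    return [1] * level + [0] * (n_bits - level)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# One of the five arc parameters, e.g. an angle in [0, 360) degrees
e1 = thermometer_encode(90.0, 0.0, 360.0, 16)
e2 = thermometer_encode(100.0, 0.0, 360.0, 16)
e3 = thermometer_encode(270.0, 0.0, 360.0, 16)
```

Codes for 90° and 100° are nearly identical, while 270° differs in half the bits, matching the paper's requirement that similar arcs receive similar representations.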

  3. Geochemical Database for Igneous Rocks of the Ancestral Cascades Arc - Southern Segment, California and Nevada

    USGS Publications Warehouse

    du Bray, Edward A.; John, David A.; Putirka, Keith; Cousens, Brian L.

    2009-01-01

    Volcanic rocks that form the southern segment of the Cascades magmatic arc are an important manifestation of Cenozoic subduction and associated magmatism in western North America. Until recently, these rocks had been little studied and no systematic compilation of existing composition data had been assembled. This report is a compilation of all available chemical data for igneous rocks that constitute the southern segment of the ancestral Cascades magmatic arc and complements a previously completed companion compilation that pertains to rocks that constitute the northern segment of the arc. Data for more than 2,000 samples from a diversity of sources were identified and incorporated in the database. The association between these igneous rocks and spatially and temporally associated mineral deposits is well established and suggests a probable genetic relationship. The ultimate goal of the related research is an evaluation of the time-space-compositional evolution of magmatism associated with the southern Cascades arc segment and identification of genetic associations between magmatism and mineral deposits in this region.

  4. Arc-Second Alignment of International X-Ray Observatory Mirror Segments in a Fixed Structure

    NASA Technical Reports Server (NTRS)

    Evans, Tyler C.; Chan, Kai-Wing; Saha, Timo T.

    2010-01-01

    The optics for the International X-Ray Observatory (IXO) require alignment and integration of about fourteen thousand thin mirror segments to achieve the mission goal of 3.0 square meters of effective area at 1.25 keV with an angular resolution of five arc-seconds. These mirror segments are 0.4 mm thick and 200 to 400 mm in size, which makes the strict 5 arc-second angular resolution requirement for the telescope difficult to meet. This paper outlines the precise alignment, verification testing, and permanent bonding techniques developed at NASA's Goddard Space Flight Center (GSFC). These techniques are used to overcome the challenge of transferring thin mirror segments from a temporary mount to a fixed structure with arc-second alignment and minimal figure distortion. Recent advances in technology development, together with the automation of several processes, have produced significant results. In particular, advances in the mirror fixturing process known as the suspension mount have allowed a mirror to be mounted to a fixture with minimal distortion. Once on the fixture, mirror segments have been aligned to around 5 arc-seconds, which is halfway to the goal of 2.5 arc-seconds per mirror segment. This paper will highlight the recent advances in alignment, testing, and permanent bonding techniques as well as the results they have produced.

  5. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order in space complexity as the earlier algorithms. In this paper, the authors give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency must in the worst case take O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency, and show that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
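
    To make "enforcing arc consistency" concrete, here is a sequential AC-3-style sketch. The record concerns AC-4 and its parallel variants, which use different bookkeeping to reach optimal complexity; this toy only shows what the consistency condition itself means.

```python
from collections import deque

def ac3(domains, constraints):
    """Sequential AC-3 arc consistency (illustrative; not the record's AC-4).

    domains: dict node -> set of candidate labels
    constraints: dict (x, y) -> predicate(vx, vy) that must hold
    Repeatedly removes labels with no supporting label at the other end
    of an arc, until every arc is consistent.
    """
    queue = deque(constraints)                  # all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        unsupported = {vx for vx in domains[x]
                       if not any(pred(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            # x's domain shrank, so arcs pointing at x must be rechecked
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return domains

# Toy CSP: x < y with both domains {1, 2, 3}.
cons = {('x', 'y'): lambda a, b: a < b,
        ('y', 'x'): lambda a, b: b < a}
doms = ac3({'x': {1, 2, 3}, 'y': {1, 2, 3}}, cons)
print(doms)  # {'x': {1, 2}, 'y': {2, 3}}
```

    The worst-case sequential behavior this loop exhibits is exactly what the paper's O(na) lower bound for parallel algorithms speaks to.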

  6. The ArcB Leucine Zipper Domain Is Required for Proper ArcB Signaling

    PubMed Central

    Nuñez Oreza, Luis Alberto; Alvarez, Adrián F.; Arias-Olguín, Imilla I.; Torres Larios, Alfredo; Georgellis, Dimitris

    2012-01-01

    The Arc two-component system modulates the expression of numerous genes in response to respiratory growth conditions. This system comprises ArcA as the response regulator and ArcB as the sensor kinase. ArcB is a tripartite histidine kinase whose activity is regulated by the oxidation of two cytosol-located redox-active cysteine residues that participate in intermolecular disulfide bond formation. Here, we report that the ArcB protein segment covering residues 70–121 fulfills the molecular characteristics of a leucine zipper-containing coiled-coil structure. Also, mutational analyses of this segment reveal three different phenotypic effects distributed along the coiled-coil structure of ArcB, demonstrating that this motif is essential for proper ArcB signaling. PMID:22666479

  7. Riparian Land Use/Land Cover Data for Three Study Units in Group II of the Nutrient Enrichment Effects Topical Study of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Johnson, Michaela R.; Clark, Jimmy M.; Dickinson, Ross G.; Sanocki, Chris A.; Tranmer, Andrew W.

    2009-01-01

    This data set was developed as part of the National Water-Quality Assessment (NAWQA) Program, Nutrient Enrichment Effects Topical (NEET) study. This report is concerned with three of the eight NEET study units distributed across the United States: Ozark Plateaus, Upper Mississippi River Basin, and Upper Snake River Basin, collectively known as Group II of the NEET study. Ninety stream reaches were investigated during 2006-08 in these three study units. Stream segments, with lengths equal to the base-10 logarithm of the basin area, were delineated upstream from the stream reaches through the use of digital orthophoto quarter-quadrangle (DOQQ) imagery. The analysis area for each stream segment was defined by a streamside buffer extending laterally to 250 meters from the stream segment. Delineation of land-use and land-cover (LULC) map units within stream-segment buffers was completed using on-screen digitizing of riparian LULC classes interpreted from the DOQQ. LULC units were classified using a strategy consisting of nine classes. National Wetlands Inventory (NWI) data were used to aid in wetland classification. Longitudinal riparian transects (lines offset from the stream segments) were generated digitally, used to sample the LULC maps, and partitioned in accord with the intersected LULC map-unit types. These longitudinal samples yielded the relative linear extent and sequence of each LULC type within the riparian zone at the segment scale. The resulting areal and linear estimates of LULC extent filled in the spatial-scale gap between the 30-meter resolution of the 1990s National Land Cover Dataset and the reach-level habitat assessment data collected onsite routinely for NAWQA ecological sampling.
The resulting data consisted of 12 geospatial data sets: LULC within 25 meters of the stream reach (polygon); LULC within 50 meters of the stream reach (polygon); LULC within 50 meters of the stream segment (polygon); LULC within 100 meters of the stream segment (polygon); LULC within 150 meters of the stream segment (polygon); LULC within 250 meters of the stream segment (polygon); frequency of gaps in woody vegetation at the reach scale (arc); stream reaches (arc); longitudinal LULC transect sample at the reach scale (arc); frequency of gaps in woody vegetation at the segment scale (arc); stream segments (arc); and longitudinal LULC transect sample at the segment scale (arc).

  8. Automated Quantification of Arbitrary Arm-Segment Structure in Spiral Galaxies

    NASA Astrophysics Data System (ADS)

    Davis, Darren Robert

    This thesis describes a system that, given approximately centered images of spiral galaxies, produces quantitative descriptions of spiral galaxy structure without the need for per-image human input. This structure information consists of a list of spiral arm segments, each associated with a fitted logarithmic spiral arc and a pixel region. This list-of-arcs representation allows description of arbitrary spiral galaxy structure: the arms do not need to be symmetric, may have forks or bends, and, more generally, may be arranged in any manner with a consistent spiral-pattern center (non-merging galaxies have a sufficiently well-defined center). Such flexibility is important in order to accommodate the myriad structure variations observed in spiral galaxies. From the arcs produced by our method it is possible to calculate measures of spiral galaxy structure such as winding direction, winding tightness, arm counts, asymmetry, or other values of interest (including user-defined measures). In addition to providing information about the spiral arm "skeleton" of each galaxy, our method can enable analyses of brightness within individual spiral arms, since we provide the pixel regions associated with each spiral arm segment. For winding direction, arm tightness, and arm count, comparable information is available (to various extents) from previous efforts; to the extent that such information is available, we find strong correspondence with our output. We also characterize the changes to (and invariances in) our output as a function of modifications to important algorithm parameters. By enabling generation of extensive data about spiral galaxy structure from large-scale sky surveys, our method will enable new discoveries and tests regarding the nature of galaxies and the universe, and will facilitate subsequent work to automatically fit detailed brightness models of spiral galaxies.

  9. Arc segmentation and seismicity in the Solomon Islands arc, SW Pacific

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chu; Frohlich, Cliff; Taylor, Frederick W.; Burr, George; van Ufford, Andrew Quarles

    2011-07-01

    This paper evaluates neotectonic segmentation in the Solomon Islands forearc, and considers how it relates to regional tectonic evolution and the extent of ruptures of large megathrust earthquakes. We first consider regional geomorphology and Quaternary vertical displacements, especially uplifted coral reef terraces. Then we consider geographic seismicity patterns, aftershock areas and vertical displacements for large earthquakes, focal mechanisms, and along-arc variations in seismic moment release to evaluate the relationship between neotectonically defined segments and seismicity. Notably, one major limitation of using seismicity to evaluate arc segmentation is the difficulty of accurately defining earthquake rupture zones. For example, shoreline uplifts associated with the 1 April 2007 Mw 8.1 Western Solomons earthquake indicate that the along-arc extent of rupture was about 50 km smaller than the aftershock area. Thus if we had relied on aftershocks alone to identify the 2007 rupture zone, as we do for most historical earthquakes, we would have missed the rupture's relationship to a major morphologic feature. In many cases, the imprecision of defining rupture zones without surface deformation data may be largely responsible for the poor matches to neotectonic boundaries. However, when a precise paleoseismic vertical deformation history is absent, aftershocks are often the best available tool for inferring rupture geometries. Altogether we identify 16 segments in the Solomon Islands. These comprise three major tectonic regimes or supersegments that correspond respectively to the forearc areas of Guadalcanal-Makira, the New Georgia island group, and Bougainville Islands. Subduction of the young and relatively shallow and buoyant Woodlark Basin and spreading system distinguishes the central New Georgia supersegment from the two neighboring supersegments.
The physiographic expression of the San Cristobal trench is largely absent, but bathymetric mapping of the surface trace of the interplate thrust zone defines it adequately. The New Georgia supersegment has smaller arc segments, and more islands due to general late Quaternary forearc uplift very close to the trench where vertical displacement rates tend to be faster; prior to the 2007 earthquake it had much lower rates of seismic activity than the neighboring supersegments. Generally the mean along-arc lateral extent of Solomon arc segments is about 75 km, somewhat smaller than the segments reported in some other island arcs such as Japan (~100-260 km), but larger than those of the Tonga (30-80 km) and Central New Hebrides arcs (30-110 km). These differences may be real, or they may arise simply because the coral-friendly tropical environment of the South Pacific arcs, numerous emerged forearc islands, and high seismicity rates provide an unusually favorable situation for observing variations in vertical tectonic activity and thus for identifying segment boundaries. Over the past century seismic slip in the Solomons, as indicated by seismic moment release, has corresponded to about half the plate convergence rate; however, there are notable variations along the arc. Even with the 2007 earthquake, the long-term moment release rate in the New Georgia supersegment is relatively low, and this may indicate that large earthquakes are imminent.

  10. Evaluation of the clinical usefulness of modulated arc treatment

    NASA Astrophysics Data System (ADS)

    Lee, Young Kyu; Jang, Hong Seok; Kim, Yeon Sil; Choi, Byung Ock; Kang, Young-Nam; Nam, Sang Hee; Park, Hyeong Wook; Kim, Shin Wook; Shin, Hun Joo; Lee, Jae Choon; Kim, Ji Na; Park, Sung Kwang; Kim, Jin Young

    2015-07-01

    The purpose of this study is to evaluate the clinical usefulness of modulated arc (mARC) treatment techniques. mARC treatment plans for non-small-cell lung cancer (NSCLC) patients were made in order to verify the clinical usefulness of mARC. A pre-study was conducted to find the best plan conditions for mARC treatment, and the usefulness of the mARC treatment plan was evaluated by comparing it with other arc treatment plans, namely tomotherapy and RapidArc plans. The optimal condition for the mARC plan was determined by comparing the dosimetric performance of mARC plans developed using various parameters, which included the photon energy (6 MV, 10 MV), the optimization point angle (6°-10° intervals), and the total number of segments (36-59 segments). The best dosimetric performance of mARC was observed at a 10 MV photon energy, a point angle of 6°, and 59 segments. The treatment plans for the three different techniques were compared using the following parameters: the conformity index (CI), the homogeneity index (HI), the target coverage, the dose to the OARs, the number of monitor units (MU), the beam-on time, and the normal tissue complication probability (NTCP). As a result, the three different treatment techniques showed similar target coverages. The mARC plan had the lowest V20 (volume of lung receiving > 20 Gy) and MU per fraction compared with both the RapidArc and tomotherapy plans. The mARC plan reduced the beam-on time as well. Therefore, the results of this study provide satisfactory evidence that the mARC technique can be considered a useful clinical technique for radiation treatment.
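
    For readers unfamiliar with the plan-quality indices mentioned, the sketch below shows one common convention for each (an ICRU-83-style HI and an RTOG-style CI). The record does not state which definitions the authors used, so these are illustrative assumptions on synthetic dose values.

```python
import numpy as np

def homogeneity_index(target_dose):
    """ICRU-83-style HI = (D2% - D98%) / D50%, where Dx% is the dose
    received by the hottest x% of the target volume (one common
    convention; not necessarily the one used in the record)."""
    d2, d50, d98 = np.percentile(target_dose, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(dose, target_mask, rx):
    """RTOG-style CI: volume receiving at least the prescription dose rx,
    divided by the target volume."""
    return float((dose >= rx).sum()) / float(target_mask.sum())

rng = np.random.default_rng(0)
target = rng.normal(60.0, 1.0, size=10_000)   # synthetic target doses (Gy)
body = np.concatenate([target, rng.normal(30.0, 5.0, size=5_000)])

print(round(homogeneity_index(target), 2))                      # ~0.07
print(round(conformity_index(body, np.ones(10_000), 57.0), 2))  # ~1.0
```

    Lower HI means a more uniform target dose, and CI near 1 means the prescription isodose tightly conforms to the target, which is the sense in which the three techniques were compared.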

  11. Mimicking human expert interpretation of remotely sensed raster imagery by using a novel segmentation analysis within ArcGIS

    NASA Astrophysics Data System (ADS)

    Le Bas, Tim; Scarth, Anthony; Bunting, Peter

    2015-04-01

    Traditional computer-based methods for the interpretation of remotely sensed imagery use each pixel individually, or the average of a small window of pixels, to calculate a class or thematic value, which provides an interpretation. When a human expert interprets imagery, however, the human eye is excellent at finding coherent and homogeneous areas and edge features, so it may be advantageous for computer analysis to mimic human interpretation. A new toolbox for ArcGIS 10.x will be presented that segments the data layers into a set of polygons. Each polygon is defined by a K-means clustering and region-growing algorithm, thus finding areas, their edges, and any lineations in the imagery. Attached to each polygon are the characteristics of the imagery within it, such as the mean and standard deviation of the pixel values. The segmentation of imagery into a jigsaw of polygons also has the advantage that the human interpreter does not need to spend hours digitising the boundaries. The segmentation process has been taken from the RSGIS library of analysis and classification routines (Bunting et al., 2014). These routines are freeware and have been modified to be available in the ArcToolbox under the Windows (v7) operating system. Input to the segmentation process is a multi-layered raster image, for example a Landsat image or a set of raster datasets made up from derivatives of topography. The size and number of polygons are set by the user and are dependent on the imagery used. Examples will be presented of data from the marine environment utilising bathymetric depth, slope, rugosity and backscatter from a multibeam system. Meaningful classification of the polygons using their numerical characteristics is the next goal. Object based image analysis (OBIA) should help this workflow. Fully calibrated imagery systems will allow numerical classification to be translated into more readily understandable terms. Reference: Bunting, P., Clewley, D., Lucas, R. M., and Gillingham, S., 2014. The Remote Sensing and GIS Software Library (RSGISLib), Computers & Geosciences, 62, 216-226. http://dx.doi.org/10.1016/j.cageo.2013.08.007
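
    The clustering-then-region-growing pipeline described above can be sketched in miniature: classify pixel values, then give every contiguous patch of one class its own region id. This single-band toy is an illustration only; the RSGISLib routines behind the ArcGIS toolbox operate on multi-layer rasters and are far more capable.

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Minimal single-band K-means on per-pixel values (illustrative)."""
    centers = np.percentile(values, np.linspace(0, 100, k))  # spread-out init
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

def grow_regions(label_img):
    """4-connected region growing: each homogeneous patch gets its own id,
    which is what turns a classified raster into polygon-like segments."""
    h, w = label_img.shape
    region = -np.ones((h, w), dtype=int)
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if region[sy, sx] >= 0:
                continue
            cls, stack = label_img[sy, sx], [(sy, sx)]
            region[sy, sx] = next_id
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and region[ny, nx] < 0
                            and label_img[ny, nx] == cls):
                        region[ny, nx] = next_id
                        stack.append((ny, nx))
            next_id += 1
    return region, next_id

# Synthetic raster: two flat zones, e.g. low and high backscatter.
img = np.zeros((6, 8))
img[:, 4:] = 10.0
labels = kmeans_1d(img.ravel(), k=2)
regions, n = grow_regions(labels.reshape(img.shape))
print(n)  # 2 contiguous segments
```

    In the toolbox, each resulting region would then be vectorised to a polygon carrying statistics (mean, standard deviation) for later OBIA-style classification.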

  12. Cerebrovascular plaque segmentation using object class uncertainty snake in MR images

    NASA Astrophysics Data System (ADS)

    Das, Bipul; Saha, Punam K.; Wolf, Ronald; Song, Hee Kwon; Wright, Alexander C.; Wehrli, Felix W.

    2005-04-01

    Atherosclerotic cerebrovascular disease leads to formation of lipid-laden plaques that can form emboli when ruptured, causing blockage of cerebral vessels. The clinical manifestation of this event sequence is stroke, a leading cause of disability and death. In vivo MR imaging provides detailed images of the vascular architecture of the carotid artery, making it suitable for analysis of morphological features. Assessing the status of the carotid arteries that supply blood to the brain is of primary interest to such investigations. Reproducible quantification of carotid artery dimensions in MR images is essential for plaque analysis. Manual segmentation, at present the only method available, is time consuming and sensitive to inter- and intra-observer variability. This paper presents a deformable model for lumen and vessel wall segmentation of the carotid artery from MR images. The major challenges of carotid artery segmentation are (a) low signal-to-noise ratio, (b) background intensity inhomogeneity, and (c) indistinct inner and/or outer vessel walls. We propose a new, effective object-class uncertainty based deformable model with additional features tailored toward this specific application. Object-class uncertainty optimally utilizes MR intensity characteristics of various anatomic entities, enabling the snake to avert leakage through fuzzy boundaries. To strengthen the deformable model for this application, further properties are attributed to it in the form of (1) fully arc-based deformation using a Gaussian model to maximally exploit vessel wall smoothness, (2) construction of a forbidden region for outer-wall segmentation to reduce interference by prominent lumen features, and (3) arc-based landmarks for efficient user interaction. The algorithm has been tested on T1- and PD-weighted images. Measures of lumen area and vessel wall area are computed from segmented data of 10 patient MR images and their accuracy and reproducibility are examined.
These results correspond exceptionally well with manual segmentation completed by radiology experts. Reproducibility of the proposed method is estimated for both intra- and inter-operator studies.

  13. Riparian Land Use/Land Cover Data for Five Study Units in the Nutrient Enrichment Effects Topical Study of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    Johnson, Michaela R.; Buell, Gary R.; Kim, Moon H.; Nardi, Mark R.

    2007-01-01

    This dataset was developed as part of the National Water-Quality Assessment (NAWQA) Program, Nutrient Enrichment Effects Topical (NEET) study for five study units distributed across the United States: Apalachicola-Chattahoochee-Flint River Basin, Central Columbia Plateau-Yakima River Basin, Central Nebraska Basins, Potomac River Basin and Delmarva Peninsula, and White, Great and Little Miami River Basins. One hundred forty-three stream reaches were examined as part of the NEET study conducted in 2003-04. Stream segments, with lengths equal to the logarithm of the basin area, were delineated upstream from the downstream ends of the stream reaches with the use of digital orthophoto quarter quadrangles (DOQQ) or selected from the high-resolution National Hydrography Dataset (NHD). Use of the NHD was necessary when the stream was not distinguishable in the DOQQ because of dense tree canopy. The analysis area for each stream segment was defined by a buffer extending laterally to 250 meters from the stream segment. Delineation of land use/land cover (LULC) map units within stream segment buffers was conducted using on-screen digitizing of riparian LULC classes interpreted from the DOQQ. LULC units were mapped using a classification strategy consisting of nine classes. National Wetlands Inventory (NWI) data were used to aid in wetland classification. Longitudinal transect sampling lines offset from the stream segments were generated and partitioned into the underlying LULC types. These longitudinal samples yielded the relative linear extent and sequence of each LULC type within the riparian zone at the segment scale. The resulting areal and linear LULC data filled in the spatial-scale gap between the 30-meter resolution of the National Land Cover Dataset and the reach-level habitat assessment data collected onsite routinely for NAWQA ecological sampling.
The final data consisted of 12 geospatial datasets: LULC within 25 meters of the stream reach (polygon); LULC within 50 meters of the stream reach (polygon); LULC within 50 meters of the stream segment (polygon); LULC within 100 meters of the stream segment (polygon); LULC within 150 meters of the stream segment (polygon); LULC within 250 meters of the stream segment (polygon); frequency of gaps in woody vegetation LULC at the reach scale (arc); stream reaches (arc); longitudinal LULC at the reach scale (arc); frequency of gaps in woody vegetation LULC at the segment scale (arc); stream segments (arc); and longitudinal LULC at the segment scale (arc).

  14. A power function profile of a ski jumping in-run hill.

    PubMed

    Zanevskyy, Ihor

    2011-01-01

    The aim of the research was to find a function for the curvilinear segment profile that would make it possible to avoid an instantaneous increase in curvature and to replace the circular-arc segment on the in-run of a ski jump without any correction of the angles of inclination or the lengths of the straight-line segments. The methods of analytical geometry and trigonometry were used to calculate an optimal in-run hill profile. There were two fundamental conditions of the model: smooth borders between the curvilinear segment and the straight-line segments of the in-run hill, and concavity of the curvilinear segment. Within the framework of this model, the problem has been solved with reasonable precision. Four functions for the curvilinear segment profile of the in-run hill were investigated: circle arc, inclined quadratic parabola, inclined cubic parabola, and power function. The application of a power function to the in-run profile satisfies the conditions for replacing a circle arc segment. Geometrical parameters of 38 modern ski jumps were investigated using the methods proposed.
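
    The curvature argument can be checked numerically: a circular arc meets a straight segment with curvature jumping abruptly from 0 to 1/R, whereas a power-function profile y = a·x^n with n > 2 starts with zero curvature that grows smoothly. The sketch below uses illustrative parameters, not values from the paper.

```python
import numpy as np

def curvature(y, x):
    """Curvature kappa = |y''| / (1 + y'^2)^(3/2) of a sampled profile."""
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    return np.abs(d2y) / (1.0 + dy**2) ** 1.5

# Junction at x = 0 between a straight approach (kappa = 0 for x < 0)
# and the curved segment (x > 0), sampled on a fine grid.
x = np.linspace(0.0, 1.0, 1001)
R, a, n = 5.0, 0.5, 3.0
circle = R - np.sqrt(R**2 - x**2)   # circular arc of radius R through origin
power = a * x**n                    # power-function profile

print(round(curvature(circle, x)[500], 3))  # 0.2 = 1/R everywhere on the arc
print(round(curvature(power, x)[10], 3))    # near 0 just past the junction
```

    The circular arc therefore imposes its full curvature (and the associated centripetal load on the jumper) instantaneously, which is exactly the discontinuity the power-function profile avoids.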

  15. Segmentation of megathrust rupture zones from fore-arc deformation patterns over hundreds to millions of years, Arauco peninsula, Chile

    NASA Astrophysics Data System (ADS)

    Melnick, Daniel; Bookhagen, Bodo; Strecker, Manfred R.; Echtler, Helmut P.

    2009-01-01

    This work explores the control of fore-arc structure on segmentation of megathrust earthquake ruptures using coastal geomorphic markers. The Arauco-Nahuelbuta region at the south-central Chile margin constitutes an anomalous fore-arc sector in terms of topography, geology, and exhumation, located within the overlap between the Concepción and Valdivia megathrust segments. This boundary, however, is based on only ~500 years of historical records. We integrate deformed marine terraces dated by cosmogenic nuclides, syntectonic sediments, published fission track data, seismic reflection profiles, and microseismicity to analyze this earthquake boundary over 10^2-10^6 years. Rapid exhumation of Nahuelbuta's dome-like core started at 4 ± 1.2 Ma, coeval with inversion of the adjacent Arauco basin resulting in emergence of the Arauco peninsula. Here, similarities between topography, spatiotemporal trends in fission track ages, Pliocene-Pleistocene growth strata, and folded marine terraces suggest that margin-parallel shortening has dominated since Pliocene time. This shortening likely results from translation of a fore-arc sliver or microplate, decoupled from South America by an intra-arc strike-slip fault. Microplate collision against a buttress leads to localized uplift at Arauco accrued by deep-seated reverse faults, as well as incipient oroclinal bending. The extent of the Valdivia segment, which ruptured last in 1960 with an Mw 9.5 event, equals that of the inferred microplate. We propose that the mechanical homogeneity of the fore-arc microplate delimits the Valdivia segment and that a marked discontinuity in the continental basement at Arauco acts as an inhomogeneous barrier controlling nucleation and propagation of 1960-type ruptures. As microplate-related deformation has occurred since the Pliocene, we propose that this earthquake boundary and the extent of the Valdivia segment are spatially stable seismotectonic features at the million-year scale.

  16. Practical sliced configuration spaces for curved planar pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, E.

    1999-01-01

    In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.

  17. Optimal graph search segmentation using arc-weighted graph for simultaneous surface detection of bladder and prostate.

    PubMed

    Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan

    2009-01-01

    We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface-interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. A soft a priori shape-compliance (smoothness) term is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be simultaneously achieved by solving a maximum flow problem based on an arc-weighted graph representation. By representing the segmentation problem in an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, which admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The result was quite encouraging in spite of the low saliency of the bladder and prostate in CT images.
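
    The maximum-flow computation at the heart of such graph-based segmentation can be illustrated with a toy Edmonds-Karp implementation. The paper's arc-weighted surface-detection graph is vastly larger and carefully constructed from image data; this only shows the flow solver whose minimum cut yields the globally optimal partition.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow via BFS augmenting paths.

    `cap` maps directed edges (u, v) to capacities. The minimum cut that
    certifies this flow is what separates "object" from "background"
    (or one surface from another) in graph-based segmentation.
    """
    flow = defaultdict(int)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v); adj[v].add(u)      # include reverse residual edges
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap.get((u, v), 0) - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                  # no augmenting path left
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(cap.get((u, v), 0) - flow[(u, v)] for u, v in path)
        for u, v in path:
            flow[(u, v)] += push
            flow[(v, u)] -= push          # residual capacity on reverse edge
        total += push

caps = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 't'): 2,
        ('b', 't'): 3, ('a', 'b'): 1}
print(max_flow(caps, 's', 't'))  # 5
```

    In the paper's setting the node and arc weights encode edge evidence and the soft shape-compliance penalties, so the one min-cut solves all interacting surfaces simultaneously.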

  18. Segmentation of the Himalayas as revealed by arc-parallel gravity anomalies.

    PubMed

    Hetényi, György; Cattin, Rodolphe; Berthet, Théo; Le Moigne, Nicolas; Chophel, Jamyang; Lechmann, Sarah; Hammer, Paul; Drukpa, Dowchu; Sapkota, Soma Nath; Gautier, Stéphanie; Thinley, Kinzang

    2016-09-21

    Lateral variations along the Himalayan arc are suggested by an increasing number of studies and carry important information about the orogen's segmentation. Here we compile the hitherto most complete land gravity dataset in the region, which enables the highest-resolution analysis currently plausible. To study lateral variations in collisional structure we compute arc-parallel gravity anomalies (APaGA) by subtracting the average arc-perpendicular profile from our dataset; we compute the same for topography (APaTA). We find no direct correlation between APaGA, APaTA, and background seismicity, as has been suggested in oceanic subduction contexts. In the Himalayas, APaTA mainly reflect relief and erosional effects, whereas APaGA reflect the deep structure of the orogen with clear lateral boundaries. Four segments are outlined and have disparate flexural geometry: NE India, Bhutan, Nepal & India until Dehradun, and NW India. The segment boundaries in the India plate are related to inherited structures, and the boundaries of the Shillong block are highlighted by seismic activity. We find that large earthquakes of the past millennium do not propagate across the segment boundaries defined by APaGA, which therefore seem to set limits for the potential rupture of megathrust earthquakes.
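
    Once the data are projected into arc coordinates, the APaGA construction reduces to removing the average arc-perpendicular profile, i.e. a column-mean subtraction. A minimal sketch with synthetic numbers (assuming a regular grid in arc coordinates, which the real compilation would first have to produce by projection and interpolation):

```python
import numpy as np

def arc_parallel_anomaly(grid):
    """Arc-parallel anomaly: subtract the average arc-perpendicular profile.

    `grid[i, j]` holds gravity at along-arc position i and across-arc
    position j. Removing the mean across-arc profile cancels the signal
    common to every section of the arc, leaving only the lateral
    (along-arc) variations, which is the APaGA idea in miniature.
    """
    mean_profile = grid.mean(axis=0)        # average over the along-arc axis
    return grid - mean_profile[None, :]

g = np.array([[10., 20., 30.],
              [12., 22., 32.],
              [ 8., 18., 28.]])
print(arc_parallel_anomaly(g)[:, 0])  # [0., 2., -2.]
```

    By construction the anomaly averages to zero along the arc, so any persistent positive or negative stretch marks a genuine lateral segment.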

  19. Segmentation of the Himalayas as revealed by arc-parallel gravity anomalies

    PubMed Central

    Hetényi, György; Cattin, Rodolphe; Berthet, Théo; Le Moigne, Nicolas; Chophel, Jamyang; Lechmann, Sarah; Hammer, Paul; Drukpa, Dowchu; Sapkota, Soma Nath; Gautier, Stéphanie; Thinley, Kinzang

    2016-01-01

    Lateral variations along the Himalayan arc are suggested by an increasing number of studies and carry important information about the orogen’s segmentation. Here we compile the hitherto most complete land gravity dataset in the region, which enables the highest-resolution analysis currently plausible. To study lateral variations in collisional structure we compute arc-parallel gravity anomalies (APaGA) by subtracting the average arc-perpendicular profile from our dataset; we compute the same for topography (APaTA). We find no direct correlation between APaGA, APaTA, and background seismicity, as has been suggested in oceanic subduction contexts. In the Himalayas, APaTA mainly reflect relief and erosional effects, whereas APaGA reflect the deep structure of the orogen with clear lateral boundaries. Four segments are outlined and have disparate flexural geometry: NE India, Bhutan, Nepal & India until Dehradun, and NW India. The segment boundaries in the India plate are related to inherited structures, and the boundaries of the Shillong block are highlighted by seismic activity. We find that large earthquakes of the past millennium do not propagate across the segment boundaries defined by APaGA, which therefore seem to set limits for the potential rupture of megathrust earthquakes. PMID:27649782

  20. Structural and functional changes associated with normal and abnormal fundus autofluorescence in patients with retinitis pigmentosa.

    PubMed

    Greenstein, Vivienne C; Duncker, Tobias; Holopigian, Karen; Carr, Ronald E; Greenberg, Jonathan P; Tsang, Stephen H; Hood, Donald C

    2012-02-01

    To analyze the structure and visual function of regions bordering the hyperautofluorescent ring/arcs in retinitis pigmentosa. Twenty-one retinitis pigmentosa patients (21 eyes) with rings/arcs and 21 normal individuals (21 eyes) were studied. Visual sensitivity in the central 10° was measured with microperimetry. Retinal structure was evaluated with spectral-domain optical coherence tomography. The distance from the fovea to disruption/loss of the inner segment/outer segment (IS/OS) junction and thicknesses of the total receptor plus retinal pigment epithelial complex and outer segment plus retinal pigment epithelial complex layers were measured. Results were compared with measurements of the distance from the fovea to the inner and outer borders of the ring/arc seen on fundus autofluorescence. Disruption/loss of the IS/OS junction occurred closer to the inner border of the ring/arc, and in eight eyes it was closer to the fovea. For 19 eyes, outer segment plus and receptor plus RPE complex thicknesses were significantly decreased at locations closer to the fovea than the appearance of the inner border of hyperautofluorescence. Mean visual sensitivity was decreased inside, across, and outside the ring/arc by 3.5 ± 3.8, 8.9 ± 4.8, and 17.0 ± 2.4 dB, respectively. Structural and functional changes can occur inside the hyperautofluorescent ring/arc in retinitis pigmentosa.

  1. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction.

    PubMed

    Wang, Jinke; Guo, Haoyan

    2016-01-01

    This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operation, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum-cost-path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm³, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and average time-cost 2 seconds per image. The preliminary results on accuracy and complexity show that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
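
    Overlap metrics of the kind reported above follow directly from binary voxel masks. The sketch below is illustrative only: the `overlap_metrics` name and the representation of segmentations as sets of voxel indices are assumptions, not the paper's implementation.

```python
def overlap_metrics(seg, ref, voxel_volume=1.0):
    """Volume difference (VD) and volume overlap error (VOE, percent)
    for two segmentations given as sets of voxel indices."""
    inter = len(seg & ref)
    union = len(seg | ref)
    voe = 100.0 * (1.0 - inter / union)          # 0% = perfect overlap
    vd = (len(seg) - len(ref)) * voxel_volume    # signed volume difference
    return vd, voe
```

    Note that VD is signed and can be near zero even when VOE is large, which is why both are reported together.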

  2. CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Cheung, Joey P.; McGuinness, Christopher; Solberg, Timothy D.

    2017-07-01

    The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.
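
    The conformity and heterogeneity indices used above for plan evaluation have simple standard forms. This sketch assumes the common RTOG-style definitions (CI = prescription isodose volume / target volume, HI = maximum target dose / prescription dose) rather than the paper's exact conventions, and the `plan_indices` name is invented for illustration.

```python
def plan_indices(dose, target, rx_dose):
    """dose: per-voxel doses; target: parallel booleans; rx_dose: prescription.

    Returns (conformity index, heterogeneity index) under RTOG-style
    definitions; CI = 1 and HI = 1 would be a perfectly conformal,
    perfectly homogeneous plan.
    """
    v_rx = sum(1 for d in dose if d >= rx_dose)   # prescription isodose volume (voxels)
    tv = sum(1 for t in target if t)              # target volume (voxels)
    ci = v_rx / tv
    hi = max(d for d, t in zip(dose, target) if t) / rx_dose
    return ci, hi
```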

  3. CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife.

    PubMed

    Kearney, Vasant; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D

    2017-06-26

    The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.

  4. Rapid growth of some major segments of continental crust

    NASA Astrophysics Data System (ADS)

    Reymer, Arthur; Schubert, Gerald

    1986-04-01

    Some major segments of continental crust display a narrow range of Sm-Nd crustal formation ages. The sizes of the Canadian shield, the Svecokarelian province of northern Europe, the west-central United States, and the Arabian-Nubian shield suggest rapid crustal growth. Island-arc accretion models rank among the most favored tectonic models for the formation of these areas. A quantitative comparison of the growth rates of these crustal segments to Mesozoic-Cenozoic arc-addition rates shows, however, that island-arc accretion alone seems insufficient to account for the amount of crust that was produced in each of these terrains. Additional mechanisms, such as hot-spot volcanism and underplating, may have been active alongside arc accretion. Alternatively, large amounts of preexisting basement may so far have gone undetected.

  5. Geologic field-trip guide to Mount Shasta Volcano, northern California

    USGS Publications Warehouse

    Christiansen, Robert L.; Calvert, Andrew T.; Grove, Timothy L.

    2017-08-18

    The southern part of the Cascades Arc formed in two distinct, extended periods of activity: “High Cascades” volcanoes erupted during about the past 6 million years and were built on a wider platform of Tertiary volcanoes and shallow plutons as old as about 30 Ma, generally called the “Western Cascades.” For the most part, the Shasta segment (for example, Hildreth, 2007; segment 4 of Guffanti and Weaver, 1988) of the arc forms a distinct, fairly narrow axis of short-lived small- to moderate-sized High Cascades volcanoes that erupted lavas, mainly of basaltic-andesite or low-silica-andesite compositions. Western Cascades rocks crop out only sparsely in the Shasta segment; almost all of the following descriptions are of High Cascades features except for a few unusual localities where older, Western Cascades rocks are exposed to view along the route of the field trip. The High Cascades arc axis in this segment of the arc is mainly a relatively narrow band of either monogenetic or short-lived shield volcanoes. The belt generally averages about 15 km wide and traverses the length of the Shasta segment, roughly 100 km between about the Klamath River drainage on the north, near the Oregon-California border, and the McCloud River drainage on the south (fig. 1). Superposed across this axis are two major long-lived stratovolcanoes and the large rear-arc Medicine Lake volcano. One of the stratovolcanoes, the Rainbow Mountain volcano of about 1.5–0.8 Ma, straddles the arc near the midpoint of the Shasta segment. The other, Mount Shasta itself, which ranges from about 700 ka to 0 ka, lies distinctly west of the High Cascades axis.
It is notable that Mount Shasta and Medicine Lake volcanoes, although volcanologically and petrologically quite different, span about the same range of ages and bracket the High Cascades axis on the west and east, respectively. The field trip begins near the southern end of the Shasta segment, where the Lassen Volcanic Center field trip leaves off, in a field of high-alumina olivine tholeiite lavas (HAOTs, referred to elsewhere in this guide as low-potassium olivine tholeiites, LKOTs). It proceeds around the southern, western, and northern flanks of Mount Shasta and onto a part of the arc axis. The stops feature elements of the Mount Shasta area in an approximately chronological order, from oldest to youngest.

  6. Laser direct writing of complex radially varying single-mode polymer waveguide structures

    NASA Astrophysics Data System (ADS)

    Kruse, Kevin; Peng, Jie; Middlebrook, Christopher T.

    2015-07-01

    Increasing board-to-board and chip-to-chip computational data rates beyond 12.5 Gb/s will require the use of single-mode polymer waveguides (WGs) that have high bandwidths and are able to be wavelength division multiplexed. Laser direct writing (LDW) of polymer WGs provides a scalable and reconfigurable maskless procedure compared to common photolithography fabrication. LDW of straight and radially curved WGs is readily achieved using predefined drive commands of the two-axis direct drive linear stage system. Using the laser direct write process for advanced WG structures requires stage-drive programming techniques that account for specified polymer material exposure durations. Incorporating advanced structures such as S-bends into single-mode polymer WG builds gives designers the ability to control pitch, manage optical coupling, and reduce footprint requirements. Fabrication of segmented radial arcs in single-mode polymer WGs is achieved through a user-programmed mathematical algorithm that defines smooth radial arcs. Cosine and raised-sine S-bends are realized through a segmentation method where the optimal incremental step length and bend dimensions are controlled to achieve minimal structure loss. Laser direct written S-bends are compared with previously published photolithographic S-bend results using theoretical bend loss models. Fabrication results show that LDW is a viable method in the fabrication of advanced polymer WG structures.
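
    A raised-sine S-bend of the kind segmented above has a standard closed form, y(x) = h·(x/L − sin(2πx/L)/2π). The sketch below (function name and parametrization are assumptions, not the authors' stage-drive code) generates the incremental waypoints such a segmentation would step through:

```python
import math

def raised_sine_sbend(length, offset, n_seg):
    """Waypoints of a raised-sine S-bend of given length and lateral offset,
    sampled at n_seg equal increments along the propagation axis."""
    pts = []
    for i in range(n_seg + 1):
        x = length * i / n_seg
        y = offset * (x / length - math.sin(2 * math.pi * x / length) / (2 * math.pi))
        pts.append((x, y))
    return pts
```

    The curve leaves and enters with zero slope (its derivative is proportional to 1 − cos(2πx/L)), which is what keeps junction loss low; shrinking the incremental step trades write time against segmentation loss.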

  7. Sensoring fusion data from the optic and acoustic emissions of electric arcs in the GMAW-S process for welding quality assessment.

    PubMed

    Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca

    2012-01-01

    The present study shows the relationship between welding quality and optical-acoustic emissions from electric arcs, during welding runs, in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs interferences were induced on the welding path using paint, grease or gas faults. In each welding run arc voltage, welding current, infrared and acoustic emission values were acquired, and parameters such as arc power, acoustic peak rate and infrared radiation rate were computed. Data fusion algorithms were developed by assessing known welding quality parameters from arc emissions. These algorithms showed better responses when based on more than one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that it can be measured by arc-emission sensing and data-fusion algorithms.
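
    The per-run quantities named above (arc power from voltage and current, acoustic peak rate) can be computed from the raw channels. This is an illustrative sketch with assumed signal conventions and an invented `arc_features` name, not the authors' processing chain:

```python
def arc_features(voltage, current, acoustic, fs, peak_thresh):
    """Mean arc power from sampled v*i, and acoustic peak rate in peaks/s.

    voltage/current/acoustic are equally sampled sequences; fs is the
    sampling rate in Hz; peaks are counted as upward crossings of the
    acoustic threshold.
    """
    power = sum(v * i for v, i in zip(voltage, current)) / len(voltage)
    peaks = sum(1 for a, b in zip(acoustic, acoustic[1:]) if a < peak_thresh <= b)
    rate = peaks * fs / len(acoustic)
    return power, rate
```

    Fusing features like these across sensors is then a classification problem, which is where the paper's data-fusion algorithms come in.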

  8. A fast optimization approach for treatment planning of volumetric modulated arc therapy.

    PubMed

    Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong

    2018-05-30

    Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for beam arrangement of VMAT planning. Initial equal-spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of VMAT plans is equal to or better than the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared with IMRT plans, especially in the head and neck case. The total number of segments and monitor units are reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 minutes using the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
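
    The progressive sampling strategy described (start with equal-spaced beams, double the angular resolution each round by inserting midpoints) can be sketched as follows. Angles in degrees and the `progressive_beams` name are assumptions for illustration:

```python
def progressive_beams(n_init, n_total, arc=360.0):
    """Beam angles built round by round: equal-spaced initial beams,
    then a midpoint inserted between each adjacent pair per round,
    doubling the sampling resolution until n_total beams exist."""
    angles = [arc * i / n_init for i in range(n_init)]
    while len(angles) < n_total:
        step = arc / len(angles)
        midpoints = [(a + step / 2) % arc for a in angles]
        angles = sorted(angles + midpoints)   # resolution doubles
    return angles[:n_total]
```

    In the paper's scheme each round's fluence-map optimization is warm-started from the coarser round's known fluence maps, which is what makes the refinement cheap.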

  9. A bird's eye view of "Understanding volcanoes in the Vanuatu arc"

    NASA Astrophysics Data System (ADS)

    Vergniolle, S.; Métrich, N.

    2016-08-01

    The Vanuatu intra-oceanic arc, located between 13 and 22°S in the southwest Pacific Ocean (Fig. 1), is one of the most seismically active regions, with 39 earthquakes of magnitude 7+ in the past 43 years (Baillard et al., 2015). Active deformation in both the Vanuatu subduction zone and the back-arc North-Fiji basin accommodates the variation of convergence rates, which are ca. 90-120 mm/yr along most of the arc (Taylor et al., 1995; Pelletier et al., 1998). The convergence rate slows to 25-43 mm/yr (Baillard et al., 2015) in the central segment, where the D'Entrecasteaux ridge - an Eocene-Oligocene island arc complex on the Australian subducting plate - collides and is subducted beneath the fore-arc (Taylor et al., 2005). Hence, the Vanuatu arc is segmented into three blocks which move independently; as the north block rotates counter-clockwise in association with rapid back-arc spreading (about 80 mm/year), the central block translates eastward and the south block rotates clockwise (Calmant et al., 2003; Bergeot et al., 2009). (See Fig. 1.)

  10. Sensoring Fusion Data from the Optic and Acoustic Emissions of Electric Arcs in the GMAW-S Process for Welding Quality Assessment

    PubMed Central

    Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca

    2012-01-01

    The present study shows the relationship between welding quality and optical-acoustic emissions from electric arcs, during welding runs, in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs interferences were induced on the welding path using paint, grease or gas faults. In each welding run arc voltage, welding current, infrared and acoustic emission values were acquired, and parameters such as arc power, acoustic peak rate and infrared radiation rate were computed. Data fusion algorithms were developed by assessing known welding quality parameters from arc emissions. These algorithms showed better responses when based on more than one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that it can be measured by arc-emission sensing and data-fusion algorithms. PMID:22969330

  11. Volcanoes Distribution in Linear Segmentation of Mariana Arc

    NASA Astrophysics Data System (ADS)

    Andikagumi, H.; Macpherson, C.; McCaffrey, K. J. W.

    2016-12-01

    A new method has been developed to better describe the distribution pattern of volcanoes within the Mariana Arc. A previous study assumed the distribution of volcanoes in the Mariana Arc is described by a small-circle distribution, which reflects the melting processes in a curved subduction zone. The small-circle fit used in that study, based on a dataset of 12 (mainly subaerial) volcanoes from the Smithsonian Institution Global Volcanism Program, was reassessed by us to have a root-mean-square misfit of 2.5 km. The same method applied to a more complete dataset from Baker et al. (2008), consisting of 37 subaerial and submarine volcanoes, resulted in an 8.4 km misfit. However, using the Hough Transform method on the larger dataset, lower misfits of great-circle segments were achieved (3.1 and 3.0 km) for two possible segment combinations. The results indicate that the distribution of volcanoes in the Mariana Arc is better described by a great-circle pattern, instead of a small circle. Variogram and cross-variogram analysis of volcano spacing and volume shows that there is spatial correlation between volcanoes between 420 and 500 km, which corresponds to the maximum segmentation lengths from the Hough Transform (320 km). Further analysis of volcano spacing by the coefficient of variation (Cv) shows a tendency toward non-random distribution, as the Cv values are closer to zero than to one. These distributions are inferred to be associated with the development of normal faults at the back arc, as their Cv values also tend towards zero. To analyse whether volcano spacing is random or not, Cv values were simulated using a Monte Carlo method with random input. Only the southernmost segment allowed us to reject the null hypothesis that volcanoes are randomly spaced, at the 95% confidence level with an estimated probability of 0.007. This result shows that such regularity in volcano spacing rarely arises by chance, so the lithospheric-scale controlling factor should be analysed with a different approach (not with a random number generator). The Sunda Arc, which has been reported to have en echelon segmentation and a larger number of volcanoes, will be studied further to understand the particular influence of the upper plate on volcano distribution.
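
    The coefficient-of-variation test mirrors the Monte Carlo approach described above: compare the observed spacing Cv against Cv values for uniformly random volcano positions. Function names and the one-sided comparison are assumptions for illustration, not the authors' code:

```python
import random
import statistics

def spacing_cv(positions):
    """Coefficient of variation of gaps between sorted positions along the arc.
    Cv = 0 for perfectly regular spacing; Cv ~ 1 for Poisson-random spacing."""
    s = sorted(positions)
    gaps = [b - a for a, b in zip(s, s[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

def regularity_p_value(positions, arc_length, trials=2000, seed=1):
    """Estimated probability that uniformly random positions are at least
    as regular (Cv at least as low) as the observed ones."""
    rng = random.Random(seed)
    observed = spacing_cv(positions)
    hits = sum(
        spacing_cv([rng.uniform(0, arc_length) for _ in positions]) <= observed
        for _ in range(trials)
    )
    return hits / trials
```

    A small estimated probability, as for the southernmost segment's 0.007, means the observed regularity is unlikely under the random-spacing null.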

  12. Arc Second Alignment of International X-Ray Observatory Mirror Segments in a Fixed Structure

    NASA Technical Reports Server (NTRS)

    Evans, Tyler C.; Chan, Kai-Wing

    2009-01-01

    The optics for the International X-Ray Observatory (IXO) require alignment and integration of about fourteen thousand thin mirror segments to achieve the mission goal of 3.0 square meters of effective area at 1.25 keV with an angular resolution of five arc seconds. These mirror segments are 0.4 mm thick and 200 to 400 mm in size, which makes it difficult not to impart distortion at the sub-arc-second level. This paper outlines the precise alignment, verification testing, and permanent bonding techniques developed at NASA's Goddard Space Flight Center (GSFC). These techniques are used to overcome the challenge of transferring thin mirror segments from a temporary mount to a fixed structure with arc second alignment and minimal figure distortion. Recent advances in technology development in addition to the automation of several processes have produced significant results. This paper will highlight the recent advances in alignment, testing, and permanent bonding techniques as well as the results they have produced.

  13. Arc-Second Alignment of International X-Ray Observatory Mirror Segments in a Fixed Structure

    NASA Technical Reports Server (NTRS)

    Evans, Tyler C.; Chan, Kai-Wing; Saha, Timo T.

    2010-01-01

    The optics for the International X-Ray Observatory (IXO) require alignment and integration of about fourteen thousand thin mirror segments to achieve the mission goal of 3.0 square meters of effective area at 1.25 keV with an angular resolution of five arc-seconds. These mirror segments are 0.4 mm thick and 200 to 400 mm in size, which makes it difficult not to impart distortion at the sub-arc-second level. This paper outlines the precise alignment, verification testing, and permanent bonding techniques developed at NASA's Goddard Space Flight Center (GSFC). These techniques are used to overcome the challenge of transferring thin mirror segments from a temporary mount to a fixed structure with arc-second alignment and minimal figure distortion. Recent advances in technology development in addition to the automation of several processes have produced significant results. This paper will highlight the recent advances in alignment, testing, and permanent bonding techniques as well as the results they have produced.

  14. Line plus arc source trajectories and their R-line coverage for long-object cone-beam imaging with a C-arm system

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Wunderlich, Adam; Dennerlein, Frank; Lauritsch, Günter; Noo, Frédéric

    2011-06-01

    Cone-beam imaging with C-arm systems has become a valuable tool in interventional radiology. Currently, a simple circular trajectory is used, but future applications should use more sophisticated source trajectories, not only to avoid cone-beam artifacts but also to allow extended volume imaging. One attractive strategy to achieve these two goals is to use a source trajectory that consists of two parallel circular arcs connected by a line segment, possibly with repetition. In this work, we address the question of R-line coverage for such a trajectory. More specifically, we examine to what extent R-lines for such a trajectory cover a central cylindrical region of interest (ROI). An R-line is a line segment connecting any two points on the source trajectory. Knowledge of R-line coverage is crucial because a general theory for theoretically exact and stable image reconstruction from axially truncated data is only known for the points in the scanned object that lie on R-lines. Our analysis starts by examining the R-line coverage for the elemental trajectories consisting of (i) two parallel circular arcs and (ii) a circular arc connected orthogonally to a line segment. Next, we utilize our understanding of the R-lines for the aforementioned elemental trajectories to determine the R-line coverage for the trajectory consisting of two parallel circular arcs connected by a tightly fit line segment. For this trajectory, we find that the R-line coverage is insufficient to completely cover any central ROI. Because extension of the line segment beyond the circular arcs helps to increase the R-line coverage, we subsequently propose a trajectory composed of two parallel circular arcs connected by an extended line. We show that the R-lines for this trajectory can fully cover a central ROI if the line extension is long enough. 
Our presentation includes a formula for the minimum line extension needed to achieve full R-line coverage of an ROI with a specified size, and also includes a preliminary study on the required detector size, showing that the R-lines added by the line extension are not constraining.

  15. Optimal Co-segmentation of Tumor in PET-CT Images with Context Information

    PubMed Central

    Song, Qi; Bai, Junjie; Han, Dongfeng; Bhatia, Sudershan; Sun, Wenqing; Rockey, William; Bayouth, John E.; Buatti, John M.

    2014-01-01

    PET-CT images have been widely used in clinical practice for radiotherapy treatment planning. Many existing segmentation approaches work only for a single imaging modality and suffer from the low spatial resolution of PET or the low contrast of CT. In this work we propose a novel method for the co-segmentation of the tumor in both PET and CT images, which makes use of advantages from each modality: the functionality information from PET and the anatomical structure information from CT. The approach formulates the segmentation problem as a minimization problem of a Markov Random Field (MRF) model, which encodes the information from both modalities. The optimization is solved using a graph-cut based method. Two subgraphs are constructed for the segmentation of the PET and the CT images, respectively. To achieve consistent results in the two modalities, an adaptive context cost is enforced by adding context arcs between the two subgraphs. An optimal solution can be obtained by solving a single maximum flow problem, which leads to simultaneous segmentation of the tumor volumes in both modalities. The proposed algorithm was validated in robust delineation of lung tumors on 23 PET-CT datasets and two head-and-neck cancer subjects. Both qualitative and quantitative results show significant improvement compared to the graph cut methods solely using PET or CT. PMID:23693127
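
    The single-maximum-flow formulation can be illustrated on a toy graph. The sketch below is a plain Edmonds-Karp max-flow, not the authors' solver; in the co-segmentation construction, terminal capacities would encode per-modality data costs and inter-subgraph "context arcs" would penalize PET/CT label disagreement, with the min-cut giving both segmentations at once.

```python
from collections import defaultdict, deque

def make_graph(edges):
    """Capacity map as dict-of-dicts; edges are (u, v, capacity)."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c
    return cap

def max_flow(cap, source, sink):
    """Edmonds-Karp max-flow; cap is modified in place with residuals."""
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total
        # bottleneck capacity along the path
        v, bottleneck = sink, float("inf")
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        # push flow and update residuals
        v = sink
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        total += bottleneck
```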

  16. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
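
    For the circular no-fly zones mentioned, the "readily programmed" geometric condition for a flight-path line segment is just a clamped point-to-segment distance test. The sketch below uses an invented `segment_hits_circle` name and is an assumption about the kind of check the abstract refers to, not the authors' code:

```python
def segment_hits_circle(p1, p2, center, radius):
    """True if the segment p1-p2 passes through the circular no-fly zone."""
    (x1, y1), (x2, y2), (cx, cy) = p1, p2, center
    dx, dy = x2 - x1, y2 - y1
    # parameter of the closest point on the infinite line, clamped to the segment
    t = ((cx - x1) * dx + (cy - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    nx, ny = x1 + t * dx, y1 + t * dy
    return (nx - cx) ** 2 + (ny - cy) ** 2 <= radius ** 2
```

    Checks like this, evaluated per candidate waypoint pair, become the nonlinear constraints handed to the sequential quadratic programming solver.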

  17. Growth and linkage of the quaternary Ubrique Normal Fault Zone, Western Gibraltar Arc: role on the along-strike relief segmentation

    NASA Astrophysics Data System (ADS)

    Jiménez-Bonilla, Alejandro; Balanya, Juan Carlos; Exposito, Inmaculada; Diaz-Azpiroz, Manuel; Barcos, Leticia

    2015-04-01

    Strain partitioning modes within migrating orogenic arcs may result in arc-parallel stretching that produces along-strike structural and topographic discontinuities. In the Western Gibraltar Arc, arc-parallel stretching has operated from the Lower Miocene up to recent times. In this study, we have reviewed the Colmenar Fault, located at the SW end of the Subbetic ranges, previously interpreted as a Middle Miocene low-angle normal fault. Our results allow us to identify younger normal fault segments, to analyse their kinematics, growth and segment linkage, and to discuss their role in the structural and relief drop at regional scale. The Colmenar Fault is folded by post-Serravallian NE-SW buckle folds. Both the SW-dipping fault surfaces and the SW-plunging fold axes contribute to the structural relief drop toward the SW. Nevertheless, at the NW tip of the Colmenar Fault, we have identified unfolded normal faults cutting Quaternary soils. They are grouped into a N110˚E striking brittle deformation band 15 km long and up to 3 km wide (hereafter Ubrique Normal Fault Zone; UNFZ). The UNFZ is divided into three sectors: (a) The western tip zone is formed by normal faults which usually dip to the SW and whose slip directions vary between N205˚E and N225˚E. These segments are linked to each other by left-lateral oblique faults interpreted as transfer faults. (b) The central part of the UNFZ is composed of a single N115˚E striking fault segment 2.4 km long. Slip directions are around N190˚E and the estimated throw is 1.25 km. The fault scarp is well-preserved, reaching up to 400 m in its central part and diminishing to 200 m at both segment terminations. This fault segment is linked to the western tip by an overlap zone characterized by tilted blocks limited by high-angle NNE-SSW and WNW-ESE striking faults interpreted as "box faults" [1]. (c) The eastern tip zone is formed by fault segments with oblique slip which also contribute to the downthrow of the SW block. This kinematic pattern seems to be related to other strike-slip fault systems developed to the E of the UNFZ. The structural revision together with updated kinematic data suggest that the Colmenar Fault is cut and downthrown by a younger normal fault zone, the UNFZ, which would have contributed to accommodating arc-parallel stretching until the Quaternary. This stretching provokes along-strike relief segmentation, with the UNFZ being the main fault zone causing the final drop of the Subbetic ranges towards the SW within the Western Gibraltar Arc. Our results show displacement variations in each fault segment of the UNFZ, diminishing toward their tips. This suggests fault segment linkage finally evolved to build the nearly continuous current fault zone. The development of current large through-going faults linked inside the UNFZ is similar to those simulated in numerical modelling of rift systems [2]. Acknowledgements: RNM-415 and CGL-2013-46368-P. [1] Peacock, D.C.P., Knipe, R.J., Sanderson, D.J., 2000. Glossary of normal faults. Journal of Structural Geology, 22, 291-305. [2] Cowie, P.A., Gupta, S., Dawers, N.H., 2000. Implications of fault array evolution for synrift depocentre development: insights from a numerical fault growth model. Basin Research, 12, 241-261.

  18. Validation of Case Finding Algorithms for Hepatocellular Cancer from Administrative Data and Electronic Health Records using Natural Language Processing

    PubMed Central

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2013-01-01

    Background: Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods: We identified a cohort of patients with ICD-9 codes for HCC during 2005–2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results: 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion: A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403
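
    The PPV, sensitivity, and specificity figures reported above follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts (the function name is invented for illustration):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Positive predictive value, sensitivity, specificity from counts."""
    ppv = tp / (tp + fp)            # confirmed cases among flagged cases
    sensitivity = tp / (tp + fn)    # flagged cases among true cases
    specificity = tn / (tn + fp)    # correctly unflagged among non-cases
    return ppv, sensitivity, specificity
```

    Note that PPV depends only on the flagged set (here, 773 confirmed among 1138 ICD-9-flagged patients), whereas sensitivity and specificity additionally require the unflagged counts from the chart review.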

  19. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    PubMed

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.
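    The validation metrics reported above follow directly from a standard 2x2 confusion matrix. A minimal sketch (the true-positive and false-positive counts below are taken from the abstract's 1138 ICD-9-positive / 773 confirmed figures; the false-negative and true-negative counts are illustrative assumptions, not the study's data):

```python
def validation_metrics(tp, fp, fn, tn):
    """Positive predictive value, sensitivity, and specificity
    from confusion-matrix counts."""
    ppv = tp / (tp + fp)          # of flagged cases, fraction truly positive
    sensitivity = tp / (tp + fn)  # of true cases, fraction flagged
    specificity = tn / (tn + fp)  # of non-cases, fraction correctly cleared
    return ppv, sensitivity, specificity

# 773 confirmed of 1138 flagged (from the abstract); fn and tn are made up:
ppv, sens, spec = validation_metrics(tp=773, fp=365, fn=40, tn=4800)
```

    With these counts, PPV evaluates to 773/1138, matching the abstract's reported 0.67 after rounding.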

  20. A continuous arc delivery optimization algorithm for CyberKnife m6.

    PubMed

    Kearney, Vasant; Descovich, Martina; Sudhyadhom, Atchar; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D

    2018-06-01

    This study aims to reduce the delivery time of CyberKnife m6 treatments by allowing for noncoplanar continuous arc delivery. To achieve this, a novel noncoplanar continuous arc delivery optimization algorithm was developed for the CyberKnife m6 treatment system (CyberArc-m6). CyberArc-m6 uses a five-step overarching strategy, in which an initial set of beam geometries is determined, the robotic delivery path is calculated, direct aperture optimization is conducted, intermediate MLC configurations are extracted, and the final beam weights are computed for the continuous arc radiation source model. This algorithm was applied to five prostate and three brain patients, previously planned using a conventional step-and-shoot CyberKnife m6 delivery technique. The dosimetric quality of the CyberArc-m6 plans was assessed using locally confined mutual information (LCMI), conformity index (CI), heterogeneity index (HI), and a variety of common clinical dosimetric objectives. Using conservative optimization tuning parameters, CyberArc-m6 plans achieved an average CI difference of 0.036 ± 0.025, an average HI difference of 0.046 ± 0.038, and an average LCMI of 0.920 ± 0.030 compared with the original CyberKnife m6 plans. Including a 5 s per minute image alignment time and a 5-min setup time, conservative CyberArc-m6 plans achieved an average treatment delivery speed-up of 1.545x ± 0.305x compared with step-and-shoot plans. The CyberArc-m6 algorithm was able to achieve dosimetrically similar plans compared to their step-and-shoot CyberKnife m6 counterparts, while simultaneously reducing treatment delivery times. © 2018 American Association of Physicists in Medicine.

  1. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied is able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
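    A widely used similarity metric for comparing an automatic segmentation against a manual one is the Dice coefficient; a minimal sketch over binary masks represented as voxel-index sets (the choice of Dice here is an assumption for illustration — the abstract does not name the specific metrics used):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as voxel-index
    collections: 2|A∩B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = none."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty segmentations agree trivially
    return 2 * len(a & b) / (len(a) + len(b))

auto = [(1, 1), (1, 2), (2, 1), (2, 2)]    # automatic segmentation (voxel coords)
manual = [(1, 2), (2, 1), (2, 2), (3, 2)]  # manual delineation
score = dice_coefficient(auto, manual)     # 2*3 / (4+4) = 0.75
```

    A parameter search of the kind the abstract describes would then maximize such a score over each algorithm's parameter grid.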

  2. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).

  3. Systematic variation in the depths of slabs beneath arc volcanoes

    USGS Publications Warehouse

    England, P.; Engdahl, R.; Thatcher, W.

    2004-01-01

    The depths to the tops of the zones of intermediate-depth seismicity beneath arc volcanoes are determined using the hypocentral locations of Engdahl et al. These depths are constant, to within a few kilometres, within individual arc segments, but differ by tens of kilometres from one arc segment to another. The range in depths is from 65 km to 130 km, inconsistent with the common belief that the volcanoes directly overlie the places where the slabs reach a critical depth that is roughly constant for all arcs. The depth to the top of the intermediate-depth seismicity beneath volcanoes correlates neither with age of the descending ocean floor nor with the thermal parameter of the slab. This depth does, however, exhibit an inverse correlation with the descent speed of the subducting plate, which is the controlling factor both for the thermal structure of the wedge of mantle above the slab and for the temperature at the top of the slab. We interpret this result as indicating that the location of arc volcanoes is controlled by a process that depends critically upon the temperature at the top of the slab, or in the wedge of mantle, immediately below the volcanic arc.

  4. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our objective was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  5. Geochemical Relationships between Volcanic and Plutonic Upper to Mid Crustal Exposures of the Rosario Segment, Alisitos Arc (Baja California, Mexico): An Outstanding Field Analog to the Izu-Bonin-Mariana Arc

    NASA Astrophysics Data System (ADS)

    Morris, R.; DeBari, S. M.; Busby, C. J.; Medynski, S.

    2015-12-01

    Exposed paleo-arcs, such as the Rosario segment of the Cretaceous Alisitos Arc in Baja California, Mexico, provide an opportunity to explore the evolution of arc crust through time. Remarkable 3-D exposures of the Rosario segment record crustal generation processes in the volcanic rocks and underlying plutonic rocks. In this study, we explore the physical and geochemical connection between the plutonic and volcanic sections of the extensional Alisitos Arc, and elucidate differentiation processes responsible for generating them. These results provide an outstanding analog for extensional active arc systems, such as the Izu-Bonin-Mariana (IBM) Arc. Upper crustal volcanic rocks have a coherent stratigraphy that is 3-5 km thick and ranges in composition from basalt to dacite. The most felsic compositions (70.9% SiO2) are from a welded ignimbrite unit. The most mafic compositions (51.5% SiO2, 3.2% MgO) are found in basaltic sill-like units. Phenocrysts in the volcanic units include plagioclase +/- amphibole and clinopyroxene. The transition to deeper plutonic rocks is clearly an intrusive boundary, where plutonic units intrude the volcanic units. Plutonic rocks are dominantly a quartz diorite main phase with a more mafic, gabbroic margin. A transitional zone is observed along the contact between the plutonic and volcanic rocks, where volcanics have coarsely recrystallized textures. Mineral assemblages in the plutonic units include plagioclase +/- quartz, biotite, amphibole, clinopyroxene and orthopyroxene. Most, but not all, samples are low K. REE patterns are relatively flat with limited enrichment. Normalization diagrams show LILE enrichment and HFSE depletion, where trends are similar to average IBM values. We interpret plutonic and volcanic units to have similar geochemical relationships, where liquid lines of descent show the evolution of least to most evolved magma types. We provide a model for the formation and magmatic evolution of the Alisitos Arc.

  6. A 'Propagating' Active Across-Arc Normal Fault Shows Rupture Process of the Basement: the Case of the Southwestern Ryukyu Arc

    NASA Astrophysics Data System (ADS)

    Matsumoto, T.; Shinjo, R.; Nakamura, M.; Kubo, A.; Doi, A.; Tamanaha, S.

    2011-12-01

    The Ryukyu Arc lies on the southwestern extension of the Japanese island arc, east of Taiwan along the margin of the Asian continent off China. It forms an arcuate trench-arc-back-arc system. NW-ward subduction of the Philippine Sea Plate (PSP) at a rate of 6-8 cm/yr relative to the Eurasian Plate (EP) causes frequent earthquakes. The PSP subducts almost normally in the north-central area and more obliquely in the southwestern area. Behind the arc-trench system, the Okinawa Trough (OT) was formed by back-arc rifting, and active hydrothermal vent systems have been discovered there. Several across-arc submarine faults are located in the central and southern Ryukyu Arc. The East Ishigaki Fault (EIF) is one of the across-arc normal faults in the southwestern Ryukyu Arc, extending 44 km from SE to NW. The fault was surveyed with the SEABAT 8160 multibeam echo sounder and the ROV Hyper-Dolphin in 2005 and 2008. The results show that the main fault consists of five fault segments; a branch from the main fault was also observed. The southernmost segment is the most mature (oldest but still active) and the northernmost is the most nascent. This suggests north-westward propagation of the fault rupture, corresponding to the rifting of the southwestern OT and the southward retreat of the arc-trench system. Considering that the fault is segmented and in part branched, propagation might take place episodically rather than continuously from SE to NW. The ROV survey also revealed the rupture process of the limestone basement along this fault from the nascent stage to the mature stage. Most of the rock samples collected from the basement outcrop were limestone blocks (or calcareous sedimentary rocks). Limestone basement was observed to the west on the hanging wall, far from the main fault scarp. Fine-grained sand with ripple marks was then observed towards the main scarp.
Limestone basement was observed on the main scarp and on the footwall. These observations suggest that both sides are composed of the same material, that the whole study area is characterised by Ryukyu limestone exposure, and that the basement was split by the across-arc normal fault. Coarse-grained sand and gravels/rubble were observed towards and on the trough of the fault. On the main scarp an outcrop of limestone basement was exposed, and in places it was broken into rubble. These facts suggest that crushing of the basement due to rupturing takes place repeatedly on the scarp and in the trough. The observed fine-grained sand on the hanging wall might be the final product of this crushing of the limestone basement.

  7. Seismic evidence for arc segmentation, active magmatic intrusions and syn-rift fault system in the northern Ryukyu volcanic arc

    NASA Astrophysics Data System (ADS)

    Arai, Ryuta; Kodaira, Shuichi; Takahashi, Tsutomu; Miura, Seiichi; Kaneda, Yoshiyuki

    2018-04-01

    Tectonic and volcanic structures of the northern Ryukyu arc are investigated on the basis of multichannel seismic (MCS) reflection data. The study area forms an active volcanic front in parallel to the non-volcanic island chain in the eastern margin of the Eurasian plate and has been undergoing regional extension on its back-arc side. We carried out a MCS reflection experiment along two across-arc lines, and one of the profiles was laid out across the Tokara Channel, a linear bathymetric depression which demarcates the northern and central Ryukyu arcs. The reflection image reveals that beneath this topographic valley there exists a 3-km-deep sedimentary basin atop the arc crust, suggesting that the arc segment boundary was formed by rapid and focused subsidence of the arc crust driven by the arc-parallel extension. Around the volcanic front, magmatic conduits represented by tubular transparent bodies in the reflection images are well developed within the shallow sediments, and some of them are accompanied by small fragments of dipping seismic reflectors indicating intruded sills at their bottoms. The spatial distribution of the conduits may suggest that the arc volcanism has multiple active outlets on the seafloor which bifurcate at crustal depths and/or that the location of the volcanic front has been migrating trenchward over time. Farther from the volcanic front toward the back-arc (> 30 km away), these volcanic features vanish, and wide rift basins become predominant in which rapid transitions from normal-fault-dominant regions to strike-slip-fault-dominant regions occur. This spatial variation in faulting patterns indicates complex stress regimes associated with arc/back-arc rifting in the northern Okinawa Trough.

  8. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    PubMed Central

    Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231

  9. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, the bacterial foraging algorithm, and particle swarm optimization. Several image benchmarks are then tested to show the differences among the four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper noise and Gaussian noise. Through these comparisons, this paper gives qualitative analyses of the performance variance of the four algorithms. The conclusions would provide useful guidance for practical image segmentation.
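    As a sketch of how one of these methods applies to thresholding, the following uses a basic particle swarm to search for the grey level maximizing Otsu's between-class variance over a histogram. The objective function, swarm size, and coefficients are illustrative assumptions, not taken from the paper:

```python
import random

def between_class_variance(hist, t):
    """Otsu's criterion: variance between the two classes split at level t."""
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * h for i, h in enumerate(hist[:t])) / w0
    mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def pso_threshold(hist, n_particles=20, iters=50, seed=0):
    """Particle swarm search for the threshold maximizing Otsu's criterion."""
    rng = random.Random(seed)
    lo, hi = 1, len(hist) - 1
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)                              # personal best positions
    pbest_f = [between_class_variance(hist, int(p)) for p in pos]
    best_i = pbest_f.index(max(pbest_f))
    gbest, gbest_f = pbest[best_i], pbest_f[best_i]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]                   # inertia
                      + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))     # social pull
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            f = between_class_variance(hist, int(pos[i]))
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i], f
    return int(gbest)

# Toy bimodal histogram with humps around grey levels 50 and 200
hist = [0] * 256
for i in range(40, 60):
    hist[i] = 100
for i in range(190, 210):
    hist[i] = 100
threshold = pso_threshold(hist)  # lands between the two humps
```

    For a single threshold an exhaustive scan over 256 levels is of course cheap; the swarm formulation pays off for the multilevel-thresholding problems these papers typically target.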

  10. View of an intact oceanic arc, from surficial to mesozonal levels: Cretaceous Alisitos arc, Baja California

    NASA Astrophysics Data System (ADS)

    Busby, Cathy; Fackler Adams, Benjamin; Mattinson, James; Deoreo, Stephen

    2006-01-01

    The Alisitos arc is an approximately 300 × 30 km oceanic arc terrane that lies in the western wall of the Peninsular Ranges batholith south of the modern Agua Blanca fault zone in Baja California. We have completed detailed mapping and dating of a 50 × 30 km segment of this terrane in the El Rosario to Mission San Fernando areas, as well as reconnaissance mapping and dating in the next 50 × 30 km segment to the north, in the San Quintin area. We recognize two evolutionary phases in this part of the arc terrane: (I) extensional oceanic arc, characterized by intermediate to silicic explosive and effusive volcanism, culminating in caldera-forming silicic ignimbrite eruptions at the onset of arc rifting, and (II) rifted oceanic arc, characterized by mafic effusive and hydroclastic rocks and abundant dike swarms. Two types of units are widespread enough to permit tentative stratigraphic correlation across much of this 100-km-long segment of the arc: a welded dacite ignimbrite (tuff of Aguajito), and a deepwater debris-avalanche deposit. New U-Pb zircon data from the volcanic and plutonic rocks of both phases indicate that the entire 4000-m-thick section accumulated in about 1.5 Myr, at 111-110 Ma. Southwestern North American sources for two zircon grains with Proterozoic 206Pb/207Pb ages support the interpretation that the oceanic arc fringed North America rather than representing an exotic terrane. The excellent preservation and exposure of the Alisitos arc terrane make it ideal for three-dimensional study of the structural, stratigraphic, and intrusive history of an oceanic arc terrane. The segment mapped and dated in detail has a central major subaerial edifice, flanked by a down-faulted deepwater marine basin to the north, and a volcano-bounded shallow-water marine basin to the south. 
The rugged down-faulted flank of the edifice produced mass wasting, plumbed large-volume eruptions to the surface, and caused pyroclastic flows to disintegrate into turbulent suspensions that mixed completely with water. In contrast, gentler slopes on the opposite flank allowed pyroclastic flows to enter the sea with integrity, and supported extensive buildups of bioherms. Caldera collapse on the major subaerial edifice ponded the tuff of Aguajito to a thickness of at least 3 km. The outflow ignimbrite forms a marker in nonmarine to shallow marine sections, and in deepwater sections it occurs as blocks up to 150 m long in a debris-avalanche deposit. These welded ignimbrite blocks were deposited hot enough to deform plastically and form peperite with the debris-avalanche matrix. The debris avalanche was likely triggered by injection of feeder dikes along the basin-bounding fault zone during the caldera-forming eruption. Intra-arc extension controlled very high subsidence rates, followed shortly thereafter by accretion through back-arc basin closure by 105 Ma. Accretion of the oceanic arc may have been accomplished by detachment of the upper crust along a still hot, thick middle crustal tonalitic layer, during subduction of mafic-ultramafic substrate.

  11. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    PubMed

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). 
The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
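    The conversion of summed perfusion scores to percent myocardium abnormal described above is conventionally done by normalizing against the worst possible total score, with each segment scored 0-4; this normalization is the standard convention rather than a detail stated in the abstract, so treat the sketch as illustrative:

```python
def percent_myocardium_abnormal(summed_score, n_segments, max_per_segment=4):
    """Convert a summed perfusion score to % myocardium abnormal by
    normalizing against the worst possible total (4 points x n segments)."""
    return 100.0 * summed_score / (n_segments * max_per_segment)

# In the 20-segment model, the 5% cutoff corresponds to a summed score of 4,
# i.e. a summed stress score greater than 3, as in the abstract:
p20 = percent_myocardium_abnormal(4, n_segments=20)  # 5.0
p17 = percent_myocardium_abnormal(4, n_segments=17)  # ~5.9
```

    Expressing both models in percent myocardium abnormal is what allows the 20- and derived 17-segment scores to be compared on a common scale.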

  12. The Effect of Arc Proximity on Hydrothermal Activity Along Spreading Centers: New Evidence From the Mariana Back Arc (12.7°N-18.3°N)

    NASA Astrophysics Data System (ADS)

    Baker, Edward T.; Walker, Sharon L.; Resing, Joseph A.; Chadwick, William W.; Merle, Susan G.; Anderson, Melissa O.; Butterfield, David A.; Buck, Nathan J.; Michael, Susanna

    2017-11-01

    Back-arc spreading centers (BASCs) form a distinct class of ocean spreading ridges distinguished by steep along-axis gradients in spreading rate and by additional magma supplied through subduction. These characteristics can affect the population and distribution of hydrothermal activity on BASCs compared to mid-ocean ridges (MORs). To investigate this hypothesis, we comprehensively explored 600 km of the southern half of the Mariana BASC. We used water column mapping and seafloor imaging to identify 19 active vent sites, an increase of 13 over the current listing in the InterRidge Database (IRDB), on the bathymetric highs of 7 of the 11 segments. We identified both high and low (i.e., characterized by a weak or negligible particle plume) temperature discharge occurring on segment types spanning dominantly magmatic to dominantly tectonic. Active sites are concentrated on the two southernmost segments, where distance to the adjacent arc is shortest (<40 km), spreading rate is highest (>48 mm/yr), and tectonic extension is pervasive. Re-examination of hydrothermal data from other BASCs supports the generalization that hydrothermal site density increases on segments <90 km from an adjacent arc. Although exploration quality varies greatly among BASCs, present data suggest that, for a given spreading rate, the mean spatial density of hydrothermal activity varies little between MORs and BASCs. The present global database, however, may be misleading. On both BASCs and MORs, the spatial density of hydrothermal sites mapped by high-quality water-column surveys is 2-7 times greater than predicted by the existing IRDB trend of site density versus spreading rate.

  13. MO-FG-CAMPUS-TeP2-05: Optimizing Stereotactic Radiosurgery Treatment of Multiple Brain Metastasis Lesions with Individualized Rotational Arc Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, P; Xing, L; Ma, L

    Purpose: Radiosurgery of multiple (n>4) brain metastasis lesions requires 3–4 noncoplanar VMAT arcs with excessively high monitor units and long delivery time. We investigated whether an improved optimization technique would decrease the needed number of arcs and increase delivery efficiency, while improving or maintaining plan quality. Methods: The proposed 4pi arc space optimization algorithm consists of two steps: automatic couch angle selection, followed by aperture generation for each arc with an optimized control-point distribution. We use a greedy algorithm to select the couch angles. Starting from a single coplanar arc plan, we search through the candidate noncoplanar arcs to pick the single noncoplanar arc that will bring the best plan quality when added to the existing treatment plan. Each time, only one additional noncoplanar arc is considered, making the calculation time tractable. This process repeats until the desired number of arcs is reached. The technique was first evaluated in a coplanar arc delivery scheme with testing cases and then applied to noncoplanar treatment of a case with 12 brain metastasis lesions. Results: Clinically acceptable plans are created within minutes. For the coplanar testing cases, the algorithm yields single-arc plans with better dose distributions than those of two-arc VMAT, simultaneously with a 12–17% reduction in the delivery time and a 14–21% reduction in MUs. For the treatment of 12 brain metastases, while the Paddick conformity indexes of the two plans were comparable, the SCG-optimized plan with 2 arcs (1 noncoplanar and 1 coplanar) significantly improved on the conventional VMAT plan with 3 arcs (2 noncoplanar and 1 coplanar). Specifically, V16, V10, and V5 of the brain were reduced by 11%, 11%, and 12%, respectively. The beam delivery time was shortened by approximately 30%.
    Conclusion: The proposed 4pi arc space optimization technique promises to significantly reduce brain toxicity while greatly improving treatment efficiency.
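    The greedy couch-angle selection described in the abstract can be sketched generically: at each step, score the current plan plus each candidate arc and keep whichever single addition helps most. The scoring function below is a toy coverage stand-in, not the paper's dose-based plan-quality objective, and all names are illustrative:

```python
def greedy_arc_selection(candidates, plan_score, n_arcs):
    """Greedy forward selection: repeatedly add the one candidate arc
    that most improves the plan-quality score (higher is better)."""
    selected = []
    for _ in range(n_arcs):
        best_arc, best_score = None, float("-inf")
        for arc in candidates:
            if arc in selected:
                continue
            score = plan_score(selected + [arc])  # try adding this arc
            if score > best_score:
                best_arc, best_score = arc, score
        if best_arc is None:  # no candidates left
            break
        selected.append(best_arc)
    return selected

# Toy stand-in for plan quality: number of distinct target regions covered
coverage = {"arc_A": {1, 2, 3}, "arc_B": {3, 4}, "arc_C": {5}}
score = lambda arcs: len(set().union(*(coverage[a] for a in arcs)))
chosen = greedy_arc_selection(list(coverage), score, n_arcs=2)
```

    Because only one arc is trialled per step, the number of plan evaluations grows linearly rather than combinatorially in the candidate count, which is the tractability argument the abstract makes.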

  14. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346

  15. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) Payload Fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, a nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  16. Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents a study of a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time optimal control problem possesses a singular control structure when the lift coefficient is taken as a control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means for analyzing a time optimal trajectory for aircraft performance.
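    The shooting method mentioned above can be illustrated on a toy boundary-value problem (the harmonic oscillator, not the aircraft dynamics): guess the unknown initial slope, integrate forward, and adjust the guess until the endpoint condition is met.

```python
import math

def shoot(guess, step=1e-3):
    """Integrate y'' = -y from y(0) = 0 with initial slope `guess`
    (explicit Euler) and return y at t = pi/2."""
    t, y, v = 0.0, 0.0, guess
    while t < math.pi / 2:
        y, v = y + step * v, v - step * y
        t += step
    return y

def shooting_method(target, lo, hi, tol=1e-6):
    """Bisect on the initial slope until the endpoint hits `target`
    (valid here because the endpoint grows monotonically with the slope)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# y'' = -y, y(0) = 0, y(pi/2) = 1 has exact solution y = sin(t), slope 1
print(round(shooting_method(1.0, 0.0, 2.0), 2))  # ~1.0
```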

  17. 3D segmentations of neuronal nuclei from confocal microscope image stacks

    PubMed Central

    LaTorre, Antonio; Alonso-Nanclares, Lidia; Muelas, Santiago; Peña, José-María; DeFelipe, Javier

    2013-01-01

    In this paper, we present an algorithm to create 3D segmentations of neuronal cells from stacks of previously segmented 2D images. The idea behind this proposal is to provide a general method to reconstruct 3D structures from 2D stacks, regardless of how these 2D stacks have been obtained. The algorithm not only reuses the information obtained in the 2D segmentation, but also attempts to correct some typical mistakes made by the 2D segmentation algorithms (for example, under-segmentation of tightly coupled clusters of cells). We have tested our algorithm in a real scenario: the segmentation of the neuronal nuclei in different layers of the rat cerebral cortex. Several representative images from different layers of the cerebral cortex have been considered and several 2D segmentation algorithms have been compared. Furthermore, the algorithm has also been compared with the traditional 3D watershed algorithm, and the results obtained here show better performance in terms of correctly identified neuronal nuclei. PMID:24409123
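    A minimal sketch of the reconstruction idea, assuming segmented slices are given as label-to-pixel-set mappings (a simplification of the paper's data): 2D segments in adjacent slices are merged into one 3D object whenever their footprints overlap.

```python
def link_slices(stack):
    """Link 2D segment labels across consecutive slices into 3D objects.

    stack -- list of slices; each slice maps a 2D label to its set of
             pixel coordinates. Segments in adjacent slices are joined
             when their pixel sets overlap.
    """
    parent = {}                      # union-find over (slice, label) nodes

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for z in range(len(stack) - 1):
        for la, pa in stack[z].items():
            for lb, pb in stack[z + 1].items():
                if pa & pb:                  # overlapping footprints
                    union((z, la), (z + 1, lb))

    groups = {}
    for z, slc in enumerate(stack):
        for label in slc:
            groups.setdefault(find((z, label)), []).append((z, label))
    return sorted(map(sorted, groups.values()))

stack = [
    {1: {(0, 0), (0, 1)}},                  # slice 0
    {1: {(0, 1), (1, 1)}, 2: {(5, 5)}},     # slice 1: label 1 overlaps above
]
print(link_slices(stack))  # [[(0, 1), (1, 1)], [(1, 2)]]
```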

  18. 3D segmentations of neuronal nuclei from confocal microscope image stacks.

    PubMed

    Latorre, Antonio; Alonso-Nanclares, Lidia; Muelas, Santiago; Peña, José-María; Defelipe, Javier

    2013-01-01

    In this paper, we present an algorithm to create 3D segmentations of neuronal cells from stacks of previously segmented 2D images. The idea behind this proposal is to provide a general method to reconstruct 3D structures from 2D stacks, regardless of how these 2D stacks have been obtained. The algorithm not only reuses the information obtained in the 2D segmentation, but also attempts to correct some typical mistakes made by the 2D segmentation algorithms (for example, under-segmentation of tightly coupled clusters of cells). We have tested our algorithm in a real scenario: the segmentation of the neuronal nuclei in different layers of the rat cerebral cortex. Several representative images from different layers of the cerebral cortex have been considered and several 2D segmentation algorithms have been compared. Furthermore, the algorithm has also been compared with the traditional 3D watershed algorithm, and the results obtained here show better performance in terms of correctly identified neuronal nuclei.

  19. Researches on Position Detection for Vacuum Switch Electrode

    NASA Astrophysics Data System (ADS)

    Dong, Huajun; Guo, Yingjie; Li, Jie; Kong, Yihan

    2018-03-01

    The form and transformation of the vacuum arc strongly influence vacuum switch performance, and the dynamic separation of the electrodes is the chief factor affecting how the arc's form evolves. Detecting the electrode position in arc images, so that the separation can be calculated, is therefore of great significance. However, the gray-level distribution of vacuum arc images is uneven: the burning arc is bright while the electrodes are dark, and the arc's form changes sharply, which makes precise electrode position detection difficult. In this paper, an algorithm for detecting the electrode position from vacuum arc images is proposed. Digital image processing is applied to the vacuum switch arc images: the upper and lower electrode edges are detected separately, and a straight line is fitted to each detected edge; the fitting result gives the electrode position, so that accurate position detection of the electrode is realized. The experimental results show that the algorithm successfully detects the upper and lower edges of the arc images and obtains the electrode positions from the fits.
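    The detection pipeline described above (per-column edge search followed by a least-squares line fit) can be sketched as follows; the threshold rule and nested-list image format are illustrative assumptions, not the paper's exact method.

```python
def fit_edge(image, threshold):
    """Locate an electrode edge in a grayscale arc image (nested lists).

    For each column, find the first row whose gray level exceeds
    `threshold` (electrode dark, arc bright), then fit a straight line
    row = a*col + b to those edge points by least squares.
    """
    points = []
    for col in range(len(image[0])):
        for row in range(len(image)):
            if image[row][col] > threshold:
                points.append((col, row))
                break
    n = len(points)
    sx = sum(c for c, _ in points)
    sy = sum(r for _, r in points)
    sxx = sum(c * c for c, _ in points)
    sxy = sum(c * r for c, r in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# synthetic image: the bright region starts at row 2 in every column
img = [[0, 0, 0], [0, 0, 0], [9, 9, 9], [9, 9, 9]]
print(fit_edge(img, 5))  # horizontal edge: slope 0.0, intercept 2.0
```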

  20. An interactive modeling program for the generation of planar polygons for boundary type solids representations of wire frame models

    NASA Technical Reports Server (NTRS)

    Ozsoy, T.; Ochs, J. B.

    1984-01-01

    The development of a general link between three-dimensional wire frame models and rigid solid models is discussed. An interactive computer graphics program was developed to serve as a front end to an algorithm (COSMIC Program No. ARC-11446) which offers a general solution to the hidden line problem, where the input data are either line segments or n-sided planar polygons of the most general type with internal boundaries. The program provides a general interface to CAD/CAM data bases and is implemented for models created on the Unigraphics VAX 11/780-based CAD/CAM systems, with the display software written for DEC's VS11 color graphics devices.

  1. Geologic framework of the Aleutian arc, Alaska

    USGS Publications Warehouse

    Vallier, Tracy L.; Scholl, David W.; Fisher, Michael A.; Bruns, Terry R.; Wilson, Frederic H.; von Huene, Roland E.; Stevenson, Andrew J.

    1994-01-01

    The Aleutian arc is the arcuate arrangement of mountain ranges and flanking submerged margins that forms the northern rim of the Pacific Basin from the Kamchatka Peninsula (Russia) eastward more than 3,000 km to Cook Inlet (Fig. 1). It consists of two very different segments that meet near Unimak Pass: the Aleutian Ridge segment to the west and the Alaska Peninsula-Kodiak Island segment to the east. The Aleutian Ridge segment is a massive, mostly submerged cordillera that includes both the islands and the submerged pedestal from which they protrude. The Alaska Peninsula-Kodiak Island segment is composed of the Alaska Peninsula, its adjacent islands, and their continental and insular margins. The Bering Sea margin north of the Alaska Peninsula consists mostly of a wide continental shelf, some of which is underlain by rocks correlative with those on the Alaska Peninsula. There is no pre-Eocene record in rocks of the Aleutian Ridge segment, whereas rare fragments of Paleozoic rocks and extensive outcrops of Mesozoic rocks occur on the Alaska Peninsula. Since the late Eocene, and possibly since the early Eocene, the two segments have evolved somewhat similarly. Major plutonic and volcanic episodes, however, are not synchronous. Furthermore, uplift of the Alaska Peninsula-Kodiak Island segment in late Cenozoic time was more extensive than uplift of the Aleutian Ridge segment. It is probable that tectonic regimes along the Aleutian arc varied during the Tertiary in response to such factors as the directions and rates of convergence, the bathymetry and age of the subducting Pacific Plate, and the volume of sediment in the Aleutian Trench. The Pacific and North American lithospheric plates converge along the inner wall of the Aleutian Trench at about 85 to 90 mm/yr.
Convergence is nearly at right angles along the Alaska Peninsula, but because of the arcuate shape of the Aleutian Ridge relative to the location of the plates' poles of rotation, the angle of convergence lessens to the west (Minster and Jordan, 1978). Along the central Aleutian Ridge, underthrusting is about 30° from normal to the volcanic axis. Motion between the plates is approximately parallel along the western Aleutian Ridge. In this paper we briefly describe and interpret the Cenozoic evolution of the Aleutian arc by focusing on the onshore and offshore geologic frameworks in four of its sectors, two sectors each from the Aleutian Ridge and Alaska Peninsula-Kodiak Island segments (Fig. 1). We compare the geologic evolution of the segments and comment on the implications of some new, previously unpublished data.

  2. A Hybrid Ant Colony Optimization Algorithm for the Extended Capacitated Arc Routing Problem.

    PubMed

    Li-Ning Xing; Rohlfshagen, P; Ying-Wu Chen; Xin Yao

    2011-08-01

    The capacitated arc routing problem (CARP) is representative of numerous practical applications, and in order to widen its scope, we consider an extended version of this problem that entails both total service time and fixed investment costs. We subsequently propose a hybrid ant colony optimization (ACO) algorithm (HACOA) to solve instances of the extended CARP. This approach is characterized by the exploitation of heuristic information, adaptive parameters, and local optimization techniques: Two kinds of heuristic information, arc cluster information and arc priority information, are obtained continuously from the solutions sampled to guide the subsequent optimization process. The adaptive parameters ease the burden of choosing initial values and facilitate improved and more robust results. Finally, local optimization, based on the two-opt heuristic, is employed to improve the overall performance of the proposed algorithm. The resulting HACOA is tested on four sets of benchmark problems containing a total of 87 instances with up to 140 nodes and 380 arcs. In order to evaluate the effectiveness of the proposed method, some existing capacitated arc routing heuristics are extended to cope with the extended version of this problem; the experimental results indicate that the proposed ACO method outperforms these heuristics.
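    The two-opt local optimization cited above can be illustrated on a plain node tour (the CARP variant operates on arcs with demands, so this is a simplification):

```python
def two_opt(route, dist):
    """One full 2-opt improvement pass, repeated until no move helps.

    route -- closed tour as a list of node indices (first == last)
    dist  -- symmetric distance matrix
    Reverses any sub-tour whose reversal shortens the route.
    """
    improved = True
    route = route[:]
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                # current edges (i-1, i) and (j, j+1) vs. the reconnection
                before = dist[route[i - 1]][route[i]] + dist[route[j]][route[j + 1]]
                after = dist[route[i - 1]][route[j]] + dist[route[i]][route[j + 1]]
                if after < before:
                    route[i:j + 1] = reversed(route[i:j + 1])
                    improved = True
    return route

# four nodes on a unit square; the crossing tour 0-2-1-3 untangles
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
     for ax, ay in pts]
print(two_opt([0, 2, 1, 3, 0], d))  # [0, 1, 2, 3, 0]
```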

  3. Variable dose rate single-arc IMAT delivered with a constant dose rate and variable angular spacing

    NASA Astrophysics Data System (ADS)

    Tang, Grace; Earl, Matthew A.; Yu, Cedric X.

    2009-11-01

    Single-arc intensity-modulated arc therapy (IMAT) has gained worldwide interest in both research and clinical implementation due to its superior plan quality and delivery efficiency. Single-arc IMAT techniques such as the Varian RapidArc™ deliver conformal dose distributions to the target in one single gantry rotation, resulting in a delivery time on the order of 2 min. The segments in these techniques are evenly distributed within an arc and are allowed to have different monitor unit (MU) weightings. Therefore, a variable dose-rate (VDR) is required for delivery. Because the VDR requirement complicates the control hardware and software of the linear accelerators (linacs) and prevents most existing linacs from delivering IMAT, we propose an alternative planning approach for IMAT using constant dose-rate (CDR) delivery with variable angular spacing. We prove the equivalence by converting VDR-optimized RapidArc plans to CDR plans, where the evenly spaced beams in the VDR plan are redistributed to uneven spacing such that the segments with larger MU weighting occupy a greater angular interval. To minimize perturbation of the optimized dose distribution, the angular deviation of the segments was restricted to ≤ ±5°. This restriction requires the treatment arc to be broken into multiple sectors such that the local MU fluctuation within each sector is reduced, thereby lowering the angular deviation of the segments during redistribution. The converted CDR plans were delivered with a single gantry sweep as in the VDR plans, but each sector was delivered with a different value of CDR. For four patient cases, including two head-and-neck, one brain and one prostate, all CDR plans developed with the variable spacing scheme produced similar dose distributions to the original VDR plans. For plans with complex angular MU distributions, the number of sectors increased up to four in the CDR plans in order to maintain the original plan quality.
Since each sector was delivered with a different dose rate, extra mode-up time (xMOT) was needed during the transitions between successive sectors. On average, the delivery times of the CDR plans were less than 1 min longer than those of the VDR plans, with an average of about 0.33 min of xMOT per sector transition. The results show that VDR may not be necessary for single-arc IMAT. Using variable angular spacing, VDR RapidArc plans can be implemented in clinics that are not equipped with the new VDR-enabled machines without compromising plan quality or treatment efficiency. With a prospective optimization approach using variable angular spacing, CDR delivery times can be further minimized while maintaining the high delivery efficiency of single-arc IMAT treatment.
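    The core redistribution step, evenly spaced control points respaced in proportion to segment MU weight with the angular shift capped, can be sketched as follows (no sector splitting; all parameters illustrative):

```python
def redistribute_angles(angles, mu_weights, max_shift=5.0):
    """Respace evenly distributed control points so each segment's angular
    interval is proportional to its MU weight, clipping every control
    point to within +/- max_shift degrees of its original angle.

    angles     -- original control-point angles (len n)
    mu_weights -- MU weight of each segment between them (len n-1)
    """
    total_span = angles[-1] - angles[0]
    total_mu = sum(mu_weights)
    # ideal boundaries: cumulative MU fraction mapped onto the arc span
    new_angles, cum = [angles[0]], 0.0
    for w in mu_weights[:-1]:
        cum += w
        new_angles.append(angles[0] + total_span * cum / total_mu)
    new_angles.append(angles[-1])
    # cap each control point's shift at max_shift
    return [min(max(na, a - max_shift), a + max_shift)
            for na, a in zip(new_angles, angles)]

# five control points over 0-180 deg; the heavy second segment widens
print(redistribute_angles([0, 45, 90, 135, 180], [1, 4, 1, 1]))
```

    With the weights above, the high-MU segment between 45° and 90° widens from 45° to 55° of arc (boundaries shifted to 40° and 95°), while no control point moves more than the 5° cap.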

  4. Note: Arc discharge plasma source with plane segmented LaB₆ cathode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhmetov, T. D., E-mail: t.d.akhmetov@inp.nsk.su; Davydenko, V. I.; Ivanov, A. A.

    2016-05-15

    A plane cathode composed of close-packed hexagonal LaB₆ (lanthanum hexaboride) segments is described. The 6 cm diameter circular cathode is heated by radiation from a graphite foil flat spiral. The cathode, along with a hollow copper anode, is used for arc discharge plasma production in a newly developed linear plasma device. A separately powered coil located around the anode is used to change the magnetic field strength and geometry in the anode region. Different discharge regimes were realized using this coil.

  5. An arc tangent function demodulation method of fiber-optic Fabry-Perot high-temperature pressure sensor

    NASA Astrophysics Data System (ADS)

    Ren, Qianyu; Li, Junhong; Hong, Yingping; Jia, Pinggang; Xiong, Jijun

    2017-09-01

    A new demodulation algorithm for the fiber-optic Fabry-Perot cavity length, based on the phase generated carrier (PGC) technique, is proposed in this paper; it can be applied to the high-temperature pressure sensor. The new algorithm, based on the arc tangent function, obtains two orthogonal signals from an optical system and is implemented on a field-programmable gate array (FPGA) to overcome the range limit of the original PGC arc tangent demodulation algorithm. Simulation and analysis are also carried out. The analysis of demodulation speed and precision, simulations with different numbers of sampling points, and measurement results from the pressure sensor show that the arc tangent demodulation method yields good results: a 1 MHz single-sample processing rate and less than 1% error, demonstrating its practical feasibility for cavity-length demodulation of the fiber-optic Fabry-Perot high-temperature pressure sensor.
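    The arc tangent demodulation of two orthogonal (quadrature) signals can be sketched as follows; the unwrapping step is one common way to lift the ±π range limit and is an assumption here, not the paper's exact FPGA design.

```python
import math

def demodulate(i_signal, q_signal):
    """Recover phase from in-phase and quadrature samples via atan2,
    then unwrap 2*pi jumps between consecutive samples."""
    phase = [math.atan2(q, i) for i, q in zip(i_signal, q_signal)]
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))   # bring step into (-pi, pi]
        out.append(out[-1] + d)
    return out

# a phase ramp from 0 to 3*pi, observed through cos/sin channels
true_phase = [k * 3 * math.pi / 10 for k in range(11)]
i_sig = [math.cos(p) for p in true_phase]
q_sig = [math.sin(p) for p in true_phase]
rec = demodulate(i_sig, q_sig)
print(max(abs(a - b) for a, b in zip(rec, true_phase)) < 1e-9)  # True
```

    Unwrapping is reliable only while the true phase changes by less than π between samples, which is why the sampling rate matters for this class of demodulators.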

  6. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Given the need for prompt and accurate diagnosis of malaria, the current study proposes an unsupervised pixel segmentation based on clustering algorithms to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image to segment the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and to remove unwanted regions such as small background pixels. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that still appear in the image because they are too large to be cleaned by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the cascaded algorithm with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy values compared to the segmentation results provided by the MKM and FCM algorithms.
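    A minimal 1-D sketch of the cascade, with plain k-means standing in for moving k-means (MKM adds center-rebalancing rules not shown here), followed by FCM refinement of the k-means centers:

```python
def kmeans_1d(data, k, iters=20):
    """Plain k-means on 1-D intensities (simplified stand-in for MKM)."""
    centers = [min(data) + (max(data) - min(data)) * i / (k - 1)
               for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def fcm_1d(data, centers, m=2.0, iters=20):
    """Fuzzy c-means refinement seeded with the k-means centers."""
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        centers = [sum(u[n][i] ** m * data[n] for n in range(len(data))) /
                   sum(u[n][i] ** m for n in range(len(data)))
                   for i in range(len(centers))]
    return centers

# two intensity populations: dark background vs. bright stained cells
pix = [10, 12, 11, 14, 200, 205, 198, 202]
seeds = kmeans_1d(pix, 2)
print([round(c) for c in fcm_1d(pix, seeds)])  # [12, 201]
```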

  7. A Corrosion Risk Assessment Model for Underground Piping

    NASA Technical Reports Server (NTRS)

    Datta, Koushik; Fraser, Douglas R.

    2009-01-01

    The Pressure Systems Manager at NASA Ames Research Center (ARC) has embarked on a project to collect data and develop risk assessment models to support risk-informed decision making regarding future inspections of underground pipes at ARC. This paper shows progress in one area of this project: a corrosion risk assessment model for the underground high-pressure air distribution piping system at ARC. It consists of a Corrosion Model for pipe segments, a Pipe Wrap Protection Model, and a Pipe Stress Model for a pipe segment. A Monte Carlo simulation of the combined models provides a distribution of the failure probabilities. Sensitivity study results show that the model uncertainty, or lack of knowledge, is the dominant contributor to the calculated unreliability of the underground piping system. As a result, the Pressure Systems Manager may consider investing resources specifically focused on reducing these uncertainties. Future work includes completing the data collection effort for the existing ground-based pressure systems and applying the risk models to risk-based inspection strategies for the underground pipes at ARC.
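    A toy Monte Carlo in the spirit of the combined models; all distributions and parameters are illustrative, not ARC data.

```python
import random

def monte_carlo_failure(n_trials=100_000, seed=1):
    """Estimate a pipe-segment failure probability by sampling: failure
    when sampled corrosion wall loss exceeds a sampled allowable loss
    from the stress model. Distributions are purely illustrative.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        corrosion_loss = rng.lognormvariate(0.0, 0.5)   # mm, skewed right
        allowable_loss = rng.normalvariate(3.0, 0.6)    # mm
        if corrosion_loss > allowable_loss:
            failures += 1
    return failures / n_trials

p = monte_carlo_failure()
print(0.0 < p < 0.1)  # a small but nonzero failure probability
```

    Repeating the run with different parameter values (the sensitivity study) reveals which input uncertainties dominate the estimated unreliability.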

  8. Algorithm guided outlining of 105 pancreatic cancer liver metastases in Ultrasound.

    PubMed

    Hann, Alexander; Bettac, Lucas; Haenle, Mark M; Graeter, Tilmann; Berger, Andreas W; Dreyhaupt, Jens; Schmalstieg, Dieter; Zoller, Wolfram G; Egger, Jan

    2017-10-06

    Manual segmentation of hepatic metastases in ultrasound images acquired from patients suffering from pancreatic cancer is common practice. Semiautomatic measurements promising assistance in this process are often assessed on a small number of lesions by examiners who already know the algorithm. In this work, we present the application of an algorithm for the segmentation of liver metastases due to pancreatic cancer using a set of 105 different images of metastases. Neither the algorithm nor the two examiners had assessed the images before. The examiners first performed a manual segmentation and, after five weeks, a semiautomatic segmentation using the algorithm. They were satisfied with the semiautomatic segmentation results in up to 90% of the cases. Using the algorithm was significantly faster and resulted in a median Dice similarity score of over 80%. Estimation of the inter-operator variability using the intraclass correlation coefficient was good, at 0.8. In conclusion, the algorithm facilitates fast and accurate segmentation of liver metastases, comparable to the current gold standard of manual segmentation.
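    The Dice similarity score used above compares two segmentations A and B as 2|A∩B|/(|A|+|B|); a minimal example on pixel sets:

```python
def dice(a, b):
    """Dice similarity between two segmentations given as sets of
    pixel coordinates: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
semi_auto = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(manual, semi_auto))  # 0.75
```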

  9. A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations

    PubMed Central

    Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary

    2016-01-01

    There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. 
The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus yielding 128 460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidates of 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index where values greater than 0.7 indicate a good spatial overlap. A probability of segmentation success was 0.85 based on visual verification, and a computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699

  10. Technical Note: A fast online adaptive replanning method for VMAT using flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ates, Ozgur; Ahunbay, Ergun E.; Li, X. Allen, E-mail: ali@mcw.edu

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT research planning system, which enables the input and output of beam and machine parameters of VMAT plans. The SAM algorithm was used to modify multileaf collimator positions for each segment aperture based on the changes of the target from the planning image (CT/MR) to the daily image [CT/CBCT/magnetic resonance imaging (MRI)]. The leaf travel distance was controlled for large shifts to prevent the increase of VMAT delivery time. The SAM algorithm was tested for 11 patient cases including prostate, pancreatic, and lung cancers. For each daily image set, three types of VMAT plans, image-guided radiation therapy (IGRT) repositioning, SAM adaptive, and full-scope reoptimization plans, were generated and compared. Results: The SAM adaptive plans were found to have improved the plan quality in the target and/or critical organs when compared to the IGRT repositioning plans and were comparable to the reoptimization plans based on the data of planning target volume (PTV)-V100 (volume covered by 100% of the prescription dose). For the cases studied, the average PTV-V100 was 98.85% ± 1.13%, 97.61% ± 1.45%, and 92.84% ± 1.61% with FFF beams for the reoptimization, SAM adaptive, and repositioning plans, respectively. The execution of the SAM algorithm takes less than 10 s using 16-CPU (2.6 GHz dual core) hardware. Conclusions: The SAM algorithm can generate adaptive VMAT plans using FFF beams with plan qualities comparable to those of the full-scope reoptimization plans based on daily CT/CBCT/MRI and can be used for online replanning to address interfractional variations.
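    The aperture-morphing idea, shifting each segment's MLC leaf positions to follow the measured target displacement with leaf travel capped, can be sketched in 1-D (real SAM morphs apertures in 2-D; all numbers illustrative):

```python
def morph_aperture(leaf_pairs, shift, max_travel):
    """Shift every MLC leaf pair of a segment aperture by the target
    displacement, capping the travel to limit delivery-time growth.

    leaf_pairs -- list of (left, right) leaf positions in cm
    shift      -- measured target shift along the leaf axis in cm
    max_travel -- maximum allowed leaf travel in cm
    """
    capped = max(-max_travel, min(max_travel, shift))
    return [(left + capped, right + capped) for left, right in leaf_pairs]

# a 3.0 cm shift request capped to 1.5 cm of leaf travel
aperture = [(-2.0, 2.0), (-2.5, 2.5), (-1.0, 1.5)]
print(morph_aperture(aperture, shift=3.0, max_travel=1.5))
```

    The cap is the 1-D analogue of the leaf-travel control mentioned above: without it, a large daily shift would force long leaf excursions and slow the arc delivery.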

  11. Advent of Continents: A New Hypothesis

    PubMed Central

    Tamura, Yoshihiko; Sato, Takeshi; Fujiwara, Toshiya; Kodaira, Shuichi; Nichols, Alexander

    2016-01-01

    The straightforward but unexpected relationship presented here relates crustal thickness to magma type in the Izu-Ogasawara (Bonin) and Aleutian oceanic arcs. Volcanoes along the southern segment of the Izu-Ogasawara arc and the western Aleutian arc (west of Adak) are underlain by thin crust (10–20 km). In contrast, those along the northern segment of the Izu-Ogasawara arc and the eastern Aleutian arc are underlain by crust ~35 km thick. Interestingly, andesite magmas dominate the eruptive products of the former volcanoes, while mostly basaltic lavas erupt from the latter. According to the hypothesis presented here, rising mantle diapirs stall near the base of the oceanic crust at depths controlled by the thickness of the overlying crust. Where the crust is thin, melting occurs at relatively low pressures in the mantle wedge, producing andesitic magmas. Where the crust is thick, melting pressures are higher and only basaltic magmas tend to be produced. The implications of this hypothesis are: (1) the rate of accumulation of continental crust, which is andesitic in composition, would have been greatest soon after subduction initiated on Earth, when most crust was thin; and (2) most andesite magmas erupted on continental crust could be recycled from “primary” andesite originally produced in oceanic arcs. PMID:27669662

  12. Flat panel detector-based cone beam computed tomography with a circle-plus-two-arcs data acquisition orbit: preliminary phantom study.

    PubMed

    Ning, Ruola; Tang, Xiangyang; Conover, David; Yu, Rongfeng

    2003-07-01

    Cone beam computed tomography (CBCT) has been investigated over the past two decades due to its potential advantages over fan beam CT. These advantages include (a) great improvement in data acquisition efficiency, spatial resolution, and spatial resolution uniformity, (b) substantially better utilization of the x-ray photons generated by the x-ray tube compared to fan beam CT, and (c) significant advancement in clinical three-dimensional (3D) CT applications. However, most past studies of CBCT have focused on cone beam data acquisition theories and reconstruction algorithms. The recent development of x-ray flat panel detectors (FPD) has made CBCT imaging feasible and practical. This paper reports a newly built flat panel detector-based CBCT prototype scanner and presents the results of a preliminary evaluation of the prototype through a phantom study. The prototype consisted of an x-ray tube, a flat panel detector, a GE 8800 CT gantry, a patient table and a computer system. The prototype was constructed by modifying a GE 8800 CT gantry such that both a single-circle cone beam acquisition orbit and a circle-plus-two-arcs orbit can be achieved. With a circle-plus-two-arcs orbit, a complete set of cone beam projection data can be obtained, consisting of a set of circle projections and a set of arc projections. Using the prototype scanner, the set of circle projections was acquired by rotating the x-ray tube and the FPD together on the gantry, and the set of arc projections was obtained by tilting the gantry while the x-ray tube and detector were at the 12 and 6 o'clock positions, respectively. A filtered backprojection exact cone beam reconstruction algorithm based on a circle-plus-two-arcs orbit was used for cone beam reconstruction from both the circle and arc projections. The system was first characterized in terms of the linearity and dynamic range of the detector.
Then the uniformity, spatial resolution and low contrast resolution were assessed using different phantoms mainly in the central plane of the cone beam reconstruction. Finally, the reconstruction accuracy of using the circle-plus-two-arcs orbit and its related filtered backprojection cone beam volume CT reconstruction algorithm was evaluated with a specially designed disk phantom. The results obtained using the new cone beam acquisition orbit and the related reconstruction algorithm were compared to those obtained using a single-circle cone beam geometry and Feldkamp's algorithm in terms of reconstruction accuracy. The results of the study demonstrate that the circle-plus-two-arcs cone beam orbit is achievable in practice. Also, the reconstruction accuracy of cone beam reconstruction is significantly improved with the circle-plus-two-arcs orbit and its related exact CB-FPB algorithm, as compared to using a single circle cone beam orbit and Feldkamp's algorithm.

  13. Reconstructing in space and time the closure of the middle and western segments of the Bangong-Nujiang Tethyan Ocean in the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Fan, Jian-Jun; Li, Cai; Wang, Ming; Xie, Chao-Ming

    2018-01-01

    When and how the Bangong-Nujiang Tethyan Ocean closed is a highly controversial subject. In this paper, we present a detailed study and review of the Cretaceous ophiolites, ocean islands, and flysch deposits in the middle and western segments of the Bangong-Nujiang suture zone (BNSZ), and the Cretaceous volcanic rocks, late Mesozoic sediments, and unconformities within the BNSZ and surrounding areas. Our aim was to reconstruct the spatial-temporal patterns of the closing of the middle and western segments of the Bangong-Nujiang Tethyan Ocean. Our conclusion is that the closure of the ocean started during the Late Jurassic and was mainly complete by the end of the Early Cretaceous. The closure of the ocean involved both "longitudinal diachronous closure" from north to south and "transverse diachronous closure" from east to west. The spatial-temporal patterns of the closure process can be summarized as follows: the development of the Bangong-Nujiang Tethyan oceanic lithosphere and its subduction started before the Late Jurassic; after the Late Jurassic, the ocean began to close because of the compressional regime surrounding the BNSZ; along the northern margin of the Bangong-Nujiang Tethyan Ocean, collisions involving the arcs, back-arc basins, and marginal basins of a multi-arc basin system first took place during the Late Jurassic-early Early Cretaceous, resulting in regional uplift and the regional unconformity along the northern margin of the ocean and in the Southern Qiangtang Terrane on the northern side of the ocean. However, the closure of the Bangong-Nujiang Tethyan Ocean cannot be attributed to these arc-arc and arc-continent collisions, because subduction and the development of the Bangong-Nujiang Tethyan oceanic lithosphere continued until the late Early Cretaceous. 
The gradual closure of the middle and western segments of Bangong-Nujiang Tethyan Ocean was diachronous from east to west, starting in the east in the middle Early Cretaceous, and being mainly complete by the end of the Early Cretaceous. The BNSZ and its surrounding areas underwent orogenic uplift during the Late Cretaceous.

  14. Alignment and Integration of Lightweight Mirror Segments

    NASA Technical Reports Server (NTRS)

    Evans, Tyler; Biskach, Michael; Mazzarella, Jim; McClelland, Ryan; Saha, Timo; Zhang, Will; Chan, Kai-Wing

    2011-01-01

    The optics for the International X-Ray Observatory (IXO) require alignment and integration of about fourteen thousand thin mirror segments to achieve the mission goal of 3.0 square meters of effective area at 1.25 keV with an angular resolution of five arc-seconds. These mirror segments are 0.4 mm thick, and 200 to 400 mm in size, which makes it difficult not to impart distortion at the sub-arc-second level. This paper outlines the precise alignment, permanent bonding, and verification testing techniques developed at NASA's Goddard Space Flight Center (GSFC). Improvements in alignment include new hardware and automation software. Improvements in bonding include two new module simulators into which mirrors are bonded: a glass housing for proving single-pair bonding, and a Kovar module for bonding multiple pairs of mirrors. Three separate bonding trials were x-ray tested, producing results meeting the requirement of sub-ten-arc-second alignment. This paper will highlight these recent advances in alignment, testing, and bonding techniques and the exciting developments in thin x-ray optic technology.

  15. Improving Brain Magnetic Resonance Image (MRI) Segmentation via a Novel Algorithm based on Genetic and Regional Growth

    PubMed Central

    A., Javadpour; A., Mohammadi

    2016-01-01

    Background Regarding the importance of right diagnosis in medical applications, various methods have been exploited for processing medical images so far. Segmentation is the method used to analyze anatomical structures in medical imaging. Objective This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic and regional growth. Methods Among medical imaging methods, brain MRI segmentation is important due to its high contrast between soft tissues, non-invasiveness, and high spatial resolution. Size variations of brain tissues are often accompanied by various diseases such as Alzheimer's disease. As our knowledge about the relation between various brain diseases and deviation of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, the regional growth method and automatic selection of initial points by a genetic algorithm are used to introduce a new method for MRI segmentation. Primary pixels and the similarity criterion are selected automatically by the genetic algorithm to maximize the accuracy and validity of image segmentation. Results By using the genetic algorithm and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and the results were compared with those of regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion The study concluded that the proposed algorithm could reduce segmentation error effectively and help us to diagnose brain diseases. PMID:27672629
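The regional growth step can be sketched independently of the genetic seed selection. This is a generic illustration, not the authors' implementation: the seed and the fixed intensity threshold stand in for the seed points and similarity criterion that the paper selects by genetic algorithm.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from a seed pixel over a 2-D intensity grid,
    adding 4-connected neighbors whose intensity differs from the
    seed value by at most `threshold` (breadth-first traversal)."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= threshold:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# a toy image with a dark region (around 10) and a bright one (around 50)
img = [
    [10, 11, 50, 52],
    [12, 10, 51, 53],
    [11, 12, 10, 55],
]
print(sorted(region_grow(img, (0, 0), 5)))
```

A poor seed or threshold leaks the region into the wrong tissue, which is exactly why the paper optimizes both with a genetic algorithm.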

  16. Incorporating geometric ray tracing to generate initial conditions for intensity modulated arc therapy optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, Mike; Gladwish, Adam; Craig, Jeff

    2008-07-15

    Purpose and background: Intensity modulated arc therapy (IMAT) is a rotational variant of intensity modulated radiation therapy (IMRT) that is achieved by allowing the multileaf collimator (MLC) positions to vary as the gantry rotates around the patient. This work describes a method to generate an IMAT plan through the use of a fast ray tracing technique based on dosimetric and geometric information for setting initial MLC leaf positions prior to final IMAT optimization. Methods and materials: Three steps were used to generate an IMAT plan. The first step was to generate arcs based on anatomical contours. The second step was to generate ray importance factor (RIF) maps by ray tracing the dose distribution inside the planning target volume (PTV) to modify the MLC leaf positions of the anatomical arcs to reduce the maximum dose inside the PTV. The RIF maps were also segmented to create a new set of arcs to improve the dose to low dose voxels within the PTV. In the third step, the MLC leaf positions from all arcs were put through a leaf position optimization (LPO) algorithm and brought into a fast Monte Carlo dose calculation engine for a final dose calculation. The method was applied to two phantom cases, a clinical prostate case and the Radiological Physics Center (RPC)'s head and neck phantom. The authors assessed the plan improvements achieved by each step and compared plans with and without using RIF. They also compared the IMAT plan with an IMRT plan for the RPC phantom. Results: All plans that incorporated RIF and LPO had lower objective function values than those that incorporated LPO only. The objective function value was reduced by about 15% after the generation of RIF arcs and 52% after generation of RIF arcs and leaf position optimization. The IMAT plan for the RPC phantom had similar dose coverage for PTV1 and PTV2 (the same dose volume histogram curves), but slightly lower dose to the normal tissues compared to a six-field IMRT plan.
Conclusion: The use of a ray importance factor can generate initial IMAT arcs efficiently for further MLC leaf position optimization to obtain a more favorable IMAT plan.

  17. Output-Sensitive Construction of Reeb Graphs.

    PubMed

    Doraiswamy, H; Natarajan, V

    2012-01-01

    The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially-aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
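The first step, locating critical points, can be illustrated for a piecewise-linear scalar function on a graph. This is a minimal sketch that only distinguishes minima, maxima, and regular points by comparing each vertex with its neighbors; the full algorithm additionally classifies saddles from the connectivity of the lower link, which is omitted here.

```python
def classify_vertices(values, adjacency):
    """Classify each vertex of a graph carrying a scalar function as a
    minimum, maximum, or regular point by comparing it with its
    neighbors. Ties are broken by vertex index (simulation of
    simplicity), so the classification is always well defined."""
    def below(u, v):
        return (values[u], u) < (values[v], v)
    kinds = {}
    for v, nbrs in adjacency.items():
        lower = [u for u in nbrs if below(u, v)]
        upper = [u for u in nbrs if below(v, u)]
        if not lower:
            kinds[v] = "minimum"
        elif not upper:
            kinds[v] = "maximum"
        else:
            kinds[v] = "regular"
    return kinds

# a path 0-1-2-3 with function values 0, 2, 1, 3: the function
# alternates, so every vertex is critical
values = [0.0, 2.0, 1.0, 3.0]
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(classify_vertices(values, adjacency))
```

In the Reeb graph these critical points become nodes; the second step of the paper's algorithm then connects them with arcs by a localized search.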

  18. An ablative pulsed plasma thruster with a segmented anode

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Ren, Junxue; Tang, Haibin; Ling, William Yeong Liang; York, Thomas M.

    2018-01-01

    An ablative pulsed plasma thruster (APPT) design with a ‘segmented anode’ is proposed in this paper. We aim to examine the effect that this asymmetric electrode configuration (a normal cathode and a segmented anode) has on the performance of an APPT. The magnetic field of the discharge arc, plasma density in the exit plume, impulse bit, and thrust efficiency were studied using a magnetic probe, Langmuir probe, thrust stand, and mass bit measurements, respectively. When compared with conventional symmetric parallel electrodes, the segmented anode APPT shows an improvement in the impulse bit of up to 28%. The thrust efficiency is also improved by 49% (from 5.3% to 7.9% for conventional and segmented designs, respectively). Long-exposure broadband emission images of the discharge morphology show that compared with a normal anode, a segmented anode results in clear differences in the luminous discharge morphology and better collimation of the plasma. The magnetic probe data indicate that the segmented anode APPT exhibits a higher current density in the discharge arc. Furthermore, Langmuir probe data collected from the central exit plane show that the peak electron density is 75% higher than with conventional parallel electrodes. These results are believed to be fundamental to the physical mechanisms behind the increased impulse bit of an APPT with a segmented electrode.

  19. Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.

    PubMed

    Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu

    2017-05-23

    This paper presents an investigation of natural inspired intelligent computing and its corresponding application towards visual information processing systems for viticulture. The paper has three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of natural inspired computing algorithms based on artificial immune system (AIS) techniques for grape berry detection; and (3) the application of the developed algorithms towards real-world grape berry images captured in natural conditions from vineyards in Australia. The AIS algorithms in (2) were developed based on a nature-inspired clonal selection algorithm (CSA) which is able to detect the arcs in the berry images with precision, based on a fitness model. The detected arcs are then extended to perform multiple-arc and ring detection for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms like the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71 compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD), respectively.
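The F-score used in the comparison is the harmonic mean of precision and recall. A minimal sketch with hypothetical detection counts (the counts below are illustrative, not the paper's data):

```python
def f_score(tp, fp, fn):
    """F-score from true positive, false positive, and false negative
    counts: the harmonic mean of precision and recall, which also
    equals 2*tp / (2*tp + fp + fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical berry-detection counts: 5 berries found correctly,
# 3 spurious detections, 2 berries missed
print(f_score(5, 3, 2))
```

A single scalar like this makes it straightforward to rank the AIS, CHT, and RPCD detectors on the same test images.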

  20. A novel line segment detection algorithm based on graph search

    NASA Astrophysics Data System (ADS)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To overcome the problem of extracting line segments from an image, a line segment detection method based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. The adjacency relationships among the candidate segments are depicted by a graph model, on which a depth-first search is employed to determine which adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. The comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
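The graph-search grouping step can be sketched as follows. This is not the authors' code: the adjacency test (endpoint distance plus direction similarity) and its thresholds are assumed criteria chosen for illustration, and the final least-squares fit is left out.

```python
import math

def seg_angle(seg):
    """Undirected orientation of a segment, in [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def adjacent(a, b, max_gap, max_angle):
    """Two segments are adjacent if their orientations agree within
    `max_angle` radians and some pair of endpoints is within `max_gap`."""
    diff = abs(seg_angle(a) - seg_angle(b))
    if min(diff, math.pi - diff) > max_angle:
        return False
    return any(math.dist(p, q) <= max_gap for p in a for q in b)

def merge_groups(segments, max_gap=2.0, max_angle=0.2):
    """Depth-first search over the segment adjacency graph, returning
    groups of candidate segments to merge into one line."""
    seen, groups = set(), []
    for i in range(len(segments)):
        if i in seen:
            continue
        stack, comp = [i], []
        seen.add(i)
        while stack:
            j = stack.pop()
            comp.append(j)
            for k in range(len(segments)):
                if k not in seen and adjacent(segments[j], segments[k],
                                              max_gap, max_angle):
                    seen.add(k)
                    stack.append(k)
        groups.append(sorted(comp))
    return groups

# two nearly collinear horizontal pieces plus one vertical segment
segs = [((0, 0), (5, 0)), ((6, 0), (10, 0)), ((0, 5), (0, 10))]
print(merge_groups(segs))
```

Each resulting group would then be fitted with a single least-squares line, as the abstract describes.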

  1. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
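The comparison amounts to averaging a Monte Carlo dose map over each segmented region and reporting the relative difference. A minimal sketch with a made-up flattened dose map and masks (the values are illustrative, not from the study):

```python
def mean_organ_dose(dose_map, mask):
    """Mean dose over the voxels flagged by a binary segmentation mask."""
    doses = [d for d, m in zip(dose_map, mask) if m]
    return sum(doses) / len(doses)

def percent_error(auto_mask, expert_mask, dose_map):
    """Signed percent error of the automated dose estimate relative to
    the expert-segmentation estimate."""
    auto = mean_organ_dose(dose_map, auto_mask)
    expert = mean_organ_dose(dose_map, expert_mask)
    return 100.0 * (auto - expert) / expert

dose = [2.0, 2.1, 1.9, 2.2, 0.5]   # hypothetical voxel doses (mGy)
expert = [1, 1, 1, 1, 0]           # expert organ contour
auto = [1, 1, 1, 0, 0]             # auto contour misses one organ voxel
print(round(percent_error(auto, expert, dose), 2))
```

Because the dose inside an organ is fairly uniform, a small boundary error changes the mean only slightly, which is the hypothesis the study tests.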

  2. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  3. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One of the specific applications is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation since they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically non-feasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is experimented on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
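For reference, the core of a conventional FCM iteration looks like the sketch below. This is a textbook two-cluster version on 1-D feature data; the paper's quantization-based modification is not reproduced here.

```python
def fuzzy_c_means(data, m=2.0, iters=30):
    """Plain two-cluster fuzzy C-means on 1-D data: alternate the
    membership update u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)) with the
    membership-weighted centroid update for `iters` iterations."""
    centers = [min(data), max(data)]       # simple deterministic init
    c, n = len(centers), len(data)
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        centers = [sum(u[k][i] ** m * data[k] for k in range(n))
                   / sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return sorted(centers)

# two well-separated intensity clusters around 1.0 and 5.0
data = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2]
print(fuzzy_c_means(data))
```

The alternating updates are what make plain FCM slow to converge on large images, which motivates the paper's modification.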

  4. Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT

    PubMed Central

    Guo, Wei; Li, Qiang

    2014-01-01

    Purpose: The purpose of this study is to reveal how the performance of a lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. Methods: The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to a three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which would be used to effectively distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. Therefore, there were six similar CAD schemes, in which only the segmentation algorithms were different. The six segmentation algorithms included the fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), Gaussian mixture model (GMM), Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. Results: For the segmentation algorithms of FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodules and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively.
With these segmentation results of the six segmentation algorithms, the six CAD schemes reported 4.4, 8.8, 3.4, 9.2, 13.6, and 10.4 false positives per CT scan at a sensitivity of 80%. Conclusions: When multiple algorithms are available for segmenting nodule candidates in a CAD scheme, the “optimal” segmentation algorithm did not necessarily lead to the “optimal” CAD detection performance. PMID:25186393
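The Jaccard index used to score the segmentations is straightforward to compute from binary masks. A minimal sketch on flattened masks (the masks are illustrative, not study data):

```python
def jaccard(mask_a, mask_b):
    """Intersection over union of two equal-length binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

segmented = [1, 1, 1, 0, 0]   # algorithm's nodule mask
truth = [0, 1, 1, 1, 0]       # ground-truth mask
print(jaccard(segmented, truth))  # 2 shared pixels of 4 occupied
```

A Jaccard index of 1.0 means a perfect overlap; the study's best value of 0.601 (FT) shows how imperfect even the strongest segmenter was, yet FT did not yield the fewest CAD false positives.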

  5. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
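The correlation between fluorescence detection channels that drives the segmentation can be quantified per pixel neighborhood. This is a generic sketch of the Pearson correlation on two channel intensity vectors, not the authors' pipeline; thresholding this coefficient over local windows is one plausible way to mark object pixels.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# intensities of two fluorescence channels over the same pixels:
# a correlated pair (plausibly the same nucleus) and an
# anti-correlated pair (plausibly background vs. signal)
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))
```

Pixels where the channels co-vary strongly would be grouped into one object, which is the idea behind using inter-channel correlation for segmentation.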

  6. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis such as semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing and its basic ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of this algorithm: it analyzes existing segmentation algorithms and selects the optimal watershed algorithm as an initialization. Meanwhile, the algorithm is modified by adjusting an area parameter and then combining the area parameter with a heterogeneity parameter. After that, several experiments are carried out to show that the modified FNEA algorithm achieves a better segmentation result than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.

  7. A validation framework for brain tumor segmentation.

    PubMed

    Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K

    2007-10-01

    We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentation of brain tumors performed by four independent experts, 3) segmentation of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existing segmentation methods. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentation that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.

  8. The Detection and Statistics of Giant Arcs behind CLASH Clusters

    NASA Astrophysics Data System (ADS)

    Xu, Bingxiao; Postman, Marc; Meneghetti, Massimo; Seitz, Stella; Zitrin, Adi; Merten, Julian; Maoz, Dani; Frye, Brenda; Umetsu, Keiichi; Zheng, Wei; Bradley, Larry; Vega, Jesus; Koekemoer, Anton

    2016-02-01

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift zs = 1.9 with 33% of the detected arcs having zs > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c-M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.
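The giant-arc selection criteria quoted above (length ≥ 6″ and length-to-width ratio ≥ 7) are easy to apply to a detection catalog. A small sketch with made-up arc measurements (the catalog values are hypothetical):

```python
def count_giant_arcs(arcs, min_length=6.0, min_ratio=7.0):
    """Count detections meeting the giant-arc criteria: length (in
    arcsec) at least `min_length` and length-to-width ratio at least
    `min_ratio`."""
    return sum(1 for length, width in arcs
               if length >= min_length and length / width >= min_ratio)

# (length, width) in arcsec for four hypothetical detections; only the
# first is both long enough and elongated enough
catalog = [(8.0, 1.0), (6.5, 1.0), (5.0, 0.5), (9.0, 2.0)]
print(count_giant_arcs(catalog))
```

Dividing such a count by the number of clusters surveyed gives the per-cluster lensing efficiency reported in the abstract.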

  9. Differentialless geometry of plane curves

    NASA Astrophysics Data System (ADS)

    Latecki, Longin J.; Rosenfeld, Azriel

    1997-10-01

    We introduce a class of planar arcs and curves, called tame arcs, which is general enough to describe the boundaries of planar real objects. A tame arc can have smooth parts as well as sharp corners; thus a polygonal arc is tame. On the other hand, this class of arcs is restrictive enough to rule out pathological arcs which have infinitely many inflections or which turn infinitely often: a tame arc can have only finitely many inflections, and its total absolute turn must be finite. In order to relate boundary properties of discrete objects obtained by segmenting digital images to the corresponding properties of their continuous originals, the theory of tame arcs is based on concepts that can be directly transferred from the continuous to the discrete domain. A tame arc is composed of a finite number of supported arcs. We define supported digital arcs and motivate their definition by the fact that they can be obtained by digitizing continuous supported arcs. Every digital arc is tame, since it contains a finite number of points, and therefore it can be decomposed into a finite number of supported digital arcs.

  10. Segmentation of plate coupling, fate of subduction fluids, and modes of arc magmatism in Cascadia, inferred from magnetotelluric resistivity

    USGS Publications Warehouse

    Wannamaker, Philip E.; Evans, Rob L.; Bedrosian, Paul A.; Unsworth, Martyn J.; Maris, Virginie; McGary, R. Shane

    2014-01-01

    Five magnetotelluric (MT) profiles have been acquired across the Cascadia subduction system and transformed using 2-D and 3-D nonlinear inversion to yield electrical resistivity cross sections to depths of ∼200 km. Distinct changes in plate coupling, subduction fluid evolution, and modes of arc magmatism along the length of Cascadia are clearly expressed in the resistivity structure. Relatively high resistivities under the coasts of northern and southern Cascadia correlate with elevated degrees of inferred plate locking, and suggest fluid- and sediment-deficient conditions. In contrast, the north-central Oregon coastal structure is quite conductive from the plate interface to shallow depths offshore, correlating with poor plate locking and the possible presence of subducted sediments. Low-resistivity fluidized zones develop at slab depths of 35–40 km starting ∼100 km west of the arc on all profiles, and are interpreted to represent prograde metamorphic fluid release from the subducting slab. The fluids rise to forearc Moho levels, and sometimes shallower, as the arc is approached. The zones begin close to clusters of low-frequency earthquakes, suggesting fluid controls on the transition to steady sliding. Under the northern and southern Cascadia arc segments, low upper mantle resistivities are consistent with flux melting above the slab plus possible deep convective backarc upwelling toward the arc. In central Cascadia, extensional deformation is interpreted to segregate upper mantle melts leading to underplating and low resistivities at Moho to lower crustal levels below the arc and nearby backarc. The low- to high-temperature mantle wedge transition lies slightly trenchward of the arc.

  11. Numerical simulation of a helical shape electric arc in the external axial magnetic field

    NASA Astrophysics Data System (ADS)

    Urusov, R. M.; Urusova, I. R.

    2016-10-01

    Within the framework of a non-stationary three-dimensional mathematical model, in the approximation of partial local thermodynamic equilibrium, the characteristics of a DC electric arc burning in a cylindrical channel in a uniform external axial magnetic field were calculated numerically. A method for the numerical simulation of a helical-shape arc in a uniform external axial magnetic field is proposed. The method consists in supplementing the computational algorithm with a "scheme" analog of fluctuations in the electron temperature. This "scheme" analog of fluctuations amplifies a weak numerical asymmetry of the electron temperature distribution that occurs randomly in the course of computing; the asymmetry can be "picked up" by the external magnetic field and continues to grow up to a certain value sufficient for the formation of the helical structure of the arc column. In the absence of fluctuations in the computational algorithm, the arc column in the external axial magnetic field maintains cylindrical axial symmetry, and a helical form of the arc is not observed.

  12. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images using magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
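The appeal of the MDP prior is that the number of clusters need not be fixed in advance. Its Chinese restaurant process view can be sketched as follows; this is a generic illustration of the prior, not the paper's sampler, and the concentration parameter alpha is a hypothetical choice.

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample cluster labels for n items from a Dirichlet process prior
    via the Chinese restaurant process: item i joins existing cluster k
    with probability count_k / (i + alpha), or opens a new cluster with
    probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    labels, counts = [], []
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc, k = 0.0, len(counts)          # default: open a new cluster
        for j, c in enumerate(counts):
            acc += c
            if r < acc:
                k = j
                break
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        labels.append(k)
    return labels

labels = chinese_restaurant_process(50, alpha=1.0)
print(max(labels) + 1, "clusters emerged for 50 items")
```

Because new clusters can always open, the model adapts the cluster count to the data, which is exactly the property exploited for tumor images with unknown tissue diversity.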

  13. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    To address the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. First, a bee colony algorithm is used to optimize the parameters of the wavelet neural network, yielding the network structure, initial weights, and threshold values; this allows training to converge quickly to high precision and avoids falling into local extrema. Then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, achieving automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and that its segmentation is more accurate and effective than a traditional BP neural network (back-propagation neural network: a multilayer feed-forward network trained with the error back-propagation algorithm).
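
    The maximum-entropy criterion can be illustrated independently of the wavelet network. The sketch below (a generic Kapur-style maximum-entropy threshold selection, not the authors' implementation) picks the gray level that maximizes the summed entropies of the background and foreground histograms:

```python
import numpy as np

def max_entropy_threshold(image):
    """Kapur-style maximum-entropy threshold for an 8-bit grayscale image.

    Chooses the threshold t that maximizes the sum of the entropies of the
    background (levels < t) and foreground (levels >= t) distributions.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        bg, fg = p[:t], p[t:]
        wb, wf = bg.sum(), fg.sum()
        if wb == 0 or wf == 0:
            continue
        pb, pf = bg[bg > 0] / wb, fg[fg > 0] / wf
        h = -(pb * np.log(pb)).sum() - (pf * np.log(pf)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy image: dark background around 50, bright object around 200
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
img = (img + np.random.default_rng(0).integers(-10, 10, img.size)).astype(np.uint8)
print(max_entropy_threshold(img.reshape(25, 40)))  # a level in the gap between the clusters
```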

  14. Construction of Continental Crust at the Central American and Philippines Arc Systems

    NASA Astrophysics Data System (ADS)

    Whattam, S. A.; Stern, R. J.

    2016-12-01

    Whether or not magmatic arcs evolve compositionally with time, and the processes responsible, remain controversial. Resolving this question requires reconstructing arc geochemical evolution at the level of discrete arc systems, as has been done for IBM, Central America, and the Greater Antilles. Emphasis should be on arcs built on oceanic crust, because interaction with continental crust complicates interpretations. The Philippines are a particularly attractive target because they may be the best example of proto-continental crust being generated and processed in Cretaceous and younger time. Here we show how this question could be addressed for the Philippines, using the well-studied Central American Volcanic Arc System (CAVAS) as an example. For the CAVAS, we avoided the northern arc segment because its sections are (Guatemala) or may be (El Salvador) built on continental crust. Geochemical and isotopic data were compiled for 1031 samples of lavas and intrusive rocks from the 1100 km-long segment built on thickened, initially plume-derived oceanic crust (Panama, Costa Rica, Nicaragua), covering its 75-million-year lifespan. The most striking observation is the overall evolution of the CAVAS toward more incompatible-element-enriched and ultimately continental-like compositions with time. Models entailing progressive arc magmatic enrichment are generally supported by the CAVAS record. Progressive enrichment of the oceanic CAVAS with time reflects changes in mantle wedge composition and decreased melting due to arc crust thickening, which was kick-started by the involvement of enriched plume mantle. Progressive crustal thickening and associated changes in the sub-arc thermal regime resulted in decreasing degrees of partial melting over time, which allowed progressive enrichment of the CAVAS and ultimately the production of continental-like crust in Panama and Costa Rica by 16-10 Ma.
Our similar study of the Philippine Arc system is in its infancy, but earlier studies have shown that older magmatic rocks are tholeiitic and MORB-like, whereas younger ones are invariably calc-alkaline and arc-like. Results of the Philippines Arc study will be compared with the CAVAS and other magmatic arc systems composed of continental crust.

  15. Automatic segmentation of bones from digital hand radiographs

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Taira, Ricky K.; Shim, Hyeonjoon; Keaton, Patricia

    1995-05-01

    The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The algorithm uses an object-oriented approach comprising several stages, beginning with the most general objects to be segmented, such as the outline of the hand against the background, and proceeding in succession to the most specific object, such as a particular phalangeal bone of a digit. Each stage applies custom operators tailored to its specific needs, which aids accuracy. The method is further supported by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. Shape models, 1-D wrist profiles, and an interpretation tree are used to map model and data contour segments. Shape analysis is performed using an arc-length orientation transform. The method was tested on close to 340 phalangeal and epiphyseal objects segmented from 17 pediatric hand images obtained from our clinical PACS. Patient ages range from 2 to 16 years. A pediatric radiologist preliminarily assessed the object contours, which were found to be accurate to within 95% for cases with non-fused bones and to within 85% for cases with fused bones. With accurate and robust results, the method can be applied to areas such as the determination of bone age, the development of a normal hand atlas, and the characterization of many congenital and acquired growth diseases. Furthermore, this method's architecture can be applied to other image segmentation problems.

  16. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results can be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissue of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both the accuracy and the robustness of serial image computing, while at the same time producing longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes in neurological disorders. PMID:26566399

  17. Structural and Functional Changes Associated with Normal and Abnormal Fundus Autofluorescence in Patients with Retinitis Pigmentosa

    PubMed Central

    Greenstein, Vivienne C.; Duncker, Tobias; Holopigian, Karen; Carr, Ronald E.; Greenberg, Jonathan; Tsang, Stephen H.; Hood, Donald C.

    2013-01-01

    Purpose To analyze the structure and visual function of regions bordering the hyperautofluorescent rings/arcs in retinitis pigmentosa (RP). Methods Twenty-one RP patients (21 eyes) with rings/arcs and 21 normal subjects (21 eyes) were studied. Visual sensitivity in the central 10° was measured with microperimetry. Retinal structure was evaluated with spectral domain optical coherence tomography (SD-OCT). The distance from the fovea to disruption/loss of the inner/outer segment (IS/OS) junction and the thicknesses of the total receptor plus retinal pigment epithelium (RPE) complex (R+) and the outer segment plus RPE complex (OS+) layers were measured. Results were compared to measurements of the distance from the fovea to the inner and outer borders of the ring/arc seen on fundus autofluorescence (FAF). Results Disruption/loss of the IS/OS junction occurred close to the inner border of the ring/arc, and in 8 eyes it was closer to the fovea. For 19 eyes, OS+ and R+ thicknesses were significantly decreased at locations closer to the fovea than the inner border of hyperautofluorescence. Mean visual sensitivity was decreased inside, across, and outside the ring/arc by 3.5 ± 3.8, 8.9 ± 4.8, and 17.0 ± 2.4 dB, respectively. Conclusions Structural and functional changes can occur inside the hyperautofluorescent ring/arc in RP. PMID:21909055

  18. Elaboration of a semi-automated algorithm for brain arteriovenous malformation segmentation: initial results.

    PubMed

    Clarençon, Frédéric; Maizeroi-Eugène, Franck; Bresson, Damien; Maingreaud, Flavien; Sourour, Nader; Couquet, Claude; Ayoub, David; Chiras, Jacques; Yardin, Catherine; Mounayer, Charbel

    2015-02-01

    The purpose of our study was to distinguish the different components of a brain arteriovenous malformation (bAVM) on 3D rotational angiography (3D-RA) using a semi-automated segmentation algorithm. Data from 3D-RA of 15 patients (8 males, 7 females; 14 supratentorial bAVMs, 1 infratentorial) were used to test the algorithm. Segmentation was performed in two steps: (1) nidus segmentation by propagation (vertical then horizontal) of tagging on the reference slice (i.e., the slice on which the nidus has the largest surface); (2) contiguity propagation (based on density and variance) from tagging of arteries and veins distant from the nidus. Segmentation quality was evaluated by comparison with six-frame/s DSA by two independent reviewers. Analysis of supraselective microcatheterisation was performed to resolve discrepancies. Mean duration of bAVM segmentation was 64 ± 26 min. Quality of segmentation was rated good or fair in 93% of cases. Segmentation gave better results than six-frame/s DSA for the depiction of a focal ectasia on the main draining vein and for the evaluation of the venous drainage pattern. This segmentation algorithm is a promising tool that may help improve the understanding of bAVM angio-architecture, especially the venous drainage. • The segmentation algorithm allows the bAVM's components to be distinguished • The algorithm shows the venous drainage of bAVMs more precisely • The algorithm may help reduce the treatment-related complication rate.

  19. Real-time segmentation of burst suppression patterns in critical care EEG monitoring

    PubMed Central

    Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.

    2014-01-01

    Objective Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Results Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
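
    The core of the method, thresholding local voltage variance, can be sketched in a few lines. The window length and variance threshold below are illustrative placeholders, not the paper's validated settings:

```python
import numpy as np

def segment_suppressions(eeg, fs, win_s=0.5, var_thresh=25.0):
    """Label each sample as suppression (1) or burst (0) by thresholding the
    local variance of the EEG in a sliding window.

    eeg: 1-D signal in microvolts; fs: sampling rate in Hz. The window length
    and threshold here are illustrative defaults, not published parameters.
    """
    n = len(eeg)
    half = max(1, int(win_s * fs) // 2)
    labels = np.zeros(n, dtype=int)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        if np.var(eeg[lo:hi]) < var_thresh:
            labels[i] = 1  # low local variance -> suppression
    return labels

fs = 100
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic trace: high-amplitude "burst" for 2 s, near-flat "suppression" after
eeg = np.where(t < 2,
               50 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 10, t.size),
               rng.normal(0, 2, t.size))
labels = segment_suppressions(eeg, fs)
print("suppression fraction:", labels.mean())
```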

  20. Real-time segmentation of burst suppression patterns in critical care EEG monitoring.

    PubMed

    Brandon Westover, M; Shafi, Mouhsin M; Ching, Shinung; Chemali, Jessica J; Purdon, Patrick L; Cash, Sydney S; Brown, Emery N

    2013-09-30

    Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose discrepancies between the skeletal and chronological age of a child; it is therefore beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. We therefore propose a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two further segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with those of the disk traverse-subtraction filter using five indices: misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods is discussed. The results show that our method performs better than the other two, and satisfactory segmentation results were obtained with a low standard error.
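
    Of the five comparison indices, misclassification error is the simplest: the fraction of pixels assigned to the wrong class. A minimal sketch on binary masks (the masks here are hypothetical example data, not from the paper):

```python
import numpy as np

def misclassification_error(gt, seg):
    """Misclassification error between two binary masks (1 = foreground).

    ME = 1 - (|background agreement| + |foreground agreement|) / N,
    i.e. simply the fraction of pixels assigned to the wrong class.
    """
    gt = np.asarray(gt, dtype=bool)
    seg = np.asarray(seg, dtype=bool)
    agree = np.count_nonzero(gt == seg)
    return 1.0 - agree / gt.size

gt  = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1]])
seg = np.array([[0, 0, 1, 0],
                [0, 1, 1, 1]])
print(misclassification_error(gt, seg))  # 2 of 8 pixels differ -> 0.25
```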

  2. An algorithm for calculi segmentation on ureteroscopic images.

    PubMed

    Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme

    2011-03-01

    The purpose of this study was to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region-growing algorithm to segment renal calculi on ureteroscopic images. Using real video images, we computed ground truth and compared our segmentation against a reference segmentation using statistics on several image metrics, such as precision, recall, and the Yasnoff measure. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the algorithm correctly detected more than 90% of the calculus surface, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is thus feasible. The next step is the integration of our algorithm into the command scheme of a motorized system to build a complete operating prototype.
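
    A generic intensity-based region-growing scheme of the kind described can be sketched as follows; the tolerance, seed, and 4-connectivity are illustrative choices, and the paper's actual homogeneity criterion may differ:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=20):
    """Grow a region from `seed` (row, col), adding 4-connected neighbours
    whose intensity lies within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(image[nr, nc]) - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy frame: a bright 3x3 "calculus" on a dark background
img = np.full((7, 7), 30, dtype=np.uint8)
img[2:5, 2:5] = 200
mask = region_grow(img, seed=(3, 3))
print(mask.sum())  # -> 9, the region stops at the intensity boundary
```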

  3. An Overview of the Southern Mariana Subduction Factory: Arc, Cross-Chains, and Back-Arc Basin

    NASA Astrophysics Data System (ADS)

    Stern, R. J.; Hargrove, U. S.; Leybourne, M. I.; Pearce, J. A.; Bloomer, S. H.

    2002-12-01

    The Mariana arc system south of 18°N provides three opportunities to study the magmatic outputs of the IBM Subduction Factory: (1) along the magmatic arc; (2) across arc cross-chains; and (3) along the back-arc basin spreading axis. In spite of being located near the population centers of Guam and Saipan, this is a relatively poorly known part of the arc system. There is a clear break in the trend and morphology of the magmatic arc west of the 144°E fault and slab tear, and we surveyed and sampled the region north and east of this break during the Cook 7 expedition in March-April 2001. Systematic morphologic covariations are observed along the arc and back-arc basin magmatic systems, with shallower ridge depths adjacent to more magmatically robust arc segments. Our preliminary results reveal a compositional discontinuity in back-arc basin basalts (BABB) south of a bathymetric break near 15°30'N, with BABB in shallower segments to the north having a strong subduction component (higher Ba/Nb, Rb, Zr, etc.) and deeper regions to the south being more MORB-like. This is close to the morphological break along the magmatic front, with larger (>10^11 m^3) edifices of the Central Island Province north of 16°N and smaller, entirely submarine volcanoes to the south, implying a more robust magmatic budget in the north. Similar variations are observed for cross-chain volcanoes, with smaller ones associated with the smaller, southern arc volcanoes and larger ones associated with the larger arc volcanoes of the Central Island Province. In contrast to the back-arc basin spreading axis, no systematic compositional variations are observed along or across the arc. The arc and cross-chains comprise a coherent, low- to medium-K, dominantly tholeiitic suite. REE patterns show moderate LREE enrichment, with chondrite-normalized La/Yb = 1.5-2. Rear-arc volcanoes are sometimes slightly less fractionated, slightly more potassic, and slightly more LREE-enriched, but these are second-order differences.
The strong increase in K and LREE enrichment and decrease in fluid-mobile elements observed for the Kasuga cross-chain to the north is not observed in the southern cross-chains.

  4. Back-arc extension in the Andaman Sea: Tectonic and magmatic processes imaged by high-precision teleseismic double-difference earthquake relocation

    NASA Astrophysics Data System (ADS)

    Diehl, T.; Waldhauser, F.; Cochran, J. R.; Kamesh Raju, K. A.; Seeber, L.; Schaff, D.; Engdahl, E. R.

    2013-05-01

    The geometry, kinematics, and mode of back-arc extension along the Andaman Sea plate boundary are refined using a new set of significantly improved hypocenters, global centroid moment tensor (CMT) solutions, and high-resolution bathymetry. By applying cross-correlation and double-difference (DD) algorithms to regional and teleseismic waveforms and arrival times from International Seismological Centre and National Earthquake Information Center bulletins (1964-2009), we resolve the fine-scale structure and spatiotemporal behavior of active faults in the Andaman Sea. The new data reveal that back-arc extension is primarily accommodated at the Andaman Back-Arc Spreading Center (ABSC) at 10°, which hosted three major earthquake swarms in 1984, 2006, and 2009. Short-term spreading rates estimated from extensional moment tensors account for less than 10% of the long-term 3.0-3.8 cm/yr spreading rate, indicating that spreading by intrusion and the formation of new crust make up the difference. A spatiotemporal analysis of the swarms and Coulomb-stress modeling show that dike intrusions are the primary driver of brittle failure in the ABSC. While the spreading direction is close to ridge-normal, it is oblique to the adjacent transforms. The resulting component of E-W extension across the transforms is expressed by deep basins on either side of the rift and by a change to extensional faulting along the West Andaman fault system after the Mw = 9.2 Sumatra-Andaman earthquake of 2004. A possible skew in the slip vectors of earthquakes in the eastern part of the ABSC indicates an en-echelon arrangement of extensional structures, suggesting that the present segment geometry is not in equilibrium with current plate-motion demands and that the ridge is undergoing ongoing re-adjustment.
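
    For context, the double-difference method (after Waldhauser and Ellsworth) minimizes the residual between observed and calculated differential travel times for pairs of nearby events i and j recorded at a common station k:

```latex
dr_k^{ij} \;=\; \left(t_k^i - t_k^j\right)^{\mathrm{obs}} - \left(t_k^i - t_k^j\right)^{\mathrm{cal}}
\;=\; \frac{\partial t_k^i}{\partial \mathbf{m}}\,\Delta\mathbf{m}^i \;-\; \frac{\partial t_k^j}{\partial \mathbf{m}}\,\Delta\mathbf{m}^j ,
```

where \Delta\mathbf{m} are the perturbations to the hypocentral parameters (location and origin time) of each event. Because path effects common to both events largely cancel, the relative locations are far less sensitive to velocity-model errors, which is what permits the fine-scale fault imaging described above.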

  5. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    NASA Astrophysics Data System (ADS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results, using a specific configuration of the implementation provided by the Computational Radiology Laboratory. The consensus obtained by the STAPLE algorithm from manual delineations was more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is therefore more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging, and it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.
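
    For intuition, the heart of STAPLE is an EM loop that alternates between estimating the hidden true segmentation and each rater's sensitivity and specificity. The sketch below is a deliberately simplified binary version, without the spatial priors and other details of the Computational Radiology Laboratory implementation:

```python
import numpy as np

def staple_binary(D, n_iter=30):
    """Simplified STAPLE for binary segmentations.

    D: (n_raters, n_voxels) array of 0/1 decisions. Returns the estimated
    per-voxel foreground probability W and each rater's (sensitivity,
    specificity). A minimal EM sketch without the full algorithm's priors.
    """
    D = np.asarray(D, dtype=float)
    n_raters, n_vox = D.shape
    W = D.mean(axis=0)                  # init: per-voxel vote average
    p = np.full(n_raters, 0.9)          # sensitivities
    q = np.full(n_raters, 0.9)          # specificities
    prior = W.mean()
    for _ in range(n_iter):
        # E-step: P(truth = 1 | decisions, p, q) for every voxel
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate rater performance against the soft truth
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q

# Three raters on 8 voxels; rater 2 misses voxel 3, rater 3 falsely adds voxel 7
D = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [1, 1, 1, 0, 0, 0, 0, 0],
              [1, 1, 1, 1, 0, 0, 0, 1]])
W, p, q = staple_binary(D)
print(np.round(W).astype(int))  # consensus: [1 1 1 1 0 0 0 0]
```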

  6. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    PubMed

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-21

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results, using a specific configuration of the implementation provided by the Computational Radiology Laboratory. The consensus obtained by the STAPLE algorithm from manual delineations was more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is therefore more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging, and it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.

  7. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumors from Magnetic Resonance Imaging (MRI) is complicated by the structural complexity of the human brain and the presence of intensity inhomogeneities. The aims were to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion, and clustering. Different validation measures are used to evaluate the performance of the proposed method with different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. The validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.
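
    UOFC builds on fuzzy clustering; the underlying fuzzy c-means iteration can be sketched as follows (a generic illustration on synthetic 1-D intensities, not the authors' pipeline):

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Basic fuzzy c-means: returns soft memberships U (n_samples, n_clusters)
    and the cluster centers. `m` is the fuzzifier (m > 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centers are membership-weighted means of the samples
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: proportional to d^(-2/(m-1)), then normalized
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated 1-D intensity clusters
X = np.concatenate([np.full(50, 1.0), np.full(50, 10.0)])[:, None]
X += np.random.default_rng(1).normal(0, 0.3, X.shape)
U, centers = fuzzy_cmeans(X)
print(np.sort(centers.ravel()).round(1))  # centers near 1 and 10
```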

  8. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are among the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step in accelerating the processing of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency, and their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conducted several experiments on time-series streams of different types and compared it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed by nearly ten times. The novelty of the algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream. PMID:23956693

  9. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are among the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step in accelerating the processing of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency, and their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conducted several experiments on time-series streams of different types and compared it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed by nearly ten times. The novelty of the algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream.
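
    The MDL principle behind PRESEE trades model complexity against goodness of fit: a split is accepted only if it shortens the total description length. Below is a toy top-down illustration for a piecewise-constant 1-D series; the costs and penalties are simplified stand-ins for PRESEE's actual coding scheme:

```python
import numpy as np

def seg_cost(x):
    """Description length (in nats, up to constants) of one segment coded as
    its mean plus Gaussian residuals: parameter cost + n/2 * log(RSS/n)."""
    n = len(x)
    rss = ((x - x.mean()) ** 2).sum()
    return np.log(n) + 0.5 * n * np.log(rss / n + 1e-12)

def mdl_segment(x, min_len=5):
    """Greedy top-down MDL segmentation of a 1-D series: recursively split a
    segment at the cut point that lowers total description length, stopping
    when no split helps. A toy illustration of the principle, not PRESEE."""
    def split(lo, hi):
        best, best_dl = None, seg_cost(x[lo:hi])
        for cut in range(lo + min_len, hi - min_len + 1):
            # Two-segment cost plus a penalty for encoding the cut position
            dl = seg_cost(x[lo:cut]) + seg_cost(x[cut:hi]) + np.log(hi - lo)
            if dl < best_dl:
                best, best_dl = cut, dl
        if best is None:
            return [lo]
        return split(lo, best) + split(best, hi)
    return split(0, len(x)) + [len(x)]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(8, 1, 100)])
print(mdl_segment(x))  # segment boundaries; expect a cut near index 100
```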

  10. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets with documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that relies on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of the quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of important quantitative image biomarkers. The results indicate that we achieved 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  11. Computation of Matrix Chain Products. Part I, Part II.

    DTIC Science & Technology

    1981-09-01

    [OCR-garbled excerpt] The legible fragments concern the h-arcs generated by the algorithm when partitioning a polygon: not all of the n-3 arcs generated are potential h-arcs; a degenerate potential h-arc splits the polygon into an upper and a lower fan subpolygon; and the father-son relationship between h-arcs still holds in a general polygon.

  12. Segmentation of large periapical lesions toward dental computer-aided diagnosis in cone-beam CT scans

    NASA Astrophysics Data System (ADS)

    Rysavy, Steven; Flores, Arturo; Enciso, Reyes; Okada, Kazunori

    2008-03-01

    This paper presents an experimental study assessing the applicability of general-purpose 3D segmentation algorithms to the analysis of dental periapical lesions in cone-beam computed tomography (CBCT) scans. In the field of endodontics, clinical studies have been unable to determine whether a periapical granuloma can heal with non-surgical methods. Addressing this issue, Simon et al. recently proposed a diagnostic technique that non-invasively classifies target lesions using CBCT. Manual segmentation, as exploited in their study, is however too time-consuming and unreliable for real-world adoption. On the other hand, many technically advanced algorithms have been proposed for segmentation problems in various biomedical and non-biomedical contexts, but they have not yet been applied to the field of dentistry. This paper presents a novel application of such segmentation algorithms to this clinically significant dental problem. The study evaluates three state-of-the-art graph-based algorithms: a normalized cut algorithm based on a generalized eigenvalue problem, a graph cut algorithm implementing energy minimization techniques, and a random walks algorithm derived from discrete electrical potential theory. We extend the original 2D formulations of these algorithms to segment 3D images directly and apply the resulting algorithms to the dental CBCT images. We experimentally evaluate the quality of the segmentation results for 3D CBCT images, as well as their 2D cross sections, and highlight the benefits and pitfalls of each algorithm.
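
    Of the three graph-based methods, the normalized cut admits a compact sketch: the relaxed two-way cut is given by the second-smallest generalized eigenvector of the graph Laplacian (Shi and Malik's relaxation). A minimal illustration on a toy graph, not the paper's 3D extension:

```python
import numpy as np

def two_way_ncut(W):
    """Two-way normalized cut on a weighted adjacency matrix W.

    Solves the relaxed problem via the second-smallest eigenvector of the
    symmetric normalized Laplacian D^{-1/2} (D - W) D^{-1/2}, then thresholds
    the corresponding generalized eigenvector to get a binary partition.
    """
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(d)
    L_sym = np.eye(len(W)) - (d_isqrt[:, None] * W * d_isqrt[None, :])
    vals, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    fiedler = d_isqrt * vecs[:, 1]              # generalized eigenvector, 2nd smallest
    return (fiedler > np.median(fiedler)).astype(int)

# Toy graph: two 3-node cliques joined by a single weak edge
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                         # weak bridge between the cliques
labels = two_way_ncut(W)
print(labels)  # separates {0, 1, 2} from {3, 4, 5}
```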

  13. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before the optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, ICDAR 2009 page segmentation competition, and circuit diagram datasets.

  14. The detection and statistics of giant arcs behind CLASH clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Bingxiao; Zheng, Wei; Postman, Marc

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift z_s = 1.9 with 33% of the detected arcs having z_s > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c–M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.

  15. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactic basis: an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, BlobContours, gradually updates it by recalculating every blob based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm through user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  16. The implement of Talmud property allocation algorithm based on graphic point-segment way

    NASA Astrophysics Data System (ADS)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, the paper analyzes the algorithm's implementation from the perspective of a graphic point-segment representation and designs a point-segment Talmud property allocation algorithm. The core of the allocation algorithm is implemented in Java, and Android programming is used to build a visual interface.
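    For context, the Talmud allocation rule (as formalized by Aumann and Maschler) awards each claimant via constrained equal awards on half-claims when the estate is at most half the total claim, and via constrained equal losses on half-claims otherwise. The sketch below illustrates that rule only, assuming the standard bankruptcy-problem formulation; it is not the paper's Java/Android implementation:

```python
def cea(amount, claims, tol=1e-9):
    """Constrained equal awards: each claimant gets min(claim, lam),
    with lam chosen by bisection so the awards sum to `amount`."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < amount:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(c, lam) for c in claims]

def cel(amount, claims, tol=1e-9):
    """Constrained equal losses: each claimant gets max(0, claim - lam),
    with lam chosen by bisection so the awards sum to `amount`."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(max(0.0, c - lam) for c in claims) > amount:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [max(0.0, c - lam) for c in claims]

def talmud(estate, claims):
    """Talmud division of `estate` among the given claims."""
    half = [c / 2 for c in claims]
    total = sum(claims)
    if estate <= total / 2:
        return cea(estate, half)
    # everyone keeps half their claim; losses are shared on the rest
    return [h + x for h, x in zip(half, cel(estate - total / 2, half))]
```

    With claims of 100, 200, and 300, this reproduces the classic Talmudic divisions: an estate of 100 is split equally, 200 yields (50, 75, 75), and 300 yields proportional shares (50, 100, 150).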

  17. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. The approach not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements, and demonstrates the practical value of MapReduce-based parallel mean shift segmentation.
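    At the core of the parallelized segmentation is the mean shift procedure itself: each point is shifted iteratively to the kernel-weighted mean of its neighbours until it settles on a density mode, and points converging to the same mode form one segment. A minimal single-point sketch with a Gaussian kernel (the MapReduce distribution of this loop, and any feature-space details, are not shown):

```python
import math

def mean_shift_point(x, points, bandwidth, max_iters=100):
    """Shift x to the Gaussian-weighted mean of `points` until it
    converges on a density mode."""
    for _ in range(max_iters):
        weights = []
        for p in points:
            d2 = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
            weights.append(math.exp(-d2 / (2.0 * bandwidth ** 2)))
        total = sum(weights)
        shifted = tuple(
            sum(w * p[i] for w, p in zip(weights, points)) / total
            for i in range(len(x))
        )
        if sum((a - b) ** 2 for a, b in zip(shifted, x)) < 1e-12:
            return shifted
        x = shifted
    return x
```

    In an image setting each "point" is a pixel's feature vector (e.g., position plus colour), and a map task can run this loop independently for every pixel in its split, which is what makes the algorithm a natural fit for MapReduce.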

  18. Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification

    NASA Astrophysics Data System (ADS)

    Brodic, D.

    2011-01-01

    Text line segmentation is a key element of the optical character recognition process; hence, testing text line segmentation algorithms has substantial relevance. Previously proposed testing methods deal mainly with a text database used as a template, serving both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text segmentation algorithms based on extended binary classification is proposed. It is built on various multiline text samples subjected to text segmentation, whose results are distributed according to a binary classification. The final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts is its main advantage.
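    Since the proposed methodology distributes segmentation results according to binary classification, the usual way to summarize such counts is via precision, recall, and the F-measure. A small sketch of those standard measures (illustrative only; the manuscript's extended classification adds further categories beyond plain true/false positives and negatives):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard binary-classification summary from raw counts:
    tp = correctly segmented lines, fp = spurious lines, fn = missed lines."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```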

  19. Fore-arc basin deformation in the Andaman-Nicobar segment of the Sumatra-Andaman subduction zone: Insight from high-resolution seismic reflection data

    NASA Astrophysics Data System (ADS)

    Moeremans, Raphaële E.; Singh, Satish C.

    2015-08-01

    The Andaman-Nicobar region is the northernmost segment of the Sumatra-Andaman subduction zone and marks the western boundary of the Andaman Sea, which is a complex active back-arc extensional basin. We present the interpretation of a new set of deep seismic reflection data acquired across the Andaman-Nicobar fore-arc basin, from 8°N to 11°N, in order to better understand its structure and evolution, focusing on (1) how obliquity of convergence affects deformation in the fore arc, (2) the nature and role of the Diligent Fault (DF), and (3) the Eastern Margin Fault (EMF). Despite the obliquity of convergence, back thrusting and compression seem to dominate the Andaman-Nicobar fore-arc basin deformation. The DF is primarily a back thrust and corresponds to the Mentawai and West Andaman Fault systems farther south, along Sumatra. The DF is expressed in the fore-arc basin as a series of mostly landward verging folds and faults, deforming the early to late Miocene sediments. The DF seems to root from the boundary between the accretionary complex and the continental backstop, where it meets the EMF. The EMF marks the western boundary of the fore-arc basin; it is associated with subsidence and is expressed as a deep piggyback basin, containing recent Pliocene to Pleistocene sediments. The eastern edge of the fore-arc basin is the Invisible Bank (IB), which is thought to be tilted and uplifted continental crust. Subsidence along the EMF and uplift and tilting of the IB seem to be related to different opening phases in the Andaman Sea.

  20. Late Jurassic - Early Cretaceous convergent margins of Northeastern Asia with Northwestern Pacific and Proto-Arctic oceans

    NASA Astrophysics Data System (ADS)

    Sokolov, Sergey; Luchitskaya, Marina; Tuchkova, Marianna; Moiseev, Artem; Ledneva, Galina

    2013-04-01

    The continental margin of Northeastern Asia includes many island arc terranes that differ in age and tectonic position. Two convergent margins are reconstructed for Late Jurassic - Early Cretaceous time: the Uda-Murgal and Alazeya-Oloy island arc systems. A long tectonic zone composed of Upper Jurassic to Lower Cretaceous volcanic and sedimentary rocks is recognized along the Asian continental margin from the Mongol-Okhotsk thrust-fold belt in the south to the Chukotka Peninsula in the north. This belt represents the Uda-Murgal arc, which developed along the convergent margin between Northeastern Asia and the Northwestern Meso-Pacific. Several segments are identified in this arc based upon the volcanic and sedimentary rock assemblages, their respective compositions, and basement structures. The southern and central parts of the Uda-Murgal island arc system were a continental margin belt with a heterogeneous basement represented by metamorphic rocks of the Siberian craton, the Verkhoyansk terrigenous complex of the Siberian passive margin, and the Koni-Taigonos late Paleozoic to early Mesozoic island arc with accreted oceanic terranes. At the present-day latitude of the Pekulney and Chukotka segments there was an ensimatic island arc with relicts of the South Anyui oceanic basin in the backarc basin. The Alazeya-Oloy island arc system consists of Paleozoic and Mesozoic complexes that belong to the convergent margin between Northeastern Asia and the Proto-Arctic Ocean. It separated the structures of the North American and Siberian continents. The Siberian margin was active whereas the North American margin was passive. The Late Jurassic was characterized by termination of spreading in the Proto-Arctic Ocean and transformation of the latter into the closing South Anyui turbidite basin. First the oceanic lithosphere, and then the Chukotka microcontinent, was subducted beneath the Alazeya-Oloy volcanic belt.

  1. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
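    The dose-estimation step reduces to masking a dose map with a segmented region and reducing over the masked voxels: random errors at the organ boundary then largely cancel in the mean, which is the hypothesis the study tests. A toy sketch over flattened, voxel-aligned arrays (this data layout is an assumption, not the paper's implementation):

```python
def organ_dose_stats(dose_map, organ_mask):
    """Mean and peak dose over the voxels labelled as belonging to the organ.

    dose_map and organ_mask are flattened, voxel-aligned sequences;
    organ_mask holds truthy values inside the organ."""
    organ_voxels = [d for d, inside in zip(dose_map, organ_mask) if inside]
    if not organ_voxels:
        raise ValueError("empty organ mask")
    return sum(organ_voxels) / len(organ_voxels), max(organ_voxels)
```

    Note that the peak dose is more sensitive to segmentation error than the mean, consistent with the larger errors the study reports in small, high-gradient regions such as the spinal canal.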

  2. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, aspects of performance variability remain unanswered for the choice of two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along the boundaries of structures. A comparison of the eight machine-learning algorithms on down-sampled MR segmentation data showed that a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation of these two algorithms revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images.
We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)

    NASA Astrophysics Data System (ADS)

    Li, X. R.; Wang, X.

    2016-03-01

    When the genetic algorithm is used to solve the too-short-arc (TSA) initial orbit determination problem, the classical methods for editing outliers are no longer applicable, because the genetic algorithm's computing process differs from that of the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. A comparison of the results of different loss functions shows that the least median of squares and least trimmed squares methods greatly improve the robustness of TSA determination and have a high breakdown point.
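    The least median of squares and least trimmed squares losses replace the usual sum of squared residuals inside the fitness function, so that the largest residuals, i.e., the outliers, cannot dominate the fit. A minimal sketch of the two losses (the orbit-model residual computation is assumed to happen elsewhere, and the trimming fraction is an illustrative choice):

```python
def lms_fitness(residuals):
    """Least-median-of-squares loss: the median of the squared residuals,
    robust to up to roughly half the observations being outliers."""
    sq = sorted(r * r for r in residuals)
    n = len(sq)
    mid = n // 2
    return sq[mid] if n % 2 else 0.5 * (sq[mid - 1] + sq[mid])

def lts_fitness(residuals, frac=0.5):
    """Least-trimmed-squares loss: the sum of the smallest `frac`
    fraction of squared residuals; the rest are discarded."""
    sq = sorted(r * r for r in residuals)
    h = max(1, int(frac * len(sq)))
    return sum(sq[:h])
```

    A GA minimizing either quantity over candidate orbital elements effectively ignores a gross outlier that would dominate an ordinary least-squares fitness.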

  4. Interaction between the Dauki and the Indo-Burman convergence boundaries from teleseismic and locally recorded earthquake data

    NASA Astrophysics Data System (ADS)

    Howe, M.; Moulik, P.; Seeber, L.; Kim, W.; Steckler, M. S.

    2012-12-01

    The Himalayan and the Burma Arcs converge onto the Indian plate from opposite sides near their syntaxial juncture and have reduced it to a sliver. Both geology and seismicity point to recent internal deformation and high seismogenic potential within this sliver. Large historical earthquakes, including the Great Indian earthquake of 1897 (Mw ~8.1), along with the recent seismicity, suggest that the cratonic blocks in the region are bounded by active faults. The most prominent is the E-W trending Dauki Fault, a deeply-rooted, north-dipping thrust fault, situated between the Shillong massif to the north and the Sylhet Basin to the south. Along the Burma Arc, the subducted seismogenic slab of the Indian plate is continuous north to the syntaxis. Yet the Naga and Tripura segments of the accretionary fold belt, respectively north and south of the easterly extrapolation of the Dauki fault, are distinct. Accretion has advanced far westward into the foredeep of the Dauki structure along the front of the Tripura segment, while it has remained stunted facing the uplifted Shillong massif along the Naga segment. Moreover, the Dauki topographic front can be traced eastwards across the Burma Arc separating the two segments. Recent earthquakes support the hypothesis that the Dauki convergence structure continues below the Burma accretionary belt. Using teleseismic and regional data from the deployment of a local network, we explore the interaction of the Dauki thrust fault with the Burma Arc subduction zone. Preliminary observations include: While seismicity is concentrated in the slab at the eastward extrapolation of the Dauki fault, shallow seismicity is diffuse and does not illuminate the Dauki fault itself. P-axes in moment-tensor solutions of earthquakes within the Indian plate tend to be directed N-S and are locally parallel to the India-Burma boundary, particularly in the slab. T-axes tend to be oriented E-W with a strong tendency to follow the slab down dip. 
This pattern is remarkably consistent, despite the scattered seismicity, and suggests that the stress in the Indian plate, including the subducted oceanic portion of the plate, is still primarily controlled by the Himalayan collision to the north and down-dip pull by the Burma slab. Moment tensor solutions for some of the shallow earthquakes along the fold belt are consistent with geodetic results, showing partitioning of the oblique India-Burma convergence between belt-parallel dextral faults and belt-normal shortening by thrust faults. Relocations of the events using the double-difference algorithm may provide additional constraints on the geometry of the slab. In addition to the analysis of teleseismic data, a network of six seismic stations was installed in Bangladesh in the region surrounding Sylhet, south of the Shillong Plateau, during 2007-2008. Over 200 regional and local events were detected and located by the Sylhet array. About a dozen events are large enough to allow us to determine focal depths and mechanisms, which will augment the catalog of teleseismic events and provide additional insights into the tectonics of the region.

  5. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  6. A scale space based algorithm for automated segmentation of single shot tagged MRI of shearing deformation.

    PubMed

    Sprengers, Andre M J; Caan, Matthan W A; Moerman, Kevin M; Nederveen, Aart J; Lamerichs, Rolf M; Stoker, Jaap

    2013-04-01

    This study proposes a scale space based algorithm for automated segmentation of single-shot tagged images of modest SNR, designed for the analysis of discontinuous or shearing types of motion, i.e. segmentation of broken tag patterns. The proposed algorithm utilises non-linear scale space for automatic segmentation of single-shot tagged images. Its ability to automatically segment tagged shearing motion was evaluated in a numerical simulation and in vivo. A typical shearing deformation was simulated in a Shepp-Logan phantom, allowing for quantitative evaluation of the algorithm's success rate as a function of both SNR and the amount of deformation. For a qualitative in vivo evaluation, tagged images showing deformation in the calf muscles and eye movement of a healthy volunteer were acquired. Both the numerical simulation and the in vivo tagged data demonstrated the algorithm's ability to automatically segment single-shot tagged MR images, provided that the SNR is above 10 and the amount of deformation does not exceed the tag spacing. The latter constraint can be met by adjusting the tag delay or the tag spacing. The scale space based algorithm for automatic segmentation of single-shot tagged MR enables the application of tagged MR to complex (shearing) deformation and the processing of datasets with relatively low SNR.
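    The idea of a scale space is to smooth the image at progressively coarser scales so that tag structure persists while noise is suppressed. The paper uses a non-linear scale space; the linear Gaussian variant below (1D signal, clamped borders — both simplifications for brevity) conveys the mechanism:

```python
import math

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at three standard deviations."""
    radius = max(1, int(3 * sigma))
    values = [math.exp(-(i * i) / (2.0 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(values)
    return [v / total for v in values]

def smooth(signal, sigma):
    """Convolve with a Gaussian, clamping indices at the borders."""
    kernel = gaussian_kernel(sigma)
    radius = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def scale_space(signal, sigmas):
    """Stack of progressively smoother versions of the signal."""
    return [smooth(signal, s) for s in sigmas]
```

    Features such as tag lines can then be tracked across the stack: structure that survives at coarse scales is genuine, while detail that vanishes quickly is treated as noise.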

  7. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on the Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level; following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  8. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0mm). For amplitudes less than 0.01mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
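    Identifying "the sequence of Markov models most likely to have produced the series of observations" is the classic Viterbi decoding problem. The sketch below is a generic log-domain Viterbi applied to two hypothetical tasks with Gaussian emission log-likelihoods on a 1-D motion signal; the study's five-task models and noise model are not reproduced:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely hidden state sequence given the observations.

    log_start[s]     : log prior of starting in state s
    log_trans[p][s]  : log probability of moving from p to s
    log_emit[s](o)   : log likelihood of observation o under state s
    """
    V = [{s: log_start[s] + log_emit[s](obs[0]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            ptr[s] = prev
            col[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s](o)
        V.append(col)
        back.append(ptr)
    # trace the best path backwards from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

    Because switching states carries a log-probability penalty, the decoded sequence resists the single-sample noise spikes that defeat a simple threshold test, which is consistent with the accuracy gap the study reports.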

  9. The effects of segmentation algorithms on the measurement of 18F-FDG PET texture parameters in non-small cell lung cancer.

    PubMed

    Bashir, Usman; Azad, Gurdip; Siddique, Muhammad Musib; Dhillon, Saana; Patel, Nikheel; Bassett, Paul; Landau, David; Goh, Vicky; Cook, Gary

    2017-12-01

    Measures of tumour heterogeneity derived from 18-fluoro-2-deoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans are increasingly reported as potential biomarkers of non-small cell lung cancer (NSCLC) for classification and prognostication. Several segmentation algorithms have been used to delineate tumours, but their effects on the reproducibility and the predictive and prognostic capability of derived parameters have not been evaluated. The purpose of our study was to retrospectively compare various segmentation algorithms in terms of the inter-observer reproducibility and prognostic capability of texture parameters derived from NSCLC 18F-FDG PET/CT images. Fifty-three NSCLC patients (mean age 65.8 years; 31 males) underwent pre-chemoradiotherapy 18F-FDG PET/CT scans. Three readers segmented tumours using freehand (FH), 40% of maximum intensity threshold (40P), and fuzzy locally adaptive Bayesian (FLAB) algorithms. The intraclass correlation coefficient (ICC) was used to measure the inter-observer variability of the texture features derived by the three segmentation algorithms. Univariate Cox regression was used on 12 commonly reported texture features to predict overall survival (OS) for each segmentation algorithm. Model quality was compared across segmentation algorithms using the Akaike information criterion (AIC). 40P was the most reproducible algorithm (median ICC 0.9; interquartile range [IQR] 0.85-0.92) compared with FLAB (median ICC 0.83; IQR 0.77-0.86) and FH (median ICC 0.77; IQR 0.7-0.85). On univariate Cox regression analysis, 40P found 2 of 12 variables, i.e. first-order entropy and grey-level co-occurrence matrix (GLCM) entropy, to be significantly associated with OS; FH and FLAB found one, i.e. first-order entropy. For each tested variable, survival models for all three segmentation algorithms were of similar quality, exhibiting comparable AIC values with overlapping 95% CIs.
Compared with both FLAB and FH, segmentation with 40P yields superior inter-observer reproducibility of texture features. Survival models generated by all three segmentation algorithms are of at least equivalent utility. Our findings suggest that a segmentation algorithm using a 40% of maximum threshold is acceptable for texture analysis of 18F-FDG PET in NSCLC.
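    Two of the prognostic texture features are easy to state concretely: first-order entropy is computed from the grey-level histogram, and GLCM entropy from the joint histogram of grey-level pairs at a fixed pixel offset. A small sketch (integer grey levels, a single offset; real pipelines quantize intensities and typically average several offsets):

```python
import math
from collections import Counter

def first_order_entropy(pixels):
    """Shannon entropy of the grey-level histogram, in bits."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

def glcm_entropy(image, dx=1, dy=0):
    """Entropy of the grey-level co-occurrence matrix at offset (dx, dy)."""
    pairs = Counter()
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                pairs[(image[y][x], image[y2][x2])] += 1
    n = sum(pairs.values())
    return -sum((c / n) * math.log2(c / n) for c in pairs.values())
```

    The two measures differ in what they see: a perfectly regular stripe pattern has high first-order entropy (two grey levels are equally frequent) but zero GLCM entropy, because the same pair always co-occurs at the chosen offset.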

  10. Lithospheric Contributions to Arc Magmatism: Isotope Variations Along Strike in Volcanoes of Honshu, Japan

    PubMed

    Kersting; Arculus; Gust

    1996-06-07

    Major chemical exchange between the crust and mantle occurs in subduction zone environments, profoundly affecting the chemical evolution of Earth. The relative contributions of the subducting slab, mantle wedge, and arc lithosphere to the generation of island arc magmas, and ultimately new continental crust, are controversial. Isotopic data for lavas from a transect of volcanoes in a single arc segment of northern Honshu, Japan, have distinct variations coincident with changes in crustal lithology. These data imply that the relatively thin crustal lithosphere is an active geochemical filter for all traversing magmas and is responsible for significant modification of primary mantle melts.

  11. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image datasets. Our experiments on three datasets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, over the sequential CPU implementation of the fuzzy connected image segmentation algorithm.
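    Fuzzy connectedness assigns each voxel the strength of its best path to a seed, where a path's strength is the weakest affinity along it; this max-min problem can be solved with a Dijkstra-like sweep. The graph-based sketch below shows the sequential logic only (the paper's contribution is the CUDA parallelization, which is not reproduced; the example affinities are hypothetical):

```python
import heapq

def fuzzy_connectedness(affinity, seed):
    """Strength of connectedness from `seed` to every node: the max over
    paths of the min affinity along the path (Dijkstra with min/max swapped).

    affinity maps each node to a list of (neighbour, affinity in [0, 1])."""
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        neg, node = heapq.heappop(heap)
        s = -neg
        if s < strength.get(node, 0.0):
            continue  # stale heap entry
        for neighbour, a in affinity[node]:
            candidate = min(s, a)
            if candidate > strength.get(neighbour, 0.0):
                strength[neighbour] = candidate
                heapq.heappush(heap, (-candidate, neighbour))
    return strength
```

    On an image, the nodes are voxels, the neighbours are adjacent voxels, and the affinity combines intensity similarity and homogeneity; thresholding the resulting strength map yields the segmented object.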

  12. A Novel Arc Fault Detector for Early Detection of Electrical Fires

    PubMed Central

    Yang, Kai; Zhang, Rencheng; Yang, Jianhong; Liu, Canhua; Chen, Shouhong; Zhang, Fujiang

    2016-01-01

    Arc faults can produce very high temperatures and can easily ignite combustible materials; thus, they represent one of the most important causes of electrical fires. The application of arc fault detection, as an emerging early fire detection technology, is required by the National Electrical Code to reduce the occurrence of electrical fires. However, the concealment, randomness and diversity of arc faults make them difficult to detect. To improve the accuracy of arc fault detection, a novel arc fault detector (AFD) is developed in this study. First, an experimental arc fault platform is built to study electrical fires. A high-frequency transducer and a current transducer are used to measure typical load signals of arc faults and normal states. After the common features of these signals are studied, high-frequency energy and current variations are extracted as an input eigenvector for use by an arc fault detection algorithm. Then, the detection algorithm based on a weighted least squares support vector machine is designed and successfully applied in a microprocessor. Finally, an AFD is developed. The test results show that the AFD can detect arc faults in a timely manner and interrupt the circuit power supply before electrical fires can occur. The AFD is not influenced by cross talk or transient processes, and the detection accuracy is very high. Hence, the AFD can be installed in low-voltage circuits to monitor circuit states in real-time to facilitate the early detection of electrical fires. PMID:27070618
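    The detector's input eigenvector combines high-frequency energy with current variation. A hedged sketch of such features (a first-difference filter stands in for the high-frequency transducer, and the per-window layout is hypothetical, not the paper's signal chain):

```python
def high_frequency_energy(samples):
    """Energy of the first difference of the samples (a crude high-pass):
    arcing injects broadband noise, inflating this value."""
    return sum((b - a) ** 2 for a, b in zip(samples, samples[1:]))

def current_variation(cycle_rms):
    """Mean absolute change in RMS current between consecutive cycles;
    arc randomness makes successive cycles differ."""
    diffs = [abs(b - a) for a, b in zip(cycle_rms, cycle_rms[1:])]
    return sum(diffs) / len(diffs)

def feature_vector(samples, cycle_rms):
    """Two-element eigenvector fed to the downstream classifier."""
    return [high_frequency_energy(samples), current_variation(cycle_rms)]
```

    In the paper this vector feeds a weighted least squares support vector machine, which learns the boundary between arcing and normal load signatures.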

  13. Applicability of NASA (ARC) two-segment approach procedures to Boeing Aircraft

    NASA Technical Reports Server (NTRS)

    Allison, R. L.

    1974-01-01

    An engineering study to determine the feasibility of applying the NASA (ARC) two-segment approach procedures and avionics to the Boeing fleet of commercial jet transports is presented. This feasibility study is concerned with the speed/path control and systems compatibility aspects of the procedures. Path performance data are provided for representative Boeing 707/727/737/747 passenger models. Thrust margin requirements for speed/path control are analyzed for still-air and shearing-tailwind conditions. Certification of the two-segment equipment and possible effects on existing airplane certification are discussed. Operational restrictions on use of the procedures with current autothrottles and in icing or reported tailwind conditions are recommended. Using the NASA/UAL 727 procedures as a baseline, maximum upper glide slopes for representative 707/727/737/747 models are defined as a starting point for further study and/or flight evaluation programs.

  14. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
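
The LDS idea can be sketched in its simplest scalar form: model each voxel's enhancement curve as x[t+1] = a·x[t] + b, fit (a, b) by least squares, and segment on the fitted parameters. The paper's LDS is multi-dimensional and its decision rule more elaborate; the curves and thresholds below are invented for illustration only:

```python
import numpy as np

def fit_lds(curve):
    """Least-squares fit of x[t+1] = a*x[t] + b to one enhancement curve."""
    x, y = curve[:-1], curve[1:]
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(a), float(b)

t = np.arange(20, dtype=float)
tumor = 1.0 - np.exp(-0.5 * t)      # fast wash-in that plateaus
background = 0.02 * t               # slow linear drift

a_t, b_t = fit_lds(tumor)
a_b, b_b = fit_lds(background)
# Wash-in curves converge to a fixed point (|a| < 1 with sizeable b);
# the drifting background fits a ≈ 1 with small b.
is_tumor = abs(a_t) < 0.95 and b_t > 0.1
```

The scalar fit is exact here because both synthetic curves satisfy a first-order linear recurrence; real DCE-MRI curves would be fit in the least-squares sense only.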

  15. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175

  16. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
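
The Dice metric used for the direct comparison above is straightforward to compute for binary masks; this generic sketch is illustrative, not the study's code:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks: 1.0 = perfect overlap, 0.0 = none."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

hand = np.zeros((10, 10), dtype=bool); hand[2:8, 2:8] = True   # 36 px
auto = np.zeros((10, 10), dtype=bool); auto[3:8, 2:8] = True   # 30 px, all inside
score = dice(hand, auto)    # 2*30 / (36 + 30) ≈ 0.909
```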

  17. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired with an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
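
The target registration error defined above, the distance between the centroids of the manual and automatic ROIs, can be sketched directly; the pixel-spacing handling is an assumption for illustration:

```python
import numpy as np

def centroid_tre(manual, auto, spacing=(1.0, 1.0)):
    """TRE as the Euclidean distance between the centroids of two ROI masks."""
    cm = np.array(np.nonzero(manual)).mean(axis=1)
    ca = np.array(np.nonzero(auto)).mean(axis=1)
    return float(np.linalg.norm((cm - ca) * np.array(spacing)))

manual = np.zeros((20, 20)); manual[5:10, 5:10] = 1   # centroid (7, 7)
auto = np.zeros((20, 20)); auto[8:13, 5:10] = 1       # centroid (10, 7)
tre = centroid_tre(manual, auto)                      # 3.0 pixels
```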

  18. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework to aid the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slide and standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits into a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
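
The two per-frame quantities described, correctly segmented bone and correctly segmented boneless regions, reduce to simple mask fractions. A generic sketch assuming binary masks (not the 3D Slicer module's code):

```python
import numpy as np

def bone_metrics(auto, truth):
    """Fraction of ground-truth bone found, and fraction of ground-truth
    boneless area correctly left unsegmented."""
    truth, auto = truth.astype(bool), auto.astype(bool)
    tp = np.logical_and(auto, truth).sum() / truth.sum()
    tn = np.logical_and(~auto, ~truth).sum() / (~truth).sum()
    return float(tp), float(tn)

truth = np.zeros((10, 10), dtype=bool); truth[4:6, :] = True    # 20 bone px
auto = np.zeros((10, 10), dtype=bool); auto[4:6, 0:5] = True    # half detected
tp, tn = bone_metrics(auto, truth)    # tp = 0.5, tn = 1.0
```

Averaging these per frame along the sweep, plus their standard deviation, gives exactly the per-volume summary the framework reports.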

  19. Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.

    PubMed

    Labra, Nicole; Guevara, Pamela; Duclap, Delphine; Houenou, Josselin; Poupon, Cyril; Mangin, Jean-François; Figueroa, Miguel

    2017-01-01

    This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets.
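
The multi-stage filtering idea can be sketched with a deliberately simple pair of metrics. If the full metric is the maximum point-wise distance to a bundle centroid, evaluating it on a subset of points gives a guaranteed lower bound, so the cheap stage can discard streamlines with no false negatives, exactly the property the paper requires. The metrics, sample indices, and threshold here are illustrative, not the paper's:

```python
import numpy as np

def full_dist(s, c):
    """Exact metric: maximum point-wise distance between two streamlines."""
    return float(np.max(np.linalg.norm(s - c, axis=1)))

def cheap_dist(s, c, idx=(0, 10, 20)):
    """Lower bound of full_dist using only a few sample points."""
    i = list(idx)
    return float(np.max(np.linalg.norm(s[i] - c[i], axis=1)))

def segment(streamlines, centroid, thr):
    # Stage 1: the cheap lower bound discards streamlines that cannot match
    # (no false negatives, since cheap_dist <= full_dist always holds).
    survivors = [s for s in streamlines if cheap_dist(s, centroid) <= thr]
    # Stage 2: the exact metric removes any false positives that remain.
    return [s for s in survivors if full_dist(s, centroid) <= thr]

t = np.linspace(0.0, 1.0, 21)
centroid = np.stack([t, t, np.zeros_like(t)], axis=1)   # 21-point centroid
near = centroid + np.array([0.0, 0.0, 0.5])             # within threshold
far = centroid + np.array([0.0, 0.0, 5.0])              # clearly outside
bundle = segment([near, far], centroid, thr=1.0)
```

With millions of candidates, most streamlines never reach the expensive stage, which is where the reported speed-up comes from.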

  20. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is still no universal 'best' method. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on a testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to other representatives of the microscopic data category - biological samples - is shown. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  1. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.

  2. Improve threshold segmentation using features extraction to automatic lung delimitation.

    PubMed

    França, Cleunio; Vasconcelos, Germano; Diniz, Paula; Melo, Pedro; Diniz, Jéssica; Novaes, Magdala

    2013-01-01

    With the consolidation of PACS and RIS systems, algorithms for tissue segmentation and disease detection have evolved rapidly in recent years. These algorithms have advanced in accuracy and specificity; however, there is still some way to go before they achieve the satisfactory error rates and reduced processing times needed for daily diagnosis. The objective of this study is to propose an algorithm for lung segmentation in X-ray computed tomography images that uses feature extraction, such as centroid and orientation measures, to improve basic threshold segmentation. As a result, we found an accuracy of 85.5%.
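
The combination described, basic thresholding refined by feature checks, can be sketched as follows. The HU threshold, centroid bounds, and hand-rolled connected-components labelling are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected labelling of a binary mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        q = deque([(y, x)]); labels[y, x] = current
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

def segment_lungs(img, thr=-400):
    mask = img < thr                    # air and lung are dark in CT (HU)
    labels, n = connected_components(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        comp = labels == i
        cx = np.argwhere(comp)[:, 1].mean()
        # Feature check: discard components centred at the image border
        # (surrounding air); keep interior candidates.
        if 0.15 * img.shape[1] < cx < 0.85 * img.shape[1]:
            keep |= comp
    return keep

img = np.full((20, 20), 40.0)       # soft tissue ~40 HU
img[:, :3] = -1000                  # outside air at the border
img[8:14, 6:11] = -800              # a lung-like dark interior region
lung = segment_lungs(img)
```

The threshold alone would also keep the border air; the centroid feature is what removes it.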

  3. TOOTHPASTEV6.11.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sankel, David J.; Clair, Aaron B. St.; Langsfield, Joshua D.

    2006-11-01

    Toothpaste is a graphical user interface and Computer Aided Drafting/Manufacturing (CAD/CAM) software package used to plan tool paths for Galil Motion Control hardware. The software is a tool for computer controlled dispensing of materials. The software may be used for solid freeform fabrication of components or the precision printing of inks. Mathematical calculations are used to produce a set of segments and arcs that when coupled together will fill space. The paths of the segments and arcs are then translated into a machine language that controls the motion of motors and translational stages to produce tool paths in three dimensions. As motion begins, material(s) are dispensed or printed along the three-dimensional pathway.

  4. Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)

    NASA Astrophysics Data System (ADS)

    Li, Xin-ran; Wang, Xin

    2017-04-01

    When the genetic algorithm is used to solve the problem of too-short-arc (TSA) orbit determination, the original method for outlier deletion is no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions as the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by introducing different loss functions. Through comparison of calculations with multiple loss functions, it is found that the least median square (LMS) and least trimmed square (LTS) estimations can greatly improve the robustness of TSA orbit determination, and have a high breakdown point.
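
The effect of swapping loss functions into the fitness can be seen on a toy line-fitting problem: with one gross outlier, plain least squares ranks a skewed parameter set above the true one, while LMS and LTS rank them correctly. The problem and parameter values are invented for illustration; orbit determination replaces the line with the orbital model:

```python
import numpy as np

def residuals(params, x, y):
    a, b = params
    return y - (a * x + b)

def ls(r):                      # classical least-squares loss
    return float(np.sum(r ** 2))

def lms(r):                     # least median of squares
    return float(np.median(r ** 2))

def lts(r, frac=0.7):           # least trimmed squares: keep smallest 70%
    k = int(len(r) * frac)
    return float(np.sum(np.sort(r ** 2)[:k]))

x = np.arange(10, dtype=float)
y = 2 * x + 1
y[9] += 100.0                   # one gross outlier

true_params, skewed_params = (2.0, 1.0), (2.0, 11.0)
r_true = residuals(true_params, x, y)
r_skew = residuals(skewed_params, x, y)
```

Here ls(r_skew) < ls(r_true): the outlier fools the least-squares fitness, while both robust losses prefer the true parameters, which is why they raise the breakdown point.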

  5. A framework for comparing different image segmentation methods and its use in studying equivalences between level set and fuzzy connectedness frameworks

    PubMed Central

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.

    2011-01-01

    In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well defined continuous counterpart MA, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models MA and MA′ establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A′. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model MA is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing the digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that the gradient based thresholding model M∇ is the asymptotic for the fuzzy connectedness Udupa and Samarasekera segmentation algorithm used with gradient based affinity A∇. We also argue that, in a sense, M∇ is the asymptotic for the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this last equivalence is also provided. PMID:21442014

  6. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    PubMed

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.

  7. Seismicity and plate tectonics in south central Alaska

    NASA Technical Reports Server (NTRS)

    Van Wormer, J. D.; Davies, J.; Gedney, L.

    1974-01-01

    Hypocenter distribution shows that the Benioff zone associated with the Aleutian arc terminates in interior Alaska some 75 km north of the Denali fault. There appears to be a break in the subducting Pacific plate in the Yentna River-Prince William Sound area which separates two seismically independent blocks, similar to the segmented structure reported for the central Aleutian arc.

  8. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; pheromone updating is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solution linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left ventricle images shows that this ACO approach is more effective than the GA approach, and the new pheromone updating strategy shows good time performance in the optimization process.
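
The finite-grade idea, pheromone restricted to a fixed set of grades and updated by grade changes rather than by objective-dependent amounts, can be sketched minimally; the grade values, counts, and update rules below are illustrative, not the paper's:

```python
GRADES = [0.1, 0.3, 0.6, 1.0]          # finite set of pheromone intensities

class FiniteGradePheromone:
    def __init__(self, n_edges, init_grade=1):
        self.grade = [init_grade] * n_edges

    def reinforce(self, edge):
        # An ant used this edge: move up one grade (capped at the top).
        # Note the increment is fixed, independent of the objective value.
        self.grade[edge] = min(self.grade[edge] + 1, len(GRADES) - 1)

    def evaporate(self):
        # All edges decay by one grade (floored at the bottom).
        self.grade = [max(g - 1, 0) for g in self.grade]

    def intensity(self, edge):
        return GRADES[self.grade[edge]]

ph = FiniteGradePheromone(3)
ph.reinforce(0); ph.reinforce(0)       # edge 0 climbs two grades
ph.evaporate()                         # everything decays one grade
```

Because the state space of each edge is finite, the whole colony forms a finite Markov chain, which is what makes the linear-convergence proof mentioned above tractable.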

  9. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    NASA Astrophysics Data System (ADS)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory. It is about finding a path with minimum length between a specified pair of vertices. In a network, the weight of each edge is usually represented by a crisp real number, and the weights are then used in the calculation of the shortest path by deterministic algorithms. In practice, however, uncertainty is often encountered, whereby the weight of an edge of the network is uncertain and imprecise. In this paper, a modified algorithm which combines a heuristic shortest path method with a fuzzy approach is proposed for solving a network with imprecise arc lengths. Here, interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is then compared with the total distance obtained from the traditional nearest-neighbour heuristic algorithm. The result shows that the modified algorithm provides not only a sequence of visited cities similar to the traditional approach, but also a good measurement of total shortest distance, which is less than the total shortest distance calculated using the traditional approach. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
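
One way to sketch fuzzy arc lengths inside the nearest-neighbour heuristic is to represent each length as a triangular fuzzy number (lower, mode, upper) and rank candidates by centroid defuzzification. The defuzzification choice and the toy network are assumptions for illustration, not the paper's exact modified algorithm:

```python
def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def nearest_neighbour(dist, start=0):
    """Nearest-neighbour TSP tour ranking fuzzy arc lengths by centroid."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    total = 0.0
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: centroid(dist[cur][j]))
        total += centroid(dist[cur][nxt])
        tour.append(nxt)
        unvisited.remove(nxt)
    total += centroid(dist[tour[-1]][start])   # close the tour
    return tour + [start], total

# 4 cities; dist[i][j] is a triangular fuzzy arc length (lower, mode, upper).
INF = (1e9, 1e9, 1e9)
dist = [
    [INF, (1, 2, 3), (4, 5, 6), (7, 8, 9)],
    [(1, 2, 3), INF, (1, 2, 3), (4, 5, 6)],
    [(4, 5, 6), (1, 2, 3), INF, (1, 2, 3)],
    [(7, 8, 9), (4, 5, 6), (1, 2, 3), INF],
]
tour, total = nearest_neighbour(dist)
```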

  10. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  11. Algorithmic structural segmentation of defective particle systems: a lithium-ion battery study.

    PubMed

    Westhoff, D; Finegan, D P; Shearing, P R; Schmidt, V

    2018-04-01

    We describe a segmentation algorithm that is able to identify defects (cracks, holes and breakages) in particle systems. This information is used to segment image data into individual particles, where each particle and its defects are identified accordingly. We apply the method to particle systems that appear in Li-ion battery electrodes. First, the algorithm is validated using simulated data from a stochastic 3D microstructure model, where we have full information about defects. This allows us to quantify the accuracy of the segmentation result. Then we show that the algorithm can successfully be applied to tomographic image data from real battery anodes and cathodes, which are composed of particle systems with very different morphological properties. Finally, we show how the results of the segmentation algorithm can be used for structural analysis. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  12. Active contour based segmentation of resected livers in CT images

    NASA Astrophysics Data System (ADS)

    Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2015-03-01

    The majority of state of the art segmentation algorithms are able to give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs. The determination of target boundaries for radiotherapy and liver volumetry calculations are examples of this. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that are not addressed by state of the art algorithms. This paper presents a snakes-based algorithm specially developed for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate with different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as 18 artificial datasets generated from additional clinical CT images.

  13. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision, and it plays a prominent role in a variety of image processing applications. In this paper, an important application of image processing, the MRI segmentation of pomegranate, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer. Having a high-quality product in hand is a critical factor in its marketing, and the internal quality of the product is especially important in the sorting process. The determination of qualitative features cannot be made manually; therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are often misclassified. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for the segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function for each class. Segmentation results on the original pomegranate MR images and on images corrupted by Gaussian, salt-and-pepper and speckle noise show that the SFCM algorithm performs significantly better than the FCM algorithm. Also, after diverse steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window size is better than the other window sizes.
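
The spatial modification can be sketched on a 1-D "image": standard FCM memberships are re-weighted by the summed memberships h of each pixel's neighbours (the common u' ∝ u^p · h^q form), which is what pulls isolated noisy pixels back into their surrounding class. The exponents, 3-pixel neighbourhood, and data are illustrative assumptions, not the paper's 5×5 configuration:

```python
import numpy as np

def sfcm(img, c=2, m=2.0, p=1, q=2, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    x = img.astype(float)
    centers = rng.uniform(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))          # standard FCM memberships
        u /= u.sum(axis=0)
        # Spatial step: h[k, i] sums cluster-k memberships over the
        # neighbourhood of pixel i (a 3-pixel 1-D window here).
        h = np.empty_like(u)
        for k in range(c):
            pad = np.pad(u[k], 1, mode='edge')
            h[k] = pad[:-2] + pad[1:-1] + pad[2:]
        u = (u ** p) * (h ** q)              # re-weight by neighbourhood support
        u /= u.sum(axis=0)
        centers = (u ** m) @ x / (u ** m).sum(axis=1)
    return u.argmax(axis=0), centers

# Background ~0.1 with one noisy bright pixel (index 5), then a bright block.
img = np.array([0.1] * 5 + [0.55] + [0.1] * 4 + [0.9] * 6)
labels, centers = sfcm(img)
labels_fcm, _ = sfcm(img, q=0)               # q=0 reduces to plain FCM
```

With q=0 the spatial term vanishes and the noisy pixel follows its intensity into the bright class; with the spatial term it is absorbed into the surrounding background class.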

  14. Music algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  15. Surface to bulk Fermi arcs via Weyl nodes as topological defects

    PubMed Central

    Kim, Kun Woo; Lee, Woo-Ram; Kim, Yong Baek; Park, Kwon

    2016-01-01

    A hallmark of Weyl semimetal is the existence of surface Fermi arcs. An intriguing question is what determines the connectivity of surface Fermi arcs when multiple pairs of Weyl nodes are present. To answer this question, we show that the locations of surface Fermi arcs are predominantly determined by the condition that the Zak phase, integrated along the normal-to-surface direction, is π. The Zak phase can reveal the peculiar topological structure of Weyl semimetal directly in the bulk. Here, we show that the winding of the Zak phase around each projected Weyl node manifests itself as a topological defect of the Wannier–Stark ladder, the energy eigenstates under an electric field. Remarkably, this leads to bulk Fermi arcs, open-line segments in the bulk spectra. Bulk Fermi arcs should exist in conjunction with their surface counterparts to conserve the Weyl fermion number under an electric field, which is supported by explicit numerical evidence. PMID:27845342

  16. Image segmentation on adaptive edge-preserving smoothing

    NASA Astrophysics Data System (ADS)

    He, Kun; Wang, Dan; Zheng, Xiuqing

    2016-09-01

    Nowadays, typical active contour models are widely applied in image segmentation, but they perform badly on real images with inhomogeneous subregions. To overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, the paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model is able to smooth inhomogeneous subregions while preserving edges. Then, a clustering algorithm that trades off edge preservation against subregion smoothing according to local information is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of the segmentation subregions, the paper constructs a smoothing convergence condition to avoid oversmoothing. Experiments indicate that the proposed algorithm has superior precision, recall, and F-measure compared with other segmentation algorithms, and that it is insensitive to noise and inhomogeneous regions.
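
    As a concrete point of comparison, classic Perona-Malik diffusion (not the authors' total-variation-inspired model) shows the basic edge-preserving smoothing mechanism: diffusion is throttled by a conductance that vanishes across strong gradients.

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=0.2, dt=0.2):
    """Classic Perona-Malik diffusion: homogeneous regions are smoothed while
    the conductance g(d) = 1/(1+(d/kappa)^2) shuts diffusion off across edges.
    Boundaries are handled periodically (np.roll) for brevity."""
    u = img.astype(float).copy()
    def g(d):
        return 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u   # differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

    Isolated noise is flattened within a few dozen iterations while a unit-step edge, whose gradient far exceeds kappa, stays sharp.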

  17. Incorporating User Input in Template-Based Segmentation

    PubMed Central

    Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno

    2015-01-01

    We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive by likelihood maximization a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of square differences between the binary template and the user labels, while preventing the template from shrinking, and penalizing for the inclusion of background elements into the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532

  18. Development of a takeoff performance monitoring system. Ph.D. Thesis. Contractor Report, Jan. 1984 - Jun. 1985

    NASA Technical Reports Server (NTRS)

    Srivatsan, Raghavachari; Downing, David R.

    1987-01-01

    Discussed are the development and testing of a real-time takeoff performance monitoring algorithm. The algorithm is made up of two segments: a pretakeoff segment and a real-time segment. One-time inputs of ambient conditions and airplane configuration information are used in the pretakeoff segment to generate scheduled performance data for that takeoff. The real-time segment uses the scheduled performance data generated in the pretakeoff segment, runway length data, and measured parameters to monitor the performance of the airplane throughout the takeoff roll. Airplane and engine performance deficiencies are detected and annunciated. An important feature of this algorithm is the one-time estimation of the runway rolling friction coefficient. The algorithm was tested using a six-degree-of-freedom airplane model in a computer simulation. Results from a series of sensitivity analyses are also included.
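
    A toy sketch of the two-segment structure, using an assumed constant-thrust point-mass model and invented parameter values (not the thesis' algorithm):

```python
G = 9.81  # m/s^2

def scheduled_speed(dist_m, thrust_n, mass_kg, mu, drag_n=0.0):
    """Pretakeoff segment (sketch): scheduled ground speed at a runway position
    from v^2 = 2*a*s with a = (T - D - mu*W)/m; mu is the one-time estimate of
    the runway rolling friction coefficient."""
    a = (thrust_n - drag_n - mu * mass_kg * G) / mass_kg
    return (2.0 * a * dist_m) ** 0.5

def on_schedule(dist_m, measured_speed, thrust_n, mass_kg, mu, tolerance=0.9):
    """Real-time segment (sketch): annunciate a deficiency when the measured
    speed falls below a fixed fraction of the scheduled speed at this position."""
    return measured_speed >= tolerance * scheduled_speed(dist_m, thrust_n, mass_kg, mu)
```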

  19. A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study.

    PubMed

    Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy

    2016-08-01

    Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and the volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of the lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
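
    The two headline quantities of such a comparison, spatial overlap and volume bias against a known phantom volume, are straightforward to compute; the voxel volume below is an assumed illustrative value.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_bias_percent(seg, truth, voxel_volume_ul=1.0):
    """Signed percent bias of a measured volume against a known phantom volume."""
    v_seg = seg.sum() * voxel_volume_ul
    v_true = truth.sum() * voxel_volume_ul
    return 100.0 * (v_seg - v_true) / v_true
```

    Repeatability is then the Dice overlap between repeat runs of the same algorithm; reproducibility, the overlap between runs of different algorithms.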

  20. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    PubMed

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS); optimally oriented flux (OOF); ranking orientation responses of path operators (RORPO); the regularized Perona-Malik approach (RPM); vessel enhancing diffusion (VED); hybrid diffusion with continuous switch (HDCS); and the white top-hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation (AVM). Additionally, five synthetic angiographic datasets with corresponding ground-truth segmentations were generated at three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and the subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared with segmenting the nonenhanced images directly. Multiscale vesselness algorithms such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters such as RPM, VED, and HDCS ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy for patients with an AVM, because vessels may deviate from a tubular shape in this case. Vessel enhancement algorithms can help to improve the accuracy of vascular segmentation, but their contribution has to be evaluated for the specific application, and in some cases they can reduce the overall accuracy. No single filter was suitable for all tested scenarios. © 2017 American Association of Physicists in Medicine.
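
    A minimal single-scale 2D version of a Frangi-type vesselness illustrates the tubular-shape assumption these filters share (the study's filters are 3D and multiscale; parameter values here are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_2d(img, sigma=2.0, beta=0.5, c=0.08):
    """Single-scale Frangi-type vesselness for bright 2D ridges. Eigenvalues
    l1, l2 (|l1| <= |l2|) of the Gaussian-smoothed Hessian give the blobness
    ratio R_B = l1/l2 and the structureness S = sqrt(l1^2 + l2^2)."""
    f = img.astype(float)
    Hxx = gaussian_filter(f, sigma, order=(0, 2))   # d2/dx2 (x = columns)
    Hyy = gaussian_filter(f, sigma, order=(2, 0))   # d2/dy2 (y = rows)
    Hxy = gaussian_filter(f, sigma, order=(1, 1))
    # eigenvalues of the 2x2 symmetric Hessian, ordered by magnitude
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    mu1 = (Hxx + Hyy) / 2.0 + tmp
    mu2 = (Hxx + Hyy) / 2.0 - tmp
    swap = np.abs(mu1) > np.abs(mu2)
    l1 = np.where(swap, mu2, mu1)
    l2 = np.where(swap, mu1, mu2)
    denom = np.where(np.abs(l2) > 1e-12, l2, 1e-12)
    rb2 = (l1 / denom) ** 2
    s2 = l1 ** 2 + l2 ** 2
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                                 # bright ridges need l2 < 0
    return v
```

    The AVM failure mode is visible here: the response is high only where one eigenvalue dominates, which no longer holds in a tangled vessel nidus.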

  1. A Fast, Automatic Segmentation Algorithm for Locating and Delineating Touching Cell Boundaries in Imaged Histopathology

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Summary Background Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMAs) is often the prerequisite for quantitative analysis. However, overlapping cells usually pose significant challenges for traditional segmentation algorithms. Objectives In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level-set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature, and computed the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results The method was tested on 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU; the parallel implementation is 22 times faster than its sequential C/C++ implementation. Conclusion The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate the overlapping cells. The GPU proved to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
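
    A common stand-in for the seed-detection step in touching-cell segmentation uses the Euclidean distance transform rather than the paper's voting scheme; each regional maximum of the distance map marks one candidate cell center.

```python
import numpy as np
from scipy import ndimage

def detect_seeds(binary_mask, min_distance=5):
    """Seed detection for touching cells: each regional maximum of the
    Euclidean distance transform marks one candidate cell centre."""
    dist = ndimage.distance_transform_edt(binary_mask)
    # a pixel is a peak if it equals the maximum over its neighbourhood
    peaks = (ndimage.maximum_filter(dist, size=min_distance) == dist) & (dist > 0)
    seeds, n_seeds = ndimage.label(peaks)
    return seeds, n_seeds
```

    A deformable model (or watershed) can then grow each seed into a full cell contour.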

  2. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is the key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms rely on text databases as reference templates; because of the mismatch between such templates, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedure. PMID:22164106

  3. Adaptive geodesic transform for segmentation of vertebrae on CT images

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. It is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone; thus, simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While other algorithms, such as level sets, may be used for segmentation, any clinically deployable algorithm has to run in no more than a few seconds. To address these dual challenges, we present a new algorithm, based on the geodesic distance transform, that is capable of segmenting the spinal vertebrae in under one second. To achieve this, we extend the theory of geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or generated automatically by another algorithm. We incorporate information learned by a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept that can be applied to the segmentation of other organs as well.
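
    The core of a plain (non-adaptive) geodesic distance transform can be sketched as Dijkstra on the pixel grid, where stepping across a strong intensity gradient is expensive; the adaptive variant of the paper would modulate the gradient weight with anatomical knowledge rather than keep it fixed.

```python
import heapq
import numpy as np

def geodesic_distance(img, seeds, gamma=10.0):
    """Geodesic distance transform via Dijkstra on the pixel grid: a step
    between 4-neighbours costs 1 + gamma * |intensity difference|, so the
    distance grows rapidly across strong gradients (e.g. tissue boundaries)."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + gamma * abs(img[r, c] - img[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```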

  4. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is the key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms rely on text databases as reference templates; because of the mismatch between such templates, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedure.

  5. SU-E-J-160: 4D Dynamic Arc of Non-Modulated Variable-Dose-Rate Fields for Lung SBRT: A Feasibility Study.

    PubMed

    Yi, B; Yang, X; Niu, Y; Yu, C

    2012-06-01

    Conformal SBRT plans for lung cancer with static gantry angles are ideal candidates for motion tracking because of (1) better dosimetric conformity with a reduced target margin and (2) easier and more faithful target tracking without intensity modulation. This work demonstrates that, by delivering the target tracking during gantry rotation, we can significantly improve delivery efficiency without negatively affecting plan quality. A lung SBRT plan with static beams was created using CT images of the reference breathing phase. It was converted to an arc plan with variable dose rate and then to a 4D plan with the segment aperture morphing (SAM) method (Gui 2010), accounting for both target location and shape changes as depicted by the 4D CT. Gantry angle ranges were determined from the clinical monitor units, using 22.2 MU/degree, chosen to maximize the dose rate. All segments of the dynamic 4D plan were merged into a single arc with variable dose rate. Each segment, occupying 1/10 of the breathing period, delivers 6.6 MU at a dose rate of 1000 MU/min. Delivery time was measured and compared with the planned time. The dose distributions of the single-phase 3D plan and the 4D arc plan showed little difference. The delivery time of the 4D arc plan agreed with the calculated time and is almost the same as that of delivering the 3D plan without target tracking; a 12 Gy treatment takes less than 2.5 min. The feasibility of a novel 4D delivery method, in which a 3D SBRT plan is converted into a 4D arc delivery, has been demonstrated. In addition to realizing the conventional benefits of target tracking, our method further improves delivery efficiency, which is important for maintaining the geometric relationship between the target motion and the breathing surrogate during treatment. This study is supported by NIH_Grant_1R01CA133539-01 A2. © 2012 American Association of Physicists in Medicine.
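
    The timing quoted above can be checked with one-line arithmetic (assuming, as stated, 10 segments per breathing cycle):

```python
# Worked check of the delivery numbers in the abstract
mu_per_segment = 6.6            # MU
dose_rate = 1000.0              # MU/min
mu_per_degree = 22.2            # MU/degree

seg_time_s = mu_per_segment / dose_rate * 60.0      # 0.396 s per segment
breathing_period_s = 10 * seg_time_s                # 3.96 s breathing cycle
degrees_per_cycle = 10 * mu_per_segment / mu_per_degree
print(seg_time_s, round(breathing_period_s, 3), round(degrees_per_cycle, 2))
```

    So each breathing cycle consumes 66 MU over roughly 3 degrees of gantry rotation, consistent with a sub-2.5-minute 12 Gy delivery.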

  6. Crustal motion studies in the southwest Pacific: Geodetic measurements of plate convergence in Tonga, Vanuatu and the Solomon Islands

    NASA Astrophysics Data System (ADS)

    Phillips, David A.

    The southwest Pacific is one of the most tectonically dynamic regions on Earth. This research focused on crustal motion studies in three regions of active Pacific-Australia plate convergence in the southwest Pacific: Tonga, the New Hebrides (Vanuatu) and the Solomon Islands. In Tonga, new and refined velocity estimates based on more than a decade of Global Positioning System (GPS) measurements and advanced analysis techniques are much more accurate than previously reported values. Convergence rates of 80 to 165 mm/yr at the Tonga trench represent the fastest plate motions observed on Earth. For the first time, rotation of the Fiji platform relative to the Australian plate is observed, and anomalous deformation of the Tonga ridge was also detected. In the New Hebrides, a combined GPS dataset with a total time series of more than ten years led to new and refined velocity estimates throughout the island arc. Impingement of large bathymetric features has led to arc fragmentation, and four distinct tectonic segments are identified. The central New Hebrides arc segment is being shoved eastward relative to the rest of the arc as convergence is partitioned between the forearc (Australian plate) and the backarc (North Fiji Basin) boundaries due to impingement of the d'Entrecasteaux Ridge and associated Bougainville seamount. The southern New Hebrides arc converges with the Australian plate more rapidly than predicted due to backarc extension. The first measurements of convergence in the northern and southernmost arc segments were also made. In the Solomon Islands, a four-year GPS time series was used to generate the first geodetic estimates of crustal velocity in the New Georgia Group, with 57--84 mm/yr of Australia-Solomon motion and 19--39 mm/yr of Pacific-Solomon motion being observed. These velocities are 20--40% lower than predicted Australia-Pacific velocities.
Two-dimensional dislocation models suggest that most of this discrepancy can be attributed to locking of the San Cristobal trench and elastic strain accumulation in the forearc. Anomalous motion at Simbo island is also observed.

  7. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method produced the best automatic contour. These results suggest performing an image-quality assessment before segmentation and combining different methods to obtain optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg. © 2016 The Authors.
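
    The TRE used in this comparison reduces to the Euclidean distance between mask centroids:

```python
import numpy as np

def target_registration_error(auto_mask, manual_mask):
    """TRE as defined above: Euclidean distance between the centroids of the
    automatically segmented ROI and the manual ROI."""
    ca = np.array(np.nonzero(auto_mask)).mean(axis=1)
    cm = np.array(np.nonzero(manual_mask)).mean(axis=1)
    return float(np.linalg.norm(ca - cm))
```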

  8. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    PubMed Central

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method produced the best automatic contour. These results suggest performing an image-quality assessment before segmentation and combining different methods to obtain optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg

  9. Gas measurements from the Costa Rica-Nicaragua volcanic segment suggest possible along-arc variations in volcanic gas chemistry

    NASA Astrophysics Data System (ADS)

    Aiuppa, A.; Robidoux, P.; Tamburello, G.; Conde, V.; Galle, B.; Avard, G.; Bagnato, E.; De Moor, J. M.; Martínez, M.; Muñóz, A.

    2014-12-01

    Obtaining accurate estimates of the CO2 output from arc volcanism requires a precise understanding of the potential along-arc variations in volcanic gas chemistry, and ultimately of the magmatic gas signature of each individual arc segment. In an attempt to more fully constrain the magmatic gas signature of the Central America Volcanic Arc (CAVA), we present here the results of a volcanic gas survey performed during March and April 2013 at five degassing volcanoes within the Costa Rica-Nicaragua volcanic segment (CNVS). Observations of the volcanic gas plume made with a multicomponent gas analyzer system (Multi-GAS) have allowed characterization of the CO2/SO2-ratio signature of the plumes at Poás (0.30±0.06, mean ± SD), Rincón de la Vieja (27.0±15.3), and Turrialba (2.2±0.8) in Costa Rica, and at Telica (3.0±0.9) and San Cristóbal (4.2±1.3) in Nicaragua (all ratios on a molar basis). By scaling these plume compositions to simultaneously measured SO2 fluxes, we estimate that the CO2 outputs of CNVS volcanoes range from low (25.5±11.0 tons/day at Poás) to moderate (918 to 1270 tons/day at Turrialba). These results add new information to the still fragmentary volcanic CO2 output data set and allow the total CO2 output from the CNVS to be estimated at 2835±1364 tons/day. Our results, combined with previously available information about gas emissions in Central America, suggest distinct volcanic gas CO2/ST (= SO2 + H2S)-ratio signatures for magmatic volatiles in Nicaragua (∼3) relative to Costa Rica (∼0.5-1.0). We also provide additional evidence for the earlier theory relating the CO2-richer signature of Nicaraguan volcanism to increased contributions from slab-derived fluids, relative to more MORB-like volcanism in Costa Rica. The sizeable along-arc variations in magmatic gas chemistry suggested by the present study indicate that additional gas observations are urgently needed to more precisely constrain the volcanic CO2 output from the CAVA, and from global arc volcanism.
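
    Scaling a plume CO2/SO2 molar ratio to a CO2 mass flux is a one-line conversion; the SO2 flux used below is an illustrative placeholder, not a measured value from the survey.

```python
# Converting a Multi-GAS molar CO2/SO2 plume ratio into a CO2 mass flux
M_CO2, M_SO2 = 44.01, 64.06   # molar masses, g/mol

def co2_flux_t_per_day(so2_flux_t_per_day, co2_so2_molar_ratio):
    """CO2 flux = SO2 flux x (molar CO2/SO2 ratio) x (molar-mass correction)."""
    return so2_flux_t_per_day * co2_so2_molar_ratio * (M_CO2 / M_SO2)

# e.g. a Turrialba-like plume ratio of 2.2 with an assumed 600 t/day of SO2
print(round(co2_flux_t_per_day(600.0, 2.2), 1))  # ~907 t/day
```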

  10. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole-body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation, explores the limitations of a Random Forest algorithm applied to the CT environment, and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool designed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of the tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with the noise-reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, and variance filters and from the Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of the seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool produces fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.
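
    A toy version of per-voxel Random Forest classification in the spirit of the TWS setup (200 trees, 2 features tried per split). The image, labels, and the particular filter choices below are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(img):
    """Per-pixel feature stack loosely mirroring the TWS filters described
    above: raw intensity, local mean/variance at two window sizes, Gaussian."""
    feats = [img]
    for size in (3, 5):
        m = uniform_filter(img, size)
        feats += [m, uniform_filter(img ** 2, size) - m ** 2]  # mean, variance
    feats.append(gaussian_filter(img, 1.0))
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Hypothetical two-material "image": a noisy intensity step
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += 0.05 * rng.standard_normal(img.shape)
labels = np.zeros((32, 32), dtype=int)
labels[:, 16:] = 1

# 200 trees, 2 features randomly selected per node, as in the study
clf = RandomForestClassifier(n_estimators=200, max_features=2, random_state=0)
clf.fit(voxel_features(img), labels.ravel())
pred = clf.predict(voxel_features(img)).reshape(32, 32)
```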

  11. Genetic Algorithm for Solving Fuzzy Shortest Path Problem in a Network with mixed fuzzy arc lengths

    NASA Astrophysics Data System (ADS)

    Mahdavi, Iraj; Tajdin, Ali; Hassanzadeh, Reza; Mahdavi-Amiri, Nezam; Shafieian, Hosna

    2011-06-01

    We are concerned with the design of a model and an algorithm for computing a shortest path in a network having various types of fuzzy arc lengths. First, we develop a new technique for the addition of various fuzzy numbers along a path using α-cuts, by proposing a linear least-squares model to obtain membership functions for the resulting sums. Then, using a recently proposed distance function for the comparison of fuzzy numbers, we propose a new approach for solving the fuzzy shortest path problem using a genetic algorithm. Examples are worked out to illustrate the applicability of the proposed model.
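
    For the special case of triangular fuzzy arc lengths, α-cut addition reduces to componentwise addition, and a crisp ranking function lets an ordinary Dijkstra search stand in for the paper's genetic algorithm. The graded-mean ranking used here is one common choice, not the distance function of the paper.

```python
import heapq

def add_tri(a, b):
    """Triangular fuzzy numbers (l, m, u) add componentwise (exact via α-cuts)."""
    return tuple(x + y for x, y in zip(a, b))

def rank(a):
    """Graded-mean defuzzification, used to compare fuzzy path lengths."""
    l, m, u = a
    return (l + 4 * m + u) / 6.0

def fuzzy_dijkstra(graph, src):
    """Dijkstra over fuzzy arc lengths, ordering the queue by the crisp rank.
    graph: {node: [(neighbour, (l, m, u)), ...]}."""
    best = {src: (0.0, 0.0, 0.0)}
    heap = [(0.0, src)]
    while heap:
        r, node = heapq.heappop(heap)
        if r > rank(best[node]):
            continue                      # stale queue entry
        for nxt, w in graph.get(node, []):
            cand = add_tri(best[node], w)
            if nxt not in best or rank(cand) < rank(best[nxt]):
                best[nxt] = cand
                heapq.heappush(heap, (rank(cand), nxt))
    return best
```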

  12. Graph run-length matrices for histopathological image segmentation.

    PubMed

    Tosun, Akif Burak; Gunduz-Demir, Cigdem

    2011-03-01

    The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability, as it relies mainly on the visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates background knowledge of the tissue organization into segmentation. For this purpose, it quantifies the spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices; however, it considers the runs of cytological components on a graph to form a matrix, instead of the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, while also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
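
    For orientation, here is the classical pixel-based gray-level run-length matrix that the graph version generalizes; the paper replaces horizontal pixel runs with runs of cytological components along graph edges.

```python
import numpy as np

def glrlm_horizontal(img, n_levels):
    """Classical gray-level run-length matrix along rows: entry (g, r-1)
    counts maximal horizontal runs of length r at gray level g."""
    max_run = img.shape[1]
    M = np.zeros((n_levels, max_run), dtype=int)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                M[run_val, run_len - 1] += 1   # close the finished run
                run_val, run_len = v, 1
        M[run_val, run_len - 1] += 1           # close the last run of the row
    return M
```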

  13. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images

    PubMed Central

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting increasing attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method that combines thresholding with an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selected threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method can achieve optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315
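The peak-counting step can be sketched as a local-maximum search over the intensity image. This is a simple stand-in, not the paper's method: a pixel counts as a peak if it exceeds a threshold and is the unique maximum of its 3x3 neighborhood, and the threshold and toy image are illustrative assumptions.

```python
# Sketch: estimating the number and locations of clustered cells
# from local intensity peaks.
import numpy as np

def count_intensity_peaks(img, threshold):
    peaks = []
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            # Strict, unique maximum of the 3x3 neighborhood.
            if (img[y, x] > threshold and img[y, x] == patch.max()
                    and (patch == img[y, x]).sum() == 1):
                peaks.append((y, x))
    return peaks
```

Each detected peak then seeds the splitting of a clustered blob into individual cells.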

  14. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is determined dynamically from the local residual during the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it achieves the best balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
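The k-means baseline against which the superpixel energy is compared can be sketched on 1-D pixel intensities. This is a sketch of the baseline only, under illustrative initial centers; the paper's adaptive-regularity energy and pixel-sharing optimization are not reproduced.

```python
# Sketch: Lloyd-style k-means on 1-D intensities, the baseline
# clustering that superpixel energies typically extend.
import numpy as np

def kmeans_1d(values, centers, iters=20):
    values = np.asarray(values, dtype=float)
    centers = np.asarray(centers, dtype=float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assign each value to its nearest center, then re-estimate.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return centers, labels
```

Superpixel formulations add a spatial regularization term to this data-fidelity objective; here that trade-off is fixed, whereas the paper adapts it to the local residual.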

  15. Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images.

    PubMed

    Arslan, Salim; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2013-06-01

    More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.

  16. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

The development, simulation, and testing of an algorithm for anti-aliasing vector drawings is discussed. The pseudo anti-aliasing line drawing algorithm is an extension of Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments in which the display intensity shifts from one segment to the other within the overlap (transition region). In this algorithm the length of the overlap and the intensity shift are essentially constant; the transition region aids the eye in integrating the segments into a single smooth line.
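The base algorithm being extended is classic integer Bresenham rasterization, sketched below. The overlapping-segment intensity transitions of the pseudo anti-aliasing extension are not shown; this is only the underlying line-stepping loop.

```python
# Sketch: Bresenham's integer line algorithm, which the pseudo
# anti-aliasing algorithm extends with overlapping segments.

def bresenham(x0, y0, x1, y1):
    """Integer grid points on the line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:          # step in x
            err -= dy
            x0 += sx
        if e2 < dx:           # step in y
            err += dx
            y0 += sy
    return points
```

The error term `err` tracks the distance from the ideal line, so the loop needs only integer additions, which is what made the algorithm suitable for plotter control.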

  17. Magmatism and Epithermal Gold-Silver Deposits of the Southern Ancestral Cascade Arc, Western Nevada and Eastern California

    USGS Publications Warehouse

    John, David A.; du Bray, Edward A.; Henry, Christopher D.; Vikre, Peter

    2015-01-01

    Many epithermal gold-silver deposits are temporally and spatially associated with late Oligocene to Pliocene magmatism of the southern ancestral Cascade arc in western Nevada and eastern California. These deposits, which include both quartz-adularia (low- and intermediate-sulfidation; Comstock Lode, Tonopah, Bodie) and quartz-alunite (high-sulfidation; Goldfield, Paradise Peak) types, were major producers of gold and silver. Ancestral Cascade arc magmatism preceded that of the modern High Cascades arc and reflects subduction of the Farallon plate beneath North America. Ancestral arc magmatism began about 45 Ma, continued until about 3 Ma, and extended from near the Canada-United States border in Washington southward to about 250 km southeast of Reno, Nevada. The ancestral arc was split into northern and southern segments across an inferred tear in the subducting slab between Mount Shasta and Lassen Peak in northern California. The southern segment extends between 42°N in northern California and 37°N in western Nevada and was active from about 30 to 3 Ma. It is bounded on the east by the northeast edge of the Walker Lane. Ancestral arc volcanism represents an abrupt change in composition and style of magmatism relative to that in central Nevada. Large volume, caldera-forming, silicic ignimbrites associated with the 37 to 19 Ma ignimbrite flareup are dominant in central Nevada, whereas volcanic centers of the ancestral arc in western Nevada consist of andesitic stratovolcanoes and dacitic to rhyolitic lava domes that mostly formed between 25 and 4 Ma. Both ancestral arc and ignimbrite flareup magmatism resulted from rollback of the shallowly dipping slab that began about 45 Ma in northeast Nevada and migrated south-southwest with time. Most southern segment ancestral arc rocks have oxidized, high potassium, calc-alkaline compositions with silica contents ranging continuously from about 55 to 77 wt%. 
Most lavas are porphyritic and contain coarse plagioclase ± hornblende, biotite, and pyroxene phenocrysts. Seven epithermal gold-silver deposits with >1 Moz gold production, several large elemental sulfur deposits, and many large areas (tens to >100 km²) of hydrothermally altered rocks are present in the southern ancestral arc, especially south of latitude 40°N. These deposits are principally hosted by intermediate to silicic lava dome complexes; only a few deposits are associated with mafic- to intermediate-composition stratovolcanoes. Large deposits are most abundant and well developed in volcanic fields whose evolution spanned millions of years. Most deposits are hundreds of thousands to several million years younger than their host rocks, although some quartz-alunite deposits are essentially coeval with their host rocks. Variable composition and thickness of crustal basement is the primary control on mineralization along the length of the southern ancestral arc; most deposits and large alteration zones are localized in basement rock terranes with a strong continental affinity, either along the edge of the North American craton (Goldfield, Tonopah) or in an accreted terrane with continental affinities (Walker Lake terrane; Aurora, Bodie, Comstock Lode, Paradise Peak). Epithermal deposits and quartz-alunite alteration zones are scarce to absent in the northern part of the ancestral arc above an accreted island arc (Black Rock terrane) or unknown basement rocks (Modoc Plateau). Walker Lane structures and areas that underwent large magnitude extension during the Late Cenozoic (areas with Oligocene-early Miocene volcanic rocks dipping >40°) do not provide regional control on mineralization. Instead, these features may have served as local-scale conduits for mineralizing fluids.

  18. Multi scales based sparse matrix spectral clustering image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
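A sparse similarity matrix of the kind used to cut spectral clustering's memory cost can be sketched as a k-nearest-neighbour Gaussian affinity. This is a generic sketch, not the paper's construction: the feature extraction, multi-scale fusion, and eigendecomposition steps are omitted, and `knn_affinity`, `k`, and `sigma` are illustrative names and parameters.

```python
# Sketch: Gaussian affinities kept only for each point's k nearest
# neighbours; all other entries stay zero, so the matrix can be
# stored and factored in sparse form.
import numpy as np

def knn_affinity(features, k, sigma):
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    n = len(features)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:   # skip self at distance 0
            W[i, j] = np.exp(-d2[i, j] / (2 * sigma ** 2))
    return (W + W.T) / 2                        # symmetrize
```

Spectral clustering then operates on the graph Laplacian of W; with k much smaller than n, both storage and eigen-solver cost drop from dense O(n²) behavior.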

  19. Tracing crustal contamination along the Java segment of the Sunda Arc, Indonesia

    NASA Astrophysics Data System (ADS)

    Jolis, E. M.; Troll, V.; Deegan, F.; Blythe, L.; Harris, C.; Freda, C.; Hilton, D.; Chadwick, J.; Van Helden, M.

    2012-04-01

Arc magmas typically display chemical and petrographic characteristics indicative of crustal input. Crustal contamination can take place either in the mantle source region or as magma traverses the upper crust (e.g. [1]). While source contamination is generally considered the dominant process (e.g. [2]), late-stage crustal contamination has been recognised at volcanic arcs too (e.g. [3]). In light of this, we aim to test the extent of upper crustal versus source contamination along the Java segment of the Sunda arc, which, due to its variable upper crustal structure, is an exemplary natural laboratory. We present a detailed geochemical study of 7 volcanoes along a traverse from Anak-Krakatau in the Sunda strait through Java and Bali, to characterise the impact of the overlying crust on arc magma composition. Using rock and mineral elemental geochemistry, radiogenic (Sr, Nd and Pb) and stable (O) isotopes, we show a correlation between upper crustal composition and the degree of upper crustal contamination. We find an increase in 87Sr/86Sr and δ18O values, and a decrease in 143Nd/144Nd values from Krakatau towards Merapi, indicating substantial crustal input from the thick continental basement present. Volcanoes to the east of Merapi and the Progo-Muria fault transition zone, where the upper crust is thinner, in turn show considerably less crustal input in their isotopic signatures, indicating a stronger influence of the mantle source. Our new data represent a systematic and high-resolution arc-wide sampling effort that allows us to distinguish the effects of the upper crust on the compositional spectrum of individual volcanic systems along the Sunda arc. [1] Davidson, J.P., Hora, J.M., Garrison, J.M. & Dungan, M.A. 2005. Crustal Forensics in Arc Magmas. J. Volcanol. Geotherm. Res. 140, 157-170; [2] Debaille, V., Doucelance, R., Weis, D. & Schiano, P. 2005. Geochim. Cosmochim. Acta 70, 723-741; [3] Gasparon, M., Hilton, D.R. & Varne, R. 1994. Earth Planet. Sci. Lett. 126, 15-22.

  20. Users manual for the IMA program

    NASA Technical Reports Server (NTRS)

    Williams, D. F.

    1991-01-01

The Impulsive Mission Analysis (IMA) computer program provides a user-friendly means of designing a complete Earth-orbital mission profile using an 80386-based microcomputer. The IMA program produces a trajectory summary, an output file for use by the new Simplex Computation of Optimum Orbital Trajectories (SCOOT) program, and several graphics, including ground tracks on a world map, altitude profiles, relative motion plots, and sunlight/communication timelines. The user can design missions using any combination of three basic types of mission segments: double co-elliptic rendezvous, payload delivery, and payload de-orbit/spacecraft recovery. Each mission segment is divided into one or more transfers, and each transfer is divided into one or more legs, each leg consisting of a coast arc followed by a burn arc.

  1. Robust generative asymmetric GMM for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last decades. However, most GMM based segmentation methods suffer from limited accuracy due to the influences of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the within-cluster and between-cluster neighboring priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization (EM) algorithm is derived to iteratively maximize the approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image and segmenting the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show the intermediate illustrations and the estimated distribution. The next group of experiments was carried out on clinical 3T brain MR images, which contain quite serious intensity inhomogeneity and noise. We then quantitatively compare our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. 
In this paper, the RGAGMM algorithm is proposed, which can simply and efficiently incorporate spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The proposed algorithm is flexible enough to fit the data shapes, can simultaneously overcome the influence of noise and intensity inhomogeneity, and improves segmentation accuracy by over 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
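The EM iteration that RGAGMM builds on can be sketched for a plain two-component 1-D Gaussian mixture. This is the symmetric baseline only, under crude illustrative initialization; the asymmetric distribution, spatial pseudo-likelihood terms, and bias-field estimation of the paper are all omitted.

```python
# Sketch: EM for a two-component 1-D Gaussian mixture.
import numpy as np

def em_gmm_1d(x, iters=50):
    mu = np.array([x.min(), x.max()])   # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample.
        p = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

Segmentation then labels each voxel by its largest responsibility; the paper's contribution is the extra terms added to this log-likelihood, not the EM loop itself.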

  2. Detection of pseudosinusoidal epileptic seizure segments in the neonatal EEG by cascading a rule-based algorithm with a neural network.

    PubMed

    Karayiannis, Nicolaos B; Mukherjee, Amit; Glover, John R; Ktonas, Periklis Y; Frost, James D; Hrachovy, Richard A; Mizrahi, Eli M

    2006-04-01

This paper presents an approach to detect epileptic seizure segments in the neonatal electroencephalogram (EEG) by characterizing the spectral features of the EEG waveform using a rule-based algorithm cascaded with a neural network. The rule-based algorithm flags short segments of pseudosinusoidal EEG patterns as epileptic based on features in the power spectrum. The output of the rule-based algorithm is used to train and compare the performance of conventional feedforward neural networks and quantum neural networks. The results indicate that the trained neural networks, cascaded with the rule-based algorithm, improved on the performance of the rule-based algorithm acting by itself. The evaluation of the proposed cascaded scheme for the detection of pseudosinusoidal seizure segments reveals its potential as a building block of the automated seizure detection system under development.
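A typical power-spectrum feature for such a rule-based screen is the dominant frequency of a segment and how concentrated the power is there, since a pseudosinusoidal segment puts most of its energy in one narrow band. This sketch shows that feature only; the paper's actual rules and thresholds are not reproduced.

```python
# Sketch: dominant-frequency feature of an EEG segment via the FFT.
import numpy as np

def dominant_frequency(segment, fs):
    """Return (peak frequency in Hz, fraction of power at that peak)."""
    spec = np.abs(np.fft.rfft(segment - segment.mean())) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    k = spec.argmax()
    return freqs[k], spec[k] / spec.sum()
```

A rule might flag a segment whose power fraction exceeds some threshold within a physiologically plausible frequency band; the neural network then refines those candidate detections.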

  3. Learning from Demonstration: Generalization via Task Segmentation

    NASA Astrophysics Data System (ADS)

    Ettehadi, N.; Manaffam, S.; Behal, A.

    2017-10-01

    In this paper, a motion segmentation algorithm design is presented with the goal of segmenting a learned trajectory from demonstration such that each segment is locally maximally different from its neighbors. This segmentation is then exploited to appropriately scale (dilate/squeeze and/or rotate) a nominal trajectory learned from a few demonstrations on a fixed experimental setup such that it is applicable to different experimental settings without expanding the dataset and/or retraining the robot. The algorithm is computationally efficient in the sense that it allows facile transition between different environments. Experimental results using the Baxter robotic platform showcase the ability of the algorithm to accurately transfer a feeding task.

  4. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    PubMed

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained by iteratively applying a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology, and modifying the FDCT coefficients. Then, the Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundaries of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
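The "simple region growing" step can be sketched as a flood fill that absorbs neighbouring pixels close to the running region mean. This is a generic sketch under illustrative assumptions (4-connectivity, a fixed tolerance); it is not the paper's exact merging of centerline and top-hat images.

```python
# Sketch: region growing from a seed, absorbing 4-connected
# neighbours whose intensity is within tol of the region mean.
import numpy as np

def region_grow(img, seed, tol):
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    mask[seed] = True
    total, count = float(img[seed]), 1
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - total / count) <= tol):
                mask[ny, nx] = True
                total += float(img[ny, nx])
                count += 1
                stack.append((ny, nx))
    return mask
```

Seeding from detected centerline pixels lets the grown regions recover the full vessel width around each centerline.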

  5. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution

    PubMed Central

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained by iteratively applying a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology, and modifying the FDCT coefficients. Then, the Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundaries of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170

  6. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.

  7. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior.

    PubMed

    Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan

    2014-01-01

Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Beyond the segmentation of single-modal brain-tumor images, we extended the algorithm to segment multimodal brain-tumor images using the magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, and the algorithm has great potential for practical real-time clinical use.
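The anisotropic diffusion component is commonly the Perona-Malik scheme, sketched below; the edge-stopping conductance smooths homogeneous regions while preserving boundaries. The step count, `kappa`, and `dt` here are illustrative choices, not values from the paper, and the MDP and MRF parts are not shown.

```python
# Sketch: Perona-Malik anisotropic diffusion on a 2-D image.
import numpy as np

def anisotropic_diffusion(img, steps=10, kappa=20.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(steps):
        # Differences to the four neighbours; replicated borders
        # give zero flux at the image boundary.
        n = np.roll(u, 1, axis=0);  n[0] = u[0]
        s = np.roll(u, -1, axis=0); s[-1] = u[-1]
        w = np.roll(u, 1, axis=1);  w[:, 0] = u[:, 0]
        e = np.roll(u, -1, axis=1); e[:, -1] = u[:, -1]
        diffs = [n - u, s - u, w - u, e - u]
        # Edge-stopping conductance g = exp(-(|grad|/kappa)^2):
        # small gradients (noise) diffuse, large gradients (edges) persist.
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u
```

Because each pairwise flux is antisymmetric, the scheme conserves the image mean while shrinking noise variance, which is why it is a safe pre-smoothing step before clustering.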

  8. Multimodal Brain-Tumor Segmentation Based on Dirichlet Process Mixture Model with Anisotropic Diffusion and Markov Random Field Prior

    PubMed Central

    Lu, Yisu; Jiang, Jun; Chen, Wufan

    2014-01-01

Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Beyond the segmentation of single-modal brain-tumor images, we extended the algorithm to segment multimodal brain-tumor images using the magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, and the algorithm has great potential for practical real-time clinical use. PMID:25254064

  9. Sedimentation in the central segment of the Aleutian Trench: Sources, transport, and depositional style

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevenson, A.J.; Scholl, D.W.; Vallier, T.L.

    1990-05-01

The central segment of the Aleutian Trench (162°W to 175°E) is an intraoceanic subduction zone that contains an anomalously thick sedimentary fill (4 km maximum). The fill is an arcward-thickening and slightly tilted wedge of sediment characterized acoustically by laterally continuous, closely spaced, parallel reflectors. These relations are indicative of turbidite deposition. The trench floor and reflection horizons are planar, showing no evidence of an axial channel or any transverse fan bodies. Cores of surface sediment recover turbidite layers, implying that sediment transport and deposition occur via diffuse, sheetlike, fine-grained turbidite flows that occupy the full width of the trench. The mineralogy of Holocene trench sediments documents a mixture of island-arc (dominant) and continental source terranes. GLORIA side-scan sonar images reveal a westward-flowing axial trench channel that conducts sediment to the eastern margin of the central segment, where channelized flow ceases. Much of the sediment transported in this channel is derived from glaciated drainages surrounding the Gulf of Alaska, which empty into the eastern trench segment via deep-sea channel systems (Surveyor and others) and submarine canyons (Hinchinbrook and others). Insular sediment transport is more difficult to define. GLORIA images show the efficiency with which the actively growing accretionary wedge impounds sediment that manages to cross a broad fore-arc terrace. It is likely that island-arc sediment reaches the trench either directly via air fall, via recycling of the accretionary prism, or via overtopping of the accretionary ridges by the upper parts of thick turbidite flows.

  10. Stabilization and mobility of the head, neck and trunk in horses during overground locomotion: comparisons with humans and other primates

    PubMed Central

    Dunbar, Donald C.; Macpherson, Jane M.; Simmons, Roger W.; Zarcades, Athina

    2009-01-01

Segmental kinematics were investigated in horses during overground locomotion and compared with published reports on humans and other primates to determine the impact of a large neck on rotational mobility (>20 deg.) and stability (≤20 deg.) of the head and trunk. Three adult horses (Equus caballus) performing walks, trots and canters were videotaped in lateral view. Data analysis included locomotor velocity, segmental positions, pitch and linear displacements and velocities, and head displacement frequencies. Equine, human and monkey skulls and cervical spines were measured to estimate eye and vestibular arc length during head pitch displacements. Horses stabilized all three segments in all planes during all three gaits, unlike monkeys and humans who make large head pitch and yaw rotations during walks, and monkeys that make large trunk pitch rotations during gallops. Equine head angular displacements and velocities, with some exceptions during walks, were smaller than in humans and other primates. Nevertheless, owing to greater off-axis distances, orbital and vestibular arc lengths remained larger in horses, with the exception of head–neck axial pitch during trots, in which equine arc lengths were smaller than in running humans. Unlike monkeys and humans, equine head peak-frequency ranges fell within the estimated range in which inertia has a compensatory stabilizing effect. This inertial effect was typically over-ridden, however, by muscular or ligamentous intervention. Thus, equine head pitch was not consistently compensatory, as reported in humans. The equine neck isolated the head from the trunk enabling both segments to provide a spatial reference frame. PMID:19043061

  11. Stabilization and mobility of the head, neck and trunk in horses during overground locomotion: comparisons with humans and other primates.

    PubMed

    Dunbar, Donald C; Macpherson, Jane M; Simmons, Roger W; Zarcades, Athina

    2008-12-01

Segmental kinematics were investigated in horses during overground locomotion and compared with published reports on humans and other primates to determine the impact of a large neck on rotational mobility (>20 deg.) and stability (≤20 deg.) of the head and trunk. Three adult horses (Equus caballus) performing walks, trots and canters were videotaped in lateral view. Data analysis included locomotor velocity, segmental positions, pitch and linear displacements and velocities, and head displacement frequencies. Equine, human and monkey skulls and cervical spines were measured to estimate eye and vestibular arc length during head pitch displacements. Horses stabilized all three segments in all planes during all three gaits, unlike monkeys and humans who make large head pitch and yaw rotations during walks, and monkeys that make large trunk pitch rotations during gallops. Equine head angular displacements and velocities, with some exceptions during walks, were smaller than in humans and other primates. Nevertheless, owing to greater off-axis distances, orbital and vestibular arc lengths remained larger in horses, with the exception of head-neck axial pitch during trots, in which equine arc lengths were smaller than in running humans. Unlike monkeys and humans, equine head peak-frequency ranges fell within the estimated range in which inertia has a compensatory stabilizing effect. This inertial effect was typically over-ridden, however, by muscular or ligamentous intervention. Thus, equine head pitch was not consistently compensatory, as reported in humans. The equine neck isolated the head from the trunk enabling both segments to provide a spatial reference frame.

  12. Minimum-fuel turning climbout and descent guidance of transport jets

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Kreindler, E.

    1983-01-01

    The complete flightpath optimization problem for minimum fuel consumption from takeoff to landing including the initial and final turns from and to the runway heading is solved. However, only the initial and final segments which contain the turns are treated, since the straight-line climbout, cruise, and descent problems have already been solved. The paths are derived by generating fields of extremals, using the necessary conditions of optimal control together with singular arcs and state constraints. Results show that the speed profiles for straight flight and turning flight are essentially identical except for the final horizontal accelerating or decelerating turns. The optimal turns require no abrupt maneuvers, and an approximation of the optimal turns could be easily integrated with present straight-line climb-cruise-descent fuel-optimization algorithms. Climbout at the optimal IAS rather than the 250-knot terminal-area speed limit would save 36 lb of fuel for the 727-100 aircraft.

  13. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation in bookshelf images. The algorithm consists of three parts. The first part is pre-processing, which aims to eliminate or reduce the effects of image noise and illumination conditions. The second part is near-horizontal line detection based on the Canny edge detector, which separates a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected and the resulting lines are used to segment the individual books. The proposed algorithm was tested on bookshelf images taken from the OPIE library at MTU, and the experimental results demonstrate good performance.
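
    The near-horizontal shelf boundaries described above can be approximated with a simple projection profile; a minimal numpy sketch (the `find_shelf_rows` helper, its `frac` threshold, and the synthetic image are illustrative assumptions, not the authors' Canny-based implementation):

    ```python
    import numpy as np

    def find_shelf_rows(gray, frac=0.5):
        """Return row indices whose horizontal edge energy exceeds a
        fraction of the maximum -- candidate shelf boundaries."""
        # vertical gradient highlights near-horizontal edges (shelf boards)
        gy = np.abs(np.diff(gray.astype(float), axis=0))
        profile = gy.sum(axis=1)  # one value per pair of adjacent rows
        return np.flatnonzero(profile >= frac * profile.max())

    # synthetic image: two bright shelf boards on a dark background
    img = np.zeros((100, 60), dtype=np.uint8)
    img[30, :] = 255
    img[70, :] = 255
    rows = find_shelf_rows(img)
    ```

    The image would then be cut between consecutive groups of detected rows, giving one sub-image per shelf.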

  14. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven IMRT, H and N, and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reduction in the numbers of MU and segments was found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.
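
    As a rough illustration of splitting an intensity map to reduce delivery cost, the sketch below splits a wide integer IM at the column minimising the summed per-sub-map MU lower bound for unidirectional MLC delivery (the bound is standard; the split criterion is a simplified stand-in for the authors' complexity measure, and the map is a toy example):

    ```python
    import numpy as np

    def mu_lower_bound(im):
        """Standard lower bound on MU for unidirectional MLC delivery of
        an integer intensity map: max over leaf rows of the sum of
        positive left-to-right intensity increments."""
        im = np.asarray(im, dtype=int)
        inc = np.maximum(np.diff(im, axis=1, prepend=0), 0)
        return int(inc.sum(axis=1).max())

    def best_split(im, min_width=2):
        """Try every split column and return the one minimising the
        summed MU bound of the two sub-maps."""
        cols = range(min_width, im.shape[1] - min_width + 1)
        return min(cols, key=lambda c: mu_lower_bound(im[:, :c])
                                       + mu_lower_bound(im[:, c:]))

    im = np.array([[1, 2, 3, 0, 4, 1],
                   [0, 3, 1, 2, 2, 0]])
    c = best_split(im)  # split im into im[:, :c] and im[:, c:]
    ```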

  15. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image pixels and their variance. In addition, pixel values above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq by comparing its output with that of the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.
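
    The raster-scan clustering that Fast Scanning performs can be sketched as follows with a fixed threshold (`fast_scan` and the toy image are illustrative; the paper's contribution is to make `thr` adaptive, and a full implementation would also merge clusters when the upper and left labels differ):

    ```python
    import numpy as np

    def fast_scan(img, thr):
        """Raster-scan clustering: each pixel joins the cluster of its
        upper or left neighbour when the grey-level difference is below
        `thr`, otherwise it starts a new cluster."""
        h, w = img.shape
        labels = np.zeros((h, w), dtype=int)
        nxt = 1
        for y in range(h):
            for x in range(w):
                up = labels[y - 1, x] if y > 0 and abs(int(img[y, x]) - int(img[y - 1, x])) < thr else 0
                left = labels[y, x - 1] if x > 0 and abs(int(img[y, x]) - int(img[y, x - 1])) < thr else 0
                if up and left:
                    labels[y, x] = min(up, left)  # a full version also merges the two clusters
                elif up or left:
                    labels[y, x] = up or left
                else:
                    labels[y, x] = nxt
                    nxt += 1
        return labels

    img = np.array([[10, 12, 200],
                    [11, 13, 205],
                    [12, 14, 210]], dtype=np.uint8)
    seg = fast_scan(img, thr=20)  # two clusters: dark left block, bright right column
    ```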

  16. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of different values, iterating to convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than those of the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method.

  17. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of different values, iterating to convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than those of the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
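
    A minimal harmony search loop, shown here minimising a toy quadratic, illustrates the memory-revision idea described above (parameter names `hms`, `hmcr` and `par` follow common HS conventions; the rough-set and fuzzy-clustering extensions of the paper are not included):

    ```python
    import random

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
        """Minimal harmony search: keep a memory of candidate solutions,
        improvise new ones by mixing memory values (rate hmcr) with
        random pitch adjustment (rate par), and replace the worst member
        whenever the new harmony improves on it."""
        rng = random.Random(seed)
        dim = len(bounds)
        memory = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for d in range(dim):
                lo, hi = bounds[d]
                if rng.random() < hmcr:
                    v = rng.choice(memory)[d]
                    if rng.random() < par:  # pitch adjustment
                        v += rng.uniform(-1, 1) * 0.05 * (hi - lo)
                    v = min(max(v, lo), hi)
                else:
                    v = rng.uniform(lo, hi)
                new.append(v)
            worst = max(memory, key=f)
            if f(new) < f(worst):
                memory[memory.index(worst)] = new
        return min(memory, key=f)

    # minimum of the quadratic is at (2, -1)
    best = harmony_search(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
                          bounds=[(-5, 5), (-5, 5)])
    ```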

  18. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, J; Lin, H; Chow, J

    2015-06-15

    Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. Three different MLC leaf-sequencing algorithms based on Galvin et al, Chen et al and Siochi et al were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant in different plans. The number of beam segments and total number of monitor units were calculated for the three algorithms. Results: From the numbers of beam segments and total monitor units, we found that the algorithm of Galvin et al had the largest number of monitor units, about 70% more than the other two algorithms. Moreover, both the algorithms of Galvin et al and Siochi et al produced relatively fewer beam segments compared to Chen et al. Although the number of beam segments and the total number of monitor units calculated by the different algorithms varied with the head-and-neck plans, the algorithms of Galvin et al and Siochi et al performed well with a lower number of beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than Siochi et al. Conclusion: Although the performance of a leaf-sequencing algorithm varies with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated number of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient for head-and-neck IMRT.
    The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
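
    For intuition about how segment and MU counts arise from a fluence map, the sketch below uses a deliberately naive unit-weight sequencer (not any of the three cited algorithms): each segment opens, per leaf row, the first contiguous positive run of remaining intensity and delivers one MU.

    ```python
    import numpy as np

    def sequence(im):
        """Naive step-and-shoot sequencer used only as a counting
        baseline: deliver unit-MU segments until the map is exhausted,
        each aperture being one contiguous run per leaf row."""
        rem = np.array(im, dtype=int)
        segments, mu = 0, 0
        while rem.any():
            aperture = np.zeros_like(rem, dtype=bool)
            for r in range(rem.shape[0]):
                cols = np.flatnonzero(rem[r] > 0)
                if cols.size:
                    # open from the first positive cell to the end of its run
                    end = cols[0]
                    while end + 1 < rem.shape[1] and rem[r, end + 1] > 0:
                        end += 1
                    aperture[r, cols[0]:end + 1] = True
            rem[aperture] -= 1
            segments += 1
            mu += 1
        return segments, mu

    segs, mu = sequence([[1, 2, 0, 1],
                         [0, 1, 1, 0]])
    ```

    Real sequencers reduce both counts by using non-unit segment weights and smarter aperture choices, which is exactly the difference the study measures.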

  19. Conception et analyse d'un systeme d'optimisation de plans de vol pour les avions

    NASA Astrophysics Data System (ADS)

    Maazoun, Wissem

    The main objective of this thesis is to develop an optimization method for the preparation of flight plans for aircraft. The flight plan minimizes all costs associated with the flight. We determine an optimal path for an airplane from a departure airport to a destination airport. The optimal path minimizes the sum of all costs, i.e. the cost of fuel added to the cost of time (wages, rental of the aircraft, arrival delays, etc.). The optimal trajectory is obtained by considering all possible trajectories on a 3D graph (longitude, latitude and altitude) where the altitude levels are separated by 2,000 feet, and by applying a shortest path algorithm. The main task was to accurately compute fuel consumption on each edge of the graph, making sure that each arc has a minimal cost and is covered in a realistic way from the point of view of control, i.e. in accordance with the rules of navigation. To compute the cost of an arc, we take into account weather conditions (temperature, pressure, wind components, etc.). The optimization of each arc is done via the evaluation of an optimum speed that takes all costs into account. Each arc of the graph typically includes several sub-phases of the flight, e.g. altitude change, speed change, and constant speed and altitude. In the initial climb and the final descent phases, the costs are determined by considering altitude changes at constant CAS (Calibrated Air Speed) or constant Mach number. CAS and Mach number are adjusted to minimize cost. The aerodynamic model used is the one proposed by Eurocontrol, which uses the BADA (Base of Aircraft Data) tables. This model is based on the total energy equation that determines the instantaneous fuel consumption. Calculations on each arc are done by solving a system of differential equations that systematically takes all costs into account. To compute the cost of an arc, we must know the time to go through it, which is generally unknown.
    To have well-posed boundary conditions, we use the horizontal displacement as the independent variable of the system of differential equations. We consider the velocity components of the wind in a 3D system of coordinates to compute the instantaneous ground speed of the aircraft. To consider the cost of time, we use the cost index. The cost of an arc depends on the aircraft mass at the beginning of this arc, and this mass depends on the path. As we consider all possible paths, the cost of an arc must be computed for each trajectory to which it belongs. For a long-distance flight, the number of arcs to be considered in the graph is large and therefore the cost of an arc is typically computed many times. Our algorithm computes the costs of one million arcs in seconds while having a high accuracy. The determination of the optimal trajectory can therefore be done in a short time. To get the optimal path, the mass of the aircraft at the departure point must also be optimal. It is therefore necessary to know the optimal amount of fuel for the journey. The aircraft mass is known only at the arrival point. This mass is the mass of the aircraft including passengers, cargo and reserve fuel mass. The optimal path is determined by calculating backwards, i.e. from the arrival point to the departure point. For the determination of the optimal trajectory, we use an elliptical grid that has focal points at the departure and arrival points. The use of this grid is essential for the construction of a directed acyclic graph. We use the Bellman-Ford algorithm on a DAG to determine the shortest path. This algorithm is easy to implement and results in short computation times. Our algorithm computes an optimal trajectory with an optimal cost for each arc. Altitude changes are done optimally with respect to the mass of the aircraft and the cost of time.
Our algorithm gives the mass, speed, altitude and total cost at any point of the trajectory as well as the optimal profiles of climb and descent. A prototype has been implemented in C. We made simulations of all types of possible arcs and of several complete trajectories to illustrate the behaviour of the algorithm.
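
    The shortest-path step on the acyclic flight graph can be sketched as single-source relaxation in topological order (the graph and arc costs below are illustrative placeholders, not fuel-burn values):

    ```python
    from collections import defaultdict

    def dag_shortest_path(n, edges, src):
        """Single-source shortest paths on a DAG: build a topological
        order with Kahn's algorithm, then relax each arc once."""
        adj = defaultdict(list)
        indeg = [0] * n
        for u, v, w in edges:
            adj[u].append((v, w))
            indeg[v] += 1
        order, stack = [], [u for u in range(n) if indeg[u] == 0]
        while stack:
            u = stack.pop()
            order.append(u)
            for v, _ in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
        dist = [float('inf')] * n
        dist[src] = 0.0
        for u in order:  # one relaxation pass suffices on a DAG
            for v, w in adj[u]:
                dist[v] = min(dist[v], dist[u] + w)
        return dist

    # departure node 0, arrival node 4, two intermediate altitude levels
    edges = [(0, 1, 2.0), (0, 2, 4.0), (1, 2, 1.0),
             (1, 3, 7.0), (2, 3, 3.0), (3, 4, 1.0)]
    dist = dag_shortest_path(5, edges, 0)
    ```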

  20. An artifacts removal post-processing for epiphyseal region-of-interest (EROI) localization in automated bone age assessment (BAA)

    PubMed Central

    2011-01-01

    Background Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than global thresholding, adaptive segmentation produces a lot of unwanted noise that can affect the later process of epiphysis extraction. Methods A method with anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm is proposed to improve the ossification site localization technique, with the intent of improving the adaptive segmentation result and the region-of-interest (ROI) localization accuracy. Results The results are evaluated by quantitative and qualitative analysis using texture feature evaluation. Image homogeneity after anisotropic diffusion improved by an average of 17.59% for each age group. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm and that ROI localization accuracy improved by an average of 8.19%. The MSSIM improved by an average of 10.49% after performing the BAE algorithm on the adaptively segmented hand radiographs. Conclusions Hand radiographs that have undergone anisotropic diffusion show greatly reduced noise in the segmented image, and the proposed BAE algorithm is capable of removing the artifacts generated in adaptive segmentation. PMID:21952080
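
    The anisotropic diffusion pre-processing step can be sketched with the standard Perona-Malik scheme, which smooths within regions while an edge-stopping function preserves strong boundaries (parameter values and the toy image below are illustrative):

    ```python
    import numpy as np

    def anisotropic_diffusion(img, niter=20, kappa=30.0, lam=0.2):
        """Perona-Malik diffusion: each update moves a pixel toward its
        four neighbours, weighted by g = exp(-(|grad|/kappa)^2) so that
        strong edges conduct almost no flux."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)
        for _ in range(niter):
            # nearest-neighbour differences in the four directions (zero flux at borders)
            dn = np.roll(u, 1, axis=0) - u; dn[0, :] = 0
            ds = np.roll(u, -1, axis=0) - u; ds[-1, :] = 0
            de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
            dw = np.roll(u, 1, axis=1) - u; dw[:, 0] = 0
            u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    # noisy dark region next to a bright region, separated by a strong edge
    noisy = np.array([[10, 12, 11, 200, 202],
                      [11, 13, 10, 201, 199],
                      [12, 10, 12, 198, 200]], dtype=float)
    smooth = anisotropic_diffusion(noisy)
    ```

    After diffusion the within-region variation shrinks while the dark/bright boundary remains sharp, which is why it helps the subsequent adaptive segmentation.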

  1. Fizeau interferometric cophasing of segmented mirrors: experimental validation.

    PubMed

    Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter

    2014-06-02

    We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9% that served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would co-phase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.

  2. Recent Progress in Adjustable X-ray Optics for Astronomy

    NASA Technical Reports Server (NTRS)

    Reid, Paul B.; Allured, Ryan; Cotroneo, Vincenzo; McMuldroch, Stuart; Marquez, Vanessa; Schwartz, Daniel A.; Vikhlinin, Alexey; ODell, Stephen L.; Ramsey, Brian; Trolier-McKinstry, Susan; hide

    2014-01-01

    Two adjustable X-ray optics approaches are being developed for thin grazing incidence optics for astronomy. The first approach employs thin film piezoelectric material sputter deposited as a continuous layer on the back of thin, lightweight Wolter-I mirror segments. The piezoelectric material is used to correct mirror figure errors from fabrication, mounting/alignment, and any ground to orbit changes. The goal of this technology is to produce Wolter mirror segment pairs corrected to 0.5 arc sec image resolution. With the combination of high angular resolution and light weight, this mirror technology is suitable for the Square Meter Arc Second Resolution Telescope for X-rays (SMART-X) mission concept. The second approach makes use of electrostrictive adjusters and full shell nickel/cobalt electroplated replication mirrors. An array of radial adjusters is used to deform the full shells to correct the lowest order axial and azimuthal errors, improving imaging performance from the 10 - 15 arc sec level to 5 arc sec. We report on recent developments in both technologies. In particular, we discuss the use of in situ strain gauges on the thin piezo film mirrors for use as feedback on piezoelectric adjuster functionality, including their use for on-orbit figure correction. We also report on the first tests of full shell nickel/cobalt mirror correction with radial adjusters.

  3. Pseudogap-generated a coexistence of Fermi arcs and Fermi pockets in cuprate superconductors

    NASA Astrophysics Data System (ADS)

    Zhao, Huaisong; Gao, Deheng; Feng, Shiping

    2017-03-01

    One of the most intriguing puzzles is why Fermi arcs and Fermi pockets coexist in the pseudogap phase of cuprate superconductors. Based on the t - J model in the fermion-spin representation, the coexistence of the Fermi arcs and Fermi pockets in cuprate superconductors is studied by taking into account the pseudogap effect. It is shown that the pseudogap induces an energy band splitting, so that the poles of the electron Green's function at zero energy form two contours in momentum space; however, the electron spectral weight on these two contours around the antinodal region is gapped out by the pseudogap, leaving the low-energy electron spectral weight located only at the disconnected segments around the nodal region. In particular, the tips of these disconnected segments converge on the hot spots to form closed Fermi pockets, generating a coexistence of the Fermi arcs and Fermi pockets. Moreover, the single-particle coherent weight is directly related to the pseudogap and grows linearly with doping. The calculated overall dispersion of the electron excitations is in qualitative agreement with the experimental data. The theory also predicts that the pseudogap-induced peak-dip-hump structure in the electron spectrum is absent along the hot-spot directions.

  4. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation techniques on large scale clinical data sets is crucial for routine clinical use of CT based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of image data on the accuracy measures was also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other, respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale evaluation of segmentation techniques for other clinical applications.
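
    The reproducibility measure used above, the coefficient of variation of repeated volume measurements, is simply the sample standard deviation over the mean; a small sketch with hypothetical volumes:

    ```python
    import statistics

    def coefficient_of_variation(volumes):
        """CV (%) of repeated volume measurements: sample standard
        deviation divided by the mean."""
        return 100.0 * statistics.stdev(volumes) / statistics.mean(volumes)

    # ten hypothetical repeated segmentations of one lesion, in mm^3
    runs = [102.0, 98.0, 105.0, 99.0, 101.0, 97.0, 103.0, 100.0, 104.0, 96.0]
    cv = coefficient_of_variation(runs)  # about 3% for this toy series
    ```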

  5. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  6. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    PubMed Central

    Brodić, Darko; Milivojević, Dragan R.; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms. PMID:22399932

  7. Automatic layer segmentation of H&E microscopic images of mice skin

    NASA Astrophysics Data System (ADS)

    Hussein, Saif; Selway, Joanne; Jassim, Sabah; Al-Assam, Hisham

    2016-05-01

    Mammalian skin is a complex organ composed of a variety of cells and tissue types. The automatic detection and quantification of changes in skin structures has a wide range of applications for biological research. To accurately segment and quantify nuclei, sebaceous glands, hair follicles, and other skin structures, a reliable segmentation of the different skin layers is needed. This paper presents an efficient segmentation algorithm to segment the three main layers of mouse skin, namely the epidermis, dermis, and subcutaneous layers. It also segments the epidermis layer into two sub-layers, the basal and cornified layers. The proposed algorithm applies an adaptive colour deconvolution technique to H&E-stained images to separate the different tissue structures; inter-modes and Otsu thresholding techniques are then effectively combined to segment the layers. A set of morphological and logical operations is then applied to each layer to remove unwanted objects. A dataset of 7000 H&E microscopic images of mutant and wild type mice was used to evaluate the effectiveness of the algorithm. Experimental results examined by domain experts have confirmed the viability of the proposed algorithms.
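
    One of the two thresholding techniques combined above, Otsu's method, can be sketched in a few lines: it picks the grey level that maximises the between-class variance of the histogram (the synthetic bimodal image is illustrative; the inter-modes method instead takes the midpoint between the two histogram peaks):

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's method on an 8-bit image: maximise between-class
        variance over all candidate thresholds."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        omega = np.cumsum(p)                    # class-0 probability
        mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
        mu_t = mu[-1]                           # global mean
        with np.errstate(divide='ignore', invalid='ignore'):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
        return int(np.nanargmax(sigma_b))

    # synthetic bimodal image: half the pixels at 50, half at 180
    img = np.concatenate([np.full(500, 50), np.full(500, 180)]).astype(np.uint8)
    t = otsu_threshold(img)  # lands between the two modes
    ```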

  8. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polan, D; Brady, S; Kaufman, R

    2016-06-15

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient's (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manually and auto-segmented images. Results: The optimized autosegmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number, and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients.
    Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment, and able to segment seven material classes over a range of body habitus and CT protocol parameters with an average DSC of 0.86 ± 0.04 (range: 0.80–0.99).
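
    The accuracy metric used above, the Dice similarity coefficient, is 2|A∩B|/(|A|+|B|) for binary masks A and B; a minimal sketch with toy masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    manual = np.array([[0, 1, 1],
                       [0, 1, 1],
                       [0, 0, 0]])
    auto = np.array([[0, 1, 1],
                     [0, 1, 0],
                     [0, 0, 0]])
    d = dice(manual, auto)  # 2*3 / (4+3) = 6/7
    ```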

  9. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE

    PubMed Central

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379

  10. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE.

    PubMed

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation decreases. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE.
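
    The Maximum A Posteriori extension described above can be illustrated with a toy binary version of the EM updates. This is only a hedged sketch under simplifying assumptions (binary labels, independent voxels, a shared Beta prior), not the authors' implementation: the beta-prior pseudo-counts regularize each rater's sensitivity and specificity in the M-step, which is what keeps the parameters away from degenerate local optima.

```python
import numpy as np

def map_staple(D, prior=0.5, alpha=9.0, beta=3.0, n_iter=50):
    """Binary MAP-STAPLE sketch. D is a (raters, voxels) array of 0/1 labels.
    A Beta(alpha, beta) prior regularizes sensitivity p and specificity q."""
    R, N = D.shape
    p = np.full(R, 0.9)          # sensitivity per rater, P(D=1 | truth=1)
    q = np.full(R, 0.9)          # specificity per rater, P(D=0 | truth=0)
    W = np.full(N, prior)
    for _ in range(n_iter):
        # E-step: posterior probability W that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: MAP update -- beta-prior pseudo-counts added to the counts
        p = ((D * W).sum(axis=1) + alpha - 1) / (W.sum() + alpha + beta - 2)
        q = (((1 - D) * (1 - W)).sum(axis=1) + alpha - 1) / ((1 - W).sum() + alpha + beta - 2)
    return W, p, q
```

    With unanimous raters the consensus W goes to the agreed labels while the prior keeps p and q in a plausible range.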

  11. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
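
    The Ridler method the study highlights is the classical iterative-intermeans (Ridler-Calvard) clustering threshold. A minimal sketch, assuming a flat list of intensity samples rather than the study's PET pipeline:

```python
import numpy as np

def ridler_threshold(values, tol=1e-6, max_iter=100):
    """Ridler-Calvard iterative intermeans threshold: start at the global
    mean and iterate t = (mean below t + mean above t) / 2 to a fixed point."""
    v = np.asarray(values, dtype=float)
    t = v.mean()
    for _ in range(max_iter):
        lo, hi = v[v <= t], v[v > t]
        if len(lo) == 0 or len(hi) == 0:
            break                          # degenerate split, keep current t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

    On a bimodal sample (background near 10, uptake near 100) the fixed point lands midway between the two class means, independent of the class sizes that bias a plain global mean.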

  12. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  13. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  14. Design, Construction, and Testing of Lightweight X-ray Mirror Modules

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Biskach, Michael P.; Chan, Kai-Wing; Espina, Rebecca A.; Hohl, Bruce R.; Matson, Elizabeth A.; Saha, Timo C.; Zhang, William W.

    2013-01-01

    Lightweight and high resolution optics are needed for future space-based X-ray telescopes to achieve advances in high-energy astrophysics. The Next Generation X-ray Optics (NGXO) team at NASA GSFC is nearing mission readiness for a 10 arc-second Half Power Diameter (HPD) slumped glass mirror technology while laying the groundwork for a future 1-2 arc-second technology based on polished silicon mirrors. Technology Development Modules (TDMs) have been designed, fabricated, integrated with mirror segments, and extensively tested to demonstrate technology readiness. Tests include X-ray performance, thermal vacuum, acoustic load, and random vibration. The thermal vacuum and acoustic load environments have proven relatively benign, while the random vibration environment has proven challenging due to large input amplification at frequencies above 500 Hz. Epoxy selection, surface preparation, and larger bond area have increased bond strength while vibration isolation has decreased vibration amplification, allowing for space launch requirements to be met in the near term. The next generation of TDMs, which demonstrates a lightweight structure supporting more mirror segments, is currently being fabricated. Analysis predicts superior performance characteristics due to the use of E-60 Beryllium-Oxide Metal Matrix Composite material, with only a modest cost increase. These TDMs will be larger, lighter, stiffer, and stronger than the current generation. Preliminary steps are being taken to enable mounting and testing of 1-2 arc-second mirror segments expected to be available in the future. A Vertical X-ray Test Facility (VXTF) will minimize module gravity distortion and allow for less constrained mirror mounts, such as fully kinematic mounts. Permanent kinematic mounting into a modified TDM has been demonstrated to achieve 2 arc-second level distortion free alignment.

  15. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose: Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods: Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results: Fully automated segmentation performed well in mice and showed coefficients of variation (CV) of below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions: Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigmentation epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance: The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634

  16. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.

  17. Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT

    NASA Astrophysics Data System (ADS)

    Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.

    2012-02-01

    For multi-atlas-based segmentation approaches, a segmentation fusion scheme that considers local performance measures may be more accurate than a method that uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55, and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
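
    Majority voting, the simplest baseline in the comparison above, can be sketched in a few lines (integer label maps of equal shape assumed; this is the baseline, not the proposed local SIMPLE method itself):

```python
import numpy as np

def majority_vote(label_maps):
    """Majority-voting label fusion: each voxel takes the label that the
    largest number of registered atlases assigned to it."""
    stack = np.stack(label_maps)          # (n_atlases, ...) integer labels
    n_labels = stack.max() + 1
    # count votes per label along the atlas axis, then take the argmax
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)
```

    Performance-weighted schemes such as SIMPLE and STAPLE replace the uniform vote with per-atlas (or, in the proposed method, per-region) weights.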

  18. Three dimensional (3D) microstructure-based finite element modeling of Al-SiC nanolaminates using focused ion beam (FIB) tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Carl R.

    Al-SiC nanolaminate composites show promise as high performance coating materials due to their combination of strength and toughness. Although a significant amount of modeling effort has been focused on materials with an idealized flat nanostructure, experimentally these materials exhibit complex undulating layer geometries. This work utilizes FIB tomography to characterize this nanostructure in 3D and finite element modeling to determine the effect that this complex structure has on the mechanical behavior of these materials. A sufficiently large volume was characterized such that a 1 × 2 μm micropillar could be generated from the dataset and compared directly to experimental results. The mechanical response from this nanostructure was then compared to pillar models using simplified structures with perfectly flat layers, layers with sinusoidal waviness, and layers with arc segment waviness. The arc segment based layer geometry showed the best agreement with the experimentally determined structure, indicating it would be the most appropriate geometry for future modeling efforts. Highlights: • FIB tomography was used to determine the structure of an Al-SiC nanolaminate in 3D. • FEM was used to compare the deformation of the nanostructure to experimental results. • Idealized structures from literature were compared to the FIB determined structure. • Arc segment based structures approximated the FIB determined structure most closely.

  19. A Microcomputer-Based Network Optimization Package.

    DTIC Science & Technology

    1981-09-01

    from either cases a or c as Truncated-Newton directions. It can be shown [Ref. 27] that the TNCG algorithm is globally convergent and capable of...nonzero values of LGB indicate bounds at which arcs are fixed or reversed. Fixed arcs have negative T( ) while free arcs have positive T( ) values...Solution of Generalized Network Problems," Working Paper, Department of Finance and Business Economics, School of Business, University of Southern

  20. Parallel fuzzy connected image segmentation on GPU

    PubMed Central

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA’s Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on GPU. A dramatic improvement in speed for both tasks is achieved as a result. Results: Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. Conclusions: The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on the NVIDIA GPUs, which are far more cost- and speed-effective than both cluster of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set. PMID:21859037

  1. Parallel fuzzy connected image segmentation on GPU.

    PubMed

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on the NVIDIA GPUs, which are far more cost- and speed-effective than both cluster of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
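
    The second computational task, computing the fuzzy connectedness relations, is a shortest-path-like propagation: a path's strength is its weakest affinity, and a node's FC value is the strength of its best path from the seed. A serial CPU sketch of this max-min Dijkstra variant (on an assumed sparse affinity dictionary, not the paper's CUDA kernels):

```python
import heapq

def fuzzy_connectedness(affinity, n, seed):
    """FC values for n nodes given a dict mapping (i, j) pairs to an
    affinity in [0, 1] (assumed symmetric). Seed has connectedness 1."""
    fc = [0.0] * n
    fc[seed] = 1.0
    heap = [(-1.0, seed)]                 # max-heap via negated strengths
    while heap:
        strength, i = heapq.heappop(heap)
        strength = -strength
        if strength < fc[i]:
            continue                      # stale entry
        for j in range(n):
            a = affinity.get((i, j), affinity.get((j, i), 0.0))
            s = min(strength, a)          # path strength = weakest link
            if s > fc[j]:
                fc[j] = s
                heapq.heappush(heap, (-s, j))
    return fc
```

    A two-hop path through strong affinities can beat a weak direct link, which is the behavior the GPU version computes in parallel over voxel neighborhoods.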

  2. Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks.

    PubMed

    Demirhan, Ayşe; Toru, Mustafa; Guler, Inan

    2015-07-01

    Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues, because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using a self-organizing map (SOM) that is trained with an unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. The input feature vector is constructed from features obtained from stationary wavelet transform (SWT) coefficients. The results showed that the average Dice similarity indices are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.
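
    The unsupervised SOM stage of such a pipeline can be sketched compactly. This is a generic 1-D SOM on toy 2-D feature vectors, with assumed hyperparameters; the paper's wavelet features, SOM clustering, and LVQ fine-tuning are omitted:

```python
import numpy as np

def train_som(data, n_units=4, n_epochs=20, lr=0.5, sigma=1.0, seed=0):
    """Minimal 1-D self-organizing map: each sample pulls the winning unit
    and, more weakly, its grid neighbors toward the sample."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, data.shape[1]))
    for _ in range(n_epochs):
        for x in data:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.arange(n_units) - bmu                  # grid distance to BMU
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighborhood kernel
            W += lr * h[:, None] * (x - W)                # pull units toward x
    return W

def assign(W, x):
    """Label a feature vector by its nearest SOM unit."""
    return int(np.argmin(((W - np.asarray(x)) ** 2).sum(axis=1)))
```

    After training on two well-separated clusters, samples from different clusters map to different units, which is the property the tissue-labeling step relies on.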

  3. Automatic segmentation of psoriasis lesions

    NASA Astrophysics Data System (ADS)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for the assessment of lesions. Current algorithms can handle only single erythema or deal only with scaling segmentation; in practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. First, polarized light is applied, exploiting the skin's Tyndall effect during imaging to eliminate reflections, and the Lab color space is used to fit human perception. Second, a sliding window and its sub-windows are used to extract texture and color features; in this step, an image roughness feature is defined so that scaling can easily be separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions or skin types. On the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
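
    The abstract does not specify the paper's roughness definition, so the following is only an illustrative analogue: a common texture feature of this kind is the local standard deviation over a small window, which is near zero on smooth skin and large on scaling.

```python
import numpy as np

def roughness(img, w=3):
    """Illustrative roughness feature: per-pixel standard deviation of a
    w-by-w intensity neighborhood (clipped at the image borders)."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            patch = img[max(0, i - w // 2):i + w // 2 + 1,
                        max(0, j - w // 2):j + w // 2 + 1]
            out[i, j] = patch.std()
    return out
```

    Such a map would be one column of the per-window feature vector fed to the random forest, alongside the Lab color features.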

  4. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula; λ adjusts the weight of the pixel's local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.
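
    The role of such a λ can be illustrated with a toy 1-D FCM variant. This is a hedged sketch, not the paper's formulation: here the distance of a pixel to a center mixes its own intensity with its local mean, weighted by an assumed parameter `lam`, and the membership and center updates follow from that modified objective.

```python
import numpy as np

def fcm_spatial(x, n_clusters=2, lam=0.5, m=2.0, n_iter=50):
    """FCM on a 1-D intensity array x with a neighborhood term:
    d2(i,k) = (x_i - c_k)^2 + lam * (xbar_i - c_k)^2, where xbar is a
    local (3-pixel) mean supplying the spatial information."""
    x = np.asarray(x, dtype=float)
    xbar = np.convolve(x, np.ones(3) / 3, mode="same")
    c = np.quantile(x, np.linspace(0.0, 1.0, n_clusters))  # spread-out init
    for _ in range(n_iter):
        d2 = (x[None, :] - c[:, None]) ** 2 + lam * (xbar[None, :] - c[:, None]) ** 2
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-1.0 / (m - 1))
        u = inv / inv.sum(axis=0)                          # fuzzy memberships
        # center update minimizing the modified objective for fixed u
        c = (u ** m * (x + lam * xbar)).sum(axis=1) / ((u ** m).sum(axis=1) * (1 + lam))
    return u.argmax(axis=0), c
```

    With lam = 0 this reduces to plain FCM; a positive lam lets a pixel's neighborhood pull it toward the locally dominant cluster, which is what suppresses isolated noise labels.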

  5. Automatic segmentation of vessels in in-vivo ultrasound scans

    NASA Astrophysics Data System (ADS)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen

    2017-03-01

    Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of lumen diameter. This paper presents a fully automatic segmentation algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and performs the vessel segmentation using the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically and applied to a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineation performed by two experienced users. The results showed a sensitivity and specificity of 90.41+/-11.2% and 97.93+/-5.7% (mean+/-standard deviation), respectively. The overlap between the automatic and manual segmentations, measured by the Dice similarity coefficient, was 91.25+/-11.6%. The empirical results demonstrated the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.

  6. Automatic cortical segmentation in the developing brain.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V

    2007-01-01

    The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).

  7. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.

    PubMed

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2012-05-01

    Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task that engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. To this end, various techniques have been applied; however, Markov Random Field (MRF) based algorithms have produced more reasonable results in noisy images than other methods. MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution, with a heavy computation burden. For this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, comprising an ant colony optimization (ACO) and a Gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find a better path using neighborhood information, so this interaction causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy.
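
    The energy an MRF segmenter minimizes typically combines a data term with a Potts smoothness term over neighboring pixels. A sketch of that energy, minimized here with simple iterated conditional modes (ICM) rather than the paper's SA or ACO/Gossiping hybrid; class centers and the weight `beta` are assumed inputs:

```python
import numpy as np

def icm_segment(img, centers, beta=1.0, n_iter=10):
    """Greedy ICM on E = sum_i (img_i - c_{l_i})^2 + beta * sum_{i~j} [l_i != l_j]
    with 4-neighborhoods; each pixel takes the locally energy-minimal label."""
    img = np.asarray(img, dtype=float)
    labels = np.abs(img[..., None] - centers).argmin(-1)  # data-term-only init
    H, W = img.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(len(centers)):
                    e = (img[i, j] - centers[k]) ** 2          # data term
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            e += beta * (labels[ni, nj] != k)  # Potts penalty
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```

    ICM, like SA at low temperature, can get trapped in local optima; the stochastic search methods discussed above trade per-step cost for better escapes. With a strong enough beta, an isolated noisy pixel is relabeled to match its neighbors: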

  8. Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel

    NASA Astrophysics Data System (ADS)

    Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.

    2018-03-01

    This research study attempts to create an optimized parametric window by employing the Taguchi algorithm for Plasma Arc Welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed, and pilot arc length. The experiments vary these parameters, over ranges of 60-70 A welding current, 250-300 mm/min welding speed, and 1-2 mm pilot arc length, and record the resulting depth of penetration and bead width. Design of experiments is used for the experimental trials. A back-propagation neural network, a genetic algorithm, and Taguchi techniques are used to predict the bead width and depth of penetration, and the predictions are in good agreement with the experimentally achieved results. Additionally, micro-structural characterizations are carried out to examine the weld quality. The extrapolation of these optimized parametric values yields enhanced weld strength with cost and time reduction.

  9. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staffmembers viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  10. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staffmembers viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. 
Therefore, if video compression is to be used to overcome transmission bandwidth constraints, effective management of the compression ratio and parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  11. Consolidating NASA's Arc Jets

    NASA Technical Reports Server (NTRS)

    Balboni, John A.; Gokcen, Tahir; Hui, Frank C. L.; Graube, Peter; Morrissey, Patricia; Lewis, Ronald

    2015-01-01

    The paper describes the consolidation of NASA's high-powered arc-jet testing at a single location. The existing plasma arc-jet wind tunnels located at the Johnson Space Center were relocated to Ames Research Center while maintaining NASA's technical capability to ground-test thermal protection system materials under simulated atmospheric entry convective heating. The testing conditions at JSC were reproduced and successfully demonstrated at ARC through close collaboration between the two centers. New equipment was installed at Ames to provide test gases of pure nitrogen mixed with pure oxygen, and, in the future, nitrogen-carbon dioxide mixtures. A new control system was custom designed, installed and tested. Tests demonstrated that the capability of the 10 MW constricted-segmented arc heater at Ames meets the requirements of its major customer, NASA's Orion program. Solutions from an advanced computational fluid dynamics code were used to help characterize the properties of the plasma stream and the surface environment on the calorimeters in the supersonic flow stream produced by the arc heater.

  12. A bounding-based solution approach for the continuous arc covering problem

    NASA Astrophysics Data System (ADS)

    Wei, Ran; Murray, Alan T.; Batta, Rajan

    2014-04-01

    Road segments, telecommunication wiring, water and sewer pipelines, canals and the like are important features of the urban environment. They are often conceived of and represented as network-based arcs. As a result of the usefulness and significance of arc-based features, there is a need to site facilities along arcs to serve demand. Examples of such facilities include surveillance equipment, cellular towers, refueling centers and emergency response stations, with the intent of being economically efficient as well as providing good service along the arcs. While this amounts to a continuous location problem by nature, various discretizations are generally relied upon to solve such problems. The result is potential for representation errors that negatively impact analysis and decision making. This paper develops a solution approach for the continuous arc covering problem that theoretically eliminates representation errors. The developed approach is applied to optimally place acoustic sensors and cellular base stations along a road network. The results demonstrate the effectiveness of this approach for ameliorating any error and uncertainty in the modeling process.

  13. MAGENCO: A map generalization controller for Arc/Info

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.; Cashwell, J.W.

    The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line "character" while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, and then apply various tolerances to the sample. The results are shown in multiple display windows arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. The authors analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.
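
    The Douglas-Peucker algorithm that GENERALIZE implements can be sketched compactly: find the interior point farthest from the chord joining the endpoints; if it deviates by more than the tolerance, keep it and recurse on both halves, otherwise drop all interior points. A minimal Python sketch (illustrative only, not the Arc/Info implementation):

```python
import math

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping vertices that deviate from the
    chord between the endpoints by more than `tolerance`."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point with maximum perpendicular distance to the chord.
    max_d, index = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > max_d:
            max_d, index = d, i
    if max_d <= tolerance:
        return [points[0], points[-1]]  # whole span is within tolerance
    # Keep the farthest point and recurse on the two halves.
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right
```

    Larger tolerances discard more points; the recursion retains the high-deviation vertices that give a line its "character".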

  14. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, handling large amounts of noise, and producing better results on data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the silhouette method.

  15. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l₁-norm or nuclear-norm constraints. However, the results obtained by convex optimization are usually suboptimal relative to solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating lp-norm and Schatten p-norm constraints. Our resulting affinity graph can better capture the local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is shown to be more effective and robust compared to five existing algorithms.

  16. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    NASA Astrophysics Data System (ADS)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all preprocessing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.

  17. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    PubMed

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm(3) (range, 6.6-44.3 cm(3)), while the PET segmented GTV was 10.2 cm(3) (range, 2.8-45.1 cm(3)). The median physician-defined GTV was 22.1 cm(3) (range, 4.2-38.4 cm(3)). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value = 0.0037).
The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency in target definition for radiotherapy.

  18. Control of arc length during gas metal arc welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madigan, R.B.; Quinn, T.P.

    1994-12-31

    An arc-length control system has been developed for gas metal arc welding (GMAW) under spray transfer welding conditions. The ability to monitor and control arc length during arc welding allows consistent weld characteristics to be maintained and therefore improves weld quality. Arc-length control has previously been implemented only for gas tungsten arc welding (GTAW), where an automatic voltage control (AVC) unit adjusts torch-to-work distance. The system developed here complements the voltage- and current-sensing techniques commonly used for control of GMAW. The system consists of an arc light intensity sensor (photodiode), a Hall-effect current sensor, a personal computer, and software implementing data interpretation and control algorithms. Arc length was measured using both arc light and arc current signals. Welding current was adjusted to maintain constant arc length. A proportional-integral-derivative (PID) controller was used. Gains were automatically selected based on the desired welding conditions. In performance evaluation welds, arc length varied from 2.5 to 6.5 mm while welding up a sloped workpiece (ramp in CTWD) without the control. With the control, arc length was maintained within 1 mm of the desired 5 mm.
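
    The PID loop described above can be sketched as follows. The gains, sample time, and toy plant here are illustrative placeholders, not the welding system's actual values:

```python
class PID:
    """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        """Return the control output for one sample of the measurement."""
        error = self.setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy closed loop: drive a simulated arc length from 2.5 mm to the 5 mm setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.05, setpoint=5.0)
arc_length = 2.5
for _ in range(2000):
    u = pid.update(arc_length, dt=0.01)
    arc_length += 0.01 * u  # illustrative plant: arc length integrates the output
```

    In the paper's setup the measurement would come from the arc-light and current sensors and the output would adjust welding current; here any scalar process serves to show the loop structure.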

  19. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
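
    FALVQ updates all prototypes of the competitive network using fuzzy membership weights. As a simpler point of reference (not the FALVQ algorithms evaluated in the paper), a crisp winner-take-all vector quantizer looks like this:

```python
import numpy as np

def train_vq(X, k=3, lr=0.2, epochs=30, seed=1):
    """Crisp winner-take-all vector quantization: only the nearest
    prototype moves toward each presented sample. FALVQ instead
    updates every prototype, weighted by fuzzy membership."""
    rng = np.random.default_rng(seed)
    # Start from k distinct samples (illustrative initialization).
    prototypes = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            prototypes[winner] += lr * (x - prototypes[winner])
    return prototypes
```

    For MR segmentation, each voxel's relaxation-parameter values form the feature vector, and the trained prototypes quantize tissue classes.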

  20. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces.
Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  1. Customizing Extensor Reconstruction in Vascularized Toe Joint Transfers to Finger Proximal Interphalangeal Joints: A Strategic Approach for Correcting Extensor Lag.

    PubMed

    Loh, Charles Yuen Yung; Hsu, Chung-Chen; Lin, Cheng-Hung; Chen, Shih-Heng; Lien, Shwu-Huei; Lin, Chih-Hung; Wei, Fu-Chan; Lin, Yu-Te

    2017-04-01

    Vascularized toe proximal interphalangeal joint transfer allows the restoration of damaged joints. However, extensor lag and poor arc of motion have been reported. The authors present their outcomes of treatment according to a novel reconstructive algorithm that addresses extensor lag and allows for consistent results postoperatively. Vascularized toe joint transfers were performed in a consecutive series of 26 digits in 25 patients. The average age was 30.5 years, with 14 right and 12 left hands. Reconstructed digits included eight index, 10 middle, and eight ring fingers. Simultaneous extensor reconstructions were performed and eight were centralization of lateral bands, five were direct extensor digitorum longus-to-extensor digitorum communis repairs, and 13 were central slip reconstructions. The average length of follow-up was 16.7 months. The average extension lag was 17.9 degrees. The arc of motion was 57.7 degrees (81.7 percent functional use of pretransfer toe proximal interphalangeal joint arc of motion). There was no significant difference in the reconstructed proximal interphalangeal joint arc of motion for the handedness (p = 0.23), recipient digits (p = 0.37), or surgical experience in vascularized toe joint transfer (p = 0.25). The outcomes of different techniques of extensor mechanism reconstruction were similar in terms of extensor lag, arc of motion, and reconstructed finger arc of motion compared with the pretransfer toe proximal interphalangeal joint arc of motion. With this treatment algorithm, consistent outcomes can be produced with minimal extensor lag and maximum use of potential toe proximal interphalangeal joint arc of motion. Therapeutic, IV.

  2. MRI brain tumor segmentation based on improved fuzzy c-means method

    NASA Astrophysics Data System (ADS)

    Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo

    2009-10-01

    This paper focuses on image segmentation, one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. First, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use the tissues' gradient information and the intensity inhomogeneities of regions to improve the quality of segmentation. The sum of the mean variance within a region and the reciprocal of the mean gradient along the region's edge is chosen as the objective function; the minimum of this sum gives the optimal result. The results show that the clustering segmentation algorithm is effective.
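
    The core fuzzy c-means step the paper builds on alternates between a membership update and a center update. A minimal sketch without the spatial and gradient terms the authors add (the farthest-point initialization and parameters are illustrative choices):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100):
    """Plain fuzzy c-means on X of shape (n_samples, n_features).
    Returns the membership matrix U (n, c) and the cluster centers."""
    # Farthest-point initialization keeps this sketch deterministic.
    centers = [X[0].astype(float)]
    while len(centers) < c:
        d = np.min([np.linalg.norm(X - ctr, axis=1) for ctr in centers], axis=0)
        centers.append(X[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(n_iter):
        # Distances of every sample to every center, shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)   # memberships sum to 1 per sample
        W = U ** m                          # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers
```

    For image segmentation, X would hold per-pixel intensities (or feature vectors), and thresholding U separates the region of interest from the background.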

  3. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    PubMed

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
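
    The Dice score used above to compare raters and algorithms is twice the overlap divided by the total size of the two masks. A minimal implementation for binary segmentation masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

    Inter-rater Dice scores of 74%-85%, as reported here, indicate that even expert delineations of tumor sub-regions overlap imperfectly.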

  4. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    PubMed Central

    Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2016-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501

  5. Automated and real-time segmentation of suspicious breast masses using convolutional neural network

    PubMed Central

    Gregory, Adriana; Denis, Max; Meixner, Duane D.; Bayat, Mahdi; Whaley, Dana H.; Fatemi, Mostafa; Alizad, Azra

    2018-01-01

    In this work, a computer-aided detection tool was developed to segment breast masses from clinical ultrasound (US) scans. The underlying Multi U-net algorithm is based on convolutional neural networks. Under the Mayo Clinic Institutional Review Board protocol, a prospective study of the automatic segmentation of suspicious breast masses was performed. The cohort consisted of 258 female patients who were clinically identified with suspicious breast masses and underwent clinical US scanning and breast biopsy. The computer-aided detection tool effectively segmented the breast masses, achieving a mean Dice coefficient of 0.82, a true positive fraction (TPF) of 0.84, and a false positive fraction (FPF) of 0.01. By avoiding positioning of an initial seed, the algorithm is able to segment images in real time (13–55 ms per image) and has potential clinical applications. The algorithm is on par with a conventional seeded algorithm, which had a mean Dice coefficient of 0.84, and performs significantly better (P < 0.0001) than the original U-net algorithm. PMID:29768415

  6. Lung partitioning for x-ray CAD applications

    NASA Astrophysics Data System (ADS)

    Annangi, Pavan; Raja, Anand

    2011-03-01

    Partitioning the inside region of the lung into homogeneous regions is a crucial step in any computer-aided diagnosis (CAD) application based on chest X-ray. The ribs, air pockets, and clavicle occupy major space inside the lung as seen in the chest X-ray PA image, so segmenting the ribs and clavicle to partition the lung into homogeneous regions is essential for better classification of abnormalities. In this paper we present two separate, completely automated algorithms to segment the ribs and the clavicle bone. The posterior ribs are segmented based on phase congruency features, and the clavicle is segmented using mean curvature features followed by a Radon transform. Both algorithms work on the premise that each of these anatomical structures presents inside the left and right lung within a specific orientation range to which it is confined. The search space for both algorithms is limited to the region inside the lung, which is obtained by an automated lung segmentation algorithm previously developed in our group. Both algorithms were tested on 100 images from normal subjects and patients affected by pneumoconiosis.

  7. Combinatorial mutagenesis of the voltage-sensing domain enables the optical resolution of action potentials firing at 60 Hz by a genetically encoded fluorescent sensor of membrane potential.

    PubMed

    Piao, Hong Hua; Rajakumar, Dhanarajan; Kang, Bok Eum; Kim, Eun Ha; Baker, Bradley J

    2015-01-07

    ArcLight is a genetically encoded fluorescent voltage sensor using the voltage-sensing domain of the voltage-sensing phosphatase from Ciona intestinalis that gives a large but slow-responding optical signal in response to changes in membrane potential (Jin et al., 2012). Fluorescent voltage sensors using the voltage-sensing domain from other species give faster yet weaker optical signals (Baker et al., 2012; Han et al., 2013). Sequence alignment of voltage-sensing phosphatases from different species revealed conserved polar and charged residues at 7 aa intervals in the S1-S3 transmembrane segments of the voltage-sensing domain, suggesting potential coiled-coil interactions. The contribution of these residues to the voltage-induced optical signal was tested using a cassette mutagenesis screen, flanking each transmembrane segment with unique restriction sites to allow the testing of individual mutations in each transmembrane segment as well as combinations in all four transmembrane segments. Addition of a counter charge in S2 improved the kinetics of the optical response. A double mutation in the S4 domain dramatically reduced the slow component of the optical signal seen in ArcLight. Combining that double S4 mutant with the mutation in the S2 domain yielded a probe with kinetics <10 ms. Optimization of the linker sequence between S4 and the fluorescent protein resulted in a new ArcLight-derived probe, Bongwoori, capable of resolving action potentials in a hippocampal neuron firing at 60 Hz. Additional manipulation of the voltage-sensing domain could potentially lead to fluorescent sensors capable of optically resolving neuronal inhibition and subthreshold synaptic activity. Copyright © 2015 the authors.

  8. Pulmonary Lobe Segmentation with Probabilistic Segmentation of the Fissures and a Groupwise Fissure Prior

    PubMed Central

    Bragman, Felix J.S.; McClelland, Jamie R.; Jacob, Joseph; Hurst, John R.; Hawkes, David J.

    2017-01-01

    A fully automated, unsupervised lobe segmentation algorithm is presented, based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation divides the lung into candidate fissure voxels and the surrounding parenchyma. This is combined with anatomical information and a groupwise fissure prior, driving non-parametric surface fitting to obtain the final segmentation. The performance of our fissure segmentation was validated on 30 patients from the COPDGene cohort, achieving a high median F1-score of 0.90 and showing general insensitivity to filter parameters. We evaluated our lobe segmentation algorithm on the LOLA11 dataset, which contains 55 cases at varying levels of pathology, and achieved the highest score among the automated algorithms (0.884). Our method was further tested quantitatively and qualitatively on 80 patients from the COPDGene study at varying levels of functional impairment. Accurate segmentation of the lobes is shown at various degrees of fissure incompleteness for 96% of all cases. We also show the utility of including a groupwise prior in segmenting the lobes in regions of grossly incomplete fissures. PMID:28436850

  9. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of the flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free, open-source segmentation application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a reference object of known dimensions, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  10. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, E; Li, X; Moreau, M

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired with an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  11. Shape-driven 3D segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2006-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.

  12. Three-dimensional lung tumor segmentation from x-ray computed tomography using sparse field active models.

    PubMed

    Awad, Joseph; Owrangi, Amir; Villemaire, Lauren; O'Riordan, Elaine; Parraga, Grace; Fenster, Aaron

    2012-02-01

    Manual segmentation of lung tumors is observer dependent and time-consuming but an important component of radiology and radiation oncology workflow. The objective of this study was to generate an automated lung tumor measurement tool for segmentation of pulmonary metastatic tumors from x-ray computed tomography (CT) images to improve reproducibility and decrease the time required to segment tumor boundaries. The authors developed an automated lung tumor segmentation algorithm for volumetric image analysis of chest CT images using shape constrained Otsu multithresholding (SCOMT) and sparse field active surface (SFAS) algorithms. The observer was required to select the tumor center, and the SCOMT algorithm subsequently created an initial surface that was deformed using level set SFAS to minimize the total energy consisting of mean separation, edge, partial volume, rolling, distribution, background, shape, volume, smoothness, and curvature energies. The proposed segmentation algorithm was compared to manual segmentation, whereby 21 tumors were evaluated using one-dimensional (1D) response evaluation criteria in solid tumors (RECIST), two-dimensional (2D) World Health Organization (WHO), and 3D volume measurements. Linear regression goodness-of-fit measures (r² = 0.63, p < 0.0001; r² = 0.87, p < 0.0001; and r² = 0.96, p < 0.0001), and Pearson correlation coefficients (r = 0.79, p < 0.0001; r = 0.93, p < 0.0001; and r = 0.98, p < 0.0001) for 1D, 2D, and 3D measurements, respectively, showed significant correlations between manual and algorithm results. Intra-observer intraclass correlation coefficients (ICC) demonstrated high reproducibility for algorithm (0.989-0.995, 0.996-0.997, and 0.999-0.999) and manual measurements (0.975-0.993, 0.985-0.993, and 0.980-0.992) for 1D, 2D, and 3D measurements, respectively.
    The intra-observer coefficient of variation (CV%) was low for algorithm (3.09%-4.67%, 4.85%-5.84%, and 5.65%-5.88%) and manual observers (4.20%-6.61%, 8.14%-9.57%, and 14.57%-21.61%) for 1D, 2D, and 3D measurements, respectively. The authors developed an automated segmentation algorithm, requiring only that the operator select the tumor, to measure pulmonary metastatic tumors in 1D, 2D, and 3D. Algorithm and manual measurements were significantly correlated. Since the algorithm segmentation involves selection of a single seed point, it reduced intra-observer variability and decreased the time required to make the measurements.
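The SCOMT initialization above builds on Otsu thresholding. As a hedged illustration only (not the authors' shape-constrained, multilevel implementation), a single-level Otsu threshold can be computed from the image histogram by maximizing the between-class variance:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Single-level Otsu threshold: pick the bin center that maximizes
    between-class variance (the multilevel, shape-constrained variant
    used in SCOMT is omitted here)."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=bins)
    p = hist / hist.sum()                       # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                           # class-0 probability up to each bin
    mu = np.cumsum(p * centers)                 # cumulative class-0 mean mass
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0          # guard empty classes at the ends
    return centers[np.argmax(between)]
```

On a clearly bimodal intensity distribution the returned threshold falls between the two modes.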

  13. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
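The DP half of the DP-EM procedure can be illustrated with a minimal least-squares change-point recursion; this is a sketch only, since the paper couples the DP step with an EM-estimated mixture model and a model selection heuristic:

```python
import numpy as np

def dp_segment(y, k):
    """Split signal y into k contiguous segments minimizing within-segment
    sum of squared deviations, by dynamic programming (the DP step alone)."""
    y = np.asarray(y, float)
    n = len(y)
    csum = np.concatenate([[0.0], np.cumsum(y)])
    csum2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def sse(i, j):
        # SSE of fitting one mean to y[i:j], from prefix sums
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    D = np.full((k + 1, n + 1), np.inf)         # D[seg, j]: best cost of y[:j] in seg pieces
    back = np.zeros((k + 1, n + 1), int)
    D[0, 0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = D[seg - 1, i] + sse(i, j)
                if c < D[seg, j]:
                    D[seg, j], back[seg, j] = c, i
    # recover the k-1 interior breakpoints
    bps, j = [], n
    for seg in range(k, 0, -1):
        j = back[seg, j]
        bps.append(j)
    return [int(b) for b in sorted(bps)[1:]]    # drop the leading 0
```

A piecewise-constant signal with two jumps is recovered exactly by a 3-segment fit.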

  14. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    NASA Astrophysics Data System (ADS)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques like lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining the spectral clustering algorithm based on Nystrom (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate initial contours of the pulmonary parenchyma in the un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.

  15. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. These advantages allow the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicon implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented under the IDL environment. Breast and glandular tissue rendering, slicing and animation were displayed.
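The sliding-window adaptive thresholding step can be sketched as follows. The window size and offset are illustrative assumptions, not values from the paper; each pixel is compared with the mean of its local neighborhood:

```python
import numpy as np

def adaptive_threshold(img, win=15, offset=0.0):
    """Spatially adaptive threshold: a pixel is foreground when it exceeds
    the mean of its win x win window by more than `offset`."""
    pad = win // 2
    padded = np.pad(np.asarray(img, float), pad, mode='reflect')
    # all win x win windows, one per pixel of the original image
    view = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    local_mean = view.mean(axis=(2, 3))
    return img > local_mean + offset
```

Unlike a global threshold, this picks out small bright structures sitting on a smooth intensity gradient, since the local mean tracks the gradient.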

  16. Fuzzy C-Means Algorithm for Segmentation of Aerial Photography Data Obtained Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Akinin, M. V.; Akinina, N. V.; Klochkov, A. Y.; Nikiforov, M. B.; Sokolova, A. V.

    2015-05-01

    The report reviews the fuzzy c-means algorithm for image segmentation, evaluates the quality of its results using the Xie-Beni criterion, and presents experimental studies of the algorithm in the context of producing detailed two-dimensional maps with the use of unmanned aerial vehicles. The experimental results support the applicability of the algorithm to the problem of decoding images obtained by aerial photography. The considered algorithm can partition the original image into a large number of segments (clusters) in a relatively short time, which is achieved by modifying the original k-means algorithm to operate with fuzzy memberships.
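A minimal numpy sketch of fuzzy c-means together with the Xie-Beni validity index; the function names, initialization scheme, and constants are illustrative, not the report's implementation:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, init=None, seed=0):
    """Fuzzy c-means: returns soft memberships U (rows sum to 1) and centers."""
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init, float) if init is not None
               else X[rng.choice(len(X), c, replace=False)].astype(float))
    U = None
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))             # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted center update
    return U, centers

def xie_beni(X, U, centers, m=2.0):
    """Xie-Beni index: compactness over separation; lower means a better partition."""
    d2 = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
    compact = np.sum((U ** m) * d2)
    sep = min(np.sum((centers[i] - centers[j]) ** 2)
              for i in range(len(centers)) for j in range(len(centers)) if i != j)
    return compact / (len(X) * sep)
```

On two well-separated point clouds the memberships become nearly crisp and the Xie-Beni index is small.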

  17. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  18. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences, however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose the algorithm based on Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation results can be obtained by comparing with conventional multi-channel segmentation algorithms.

  19. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  20. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  1. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm automatically identifies these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (about 0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.

  2. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
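The paper's central finding, that neighborhood features matter more than the choice of classifier, can be illustrated with a deliberately simple sketch: two features per voxel (own intensity and the local window mean) fed to a plain gradient-descent logistic regression. The synthetic "lesion" image, the feature set, and the training constants are all illustrative assumptions:

```python
import numpy as np

def neighborhood_features(img, radius=1):
    """Per-pixel feature vector: own intensity plus mean of the surrounding window."""
    padded = np.pad(img, radius, mode='edge')
    h, w = img.shape
    win = np.lib.stride_tricks.sliding_window_view(padded, (2 * radius + 1,) * 2)
    return np.stack([img.ravel(), win.reshape(h * w, -1).mean(axis=1)], axis=1)

def train_logistic(X, y, lr=0.5, iters=500):
    """Plain gradient-descent logistic regression (a stand-in for any simple classifier)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)           # mean log-loss gradient step
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)
```

On a noisy image with one bright square, the simple classifier with these features separates lesion from background voxels almost perfectly.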

  3. Oligocene and Miocene arc volcanism in northeastern California: evidence for post-Eocene segmentation of the subducting Farallon plate

    USGS Publications Warehouse

    Colgan, J.P.; Egger, A.E.; John, D.A.; Cousens, B.; Fleck, R.J.; Henry, C.D.

    2011-01-01

    The Warner Range in northeastern California exposes a section of Tertiary rocks over 3 km thick, offering a unique opportunity to study the long-term history of Cascade arc volcanism in an area otherwise covered by younger volcanic rocks. The oldest locally sourced volcanic rocks in the Warner Range are Oligocene (28–24 Ma) and include a sequence of basalt and basaltic andesite lava flows overlain by hornblende and pyroxene andesite pyroclastic flows and minor lava flows. Both sequences vary in thickness (0–2 km) along strike and are inferred to be the erosional remnants of one or more large, partly overlapping composite volcanoes. No volcanic rocks were erupted in the Warner Range between ca. 24 and 16 Ma, although minor distally sourced silicic tuffs were deposited during this time. Arc volcanism resumed ca. 16 Ma with eruption of basalt and basaltic andesite lavas sourced from eruptive centers 5–10 km south of the relict Oligocene centers. Post–16 Ma arc volcanism continued until ca. 8 Ma, forming numerous eroded but well-preserved shield volcanoes to the south of the Warner Range. Oligocene to Late Miocene volcanic rocks in and around the Warner Range are calc-alkaline basalts to andesites (48%–61% SiO2) that display negative Ti, Nb, and Ta anomalies in trace element spider diagrams, consistent with an arc setting. Middle Miocene lavas in the Warner Range are distinctly different in age, composition, and eruptive style from the nearby Steens Basalt, with which they were previously correlated. Middle to Late Miocene shield volcanoes south of the Warner Range consist of homogeneous basaltic andesites (53%–57% SiO2) that are compositionally similar to Oligocene rocks in the Warner Range. They are distinctly different from younger (Late Miocene to Pliocene) high-Al, low-K olivine tholeiites, which are more mafic (46%–49% SiO2), did not build large edifices, and are thought to be related to backarc extension. 
The Warner Range is ∼100 km east of the axis of the modern arc in northeastern California, suggesting that the Cascade arc south of modern Mount Shasta migrated west during the Late Miocene and Pliocene, while the arc north of Mount Shasta remained in essentially the same position. We interpret these patterns as evidence for an Eocene to Miocene tear in the subducting slab, with a more steeply dipping plate segment to the north, and an initially more gently dipping segment to the south that gradually steepened from the Middle Miocene to the present.

  4. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient has not been described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER), taking cell sizes as weights; the MERs measure the segmentation of each single cell in the population. The TER is fully supported by pairwise comparisons of MERs using 106 manually segmented ground-truth cells of different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of the TER are computed based on the SE of the MER, which is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. When the CIs overlap, the SEs of the TERs and their correlation coefficient can be employed to conduct hypothesis testing to determine the statistical significance of the performance differences between CIS algorithms. A novel measure, the TER of CIS, is proposed, and its SEs and correlation coefficient are computed; CIS algorithms can thereby be evaluated and compared statistically by significance testing.
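The size-weighted aggregation and bootstrap-SE ideas can be sketched as follows; this is a hedged illustration of the weighting scheme, not the paper's exact estimator:

```python
import numpy as np

def total_error_rate(mers, sizes):
    """Size-weighted aggregate of per-cell misclassification error rates (MER):
    larger cells contribute proportionally more to the total error rate."""
    mers, sizes = np.asarray(mers, float), np.asarray(sizes, float)
    return np.sum(sizes * mers) / np.sum(sizes)

def bootstrap_se(mers, sizes, n_boot=2000, seed=0):
    """Bootstrap standard error of the TER, resampling cells with replacement."""
    rng = np.random.default_rng(seed)
    mers, sizes = np.asarray(mers, float), np.asarray(sizes, float)
    n = len(mers)
    stats = [total_error_rate(mers[idx], sizes[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return float(np.std(stats, ddof=1))
```

With identical per-cell error rates the resampled TER is constant and the SE collapses to zero; heterogeneous error rates give a positive SE.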

  5. Optical Coherence Tomography (OCT) Device Independent Intraretinal Layer Segmentation

    PubMed Central

    Ehnes, Alexander; Wenner, Yaroslava; Friedburg, Christoph; Preising, Markus N.; Bowl, Wadim; Sekundo, Walter; zu Bexten, Erdmuthe Meyer; Stieger, Knut; Lorenz, Birgit

    2014-01-01

    Purpose To develop and test an algorithm to segment intraretinal layers irrespective of the actual Optical Coherence Tomography (OCT) device used. Methods The developed algorithm is based on graph-theory optimization. The algorithm's performance was evaluated against that of three expert graders for unsigned boundary position difference and thickness measurement of a retinal layer group in 50 and 41 B-scans, respectively. Reproducibility of the algorithm was tested in 30 C-scans of 10 healthy subjects, each with the Spectralis and the Stratus OCT. Comparability between different devices was evaluated in 84 C-scans (volume or radial scans) obtained from 21 healthy subjects: two scans per subject with the Spectralis OCT, and one scan per subject each with the Stratus OCT and the RTVue-100 OCT. Each C-scan was segmented and the mean thickness for each retinal layer in sections of the early treatment of diabetic retinopathy study (ETDRS) grid was measured. Results The algorithm was able to segment up to 11 intraretinal layers. Measurements with the algorithm were within the 95% confidence interval of a single grader, and the difference was smaller than the interindividual difference between the expert graders themselves. The cross-device examination of ETDRS-grid related layer thicknesses agreed highly between the three OCT devices. The algorithm correctly segmented a C-scan of a patient with X-linked retinitis pigmentosa. Conclusions The segmentation software provides device-independent, reliable, and reproducible analysis of intraretinal layers, similar to what is obtained from expert graders. Translational Relevance Potential applications of the software include routine clinical practice and multicenter clinical trials. PMID:24820053

  6. A Segment-Based Trajectory Similarity Measure in the Urban Transportation Systems.

    PubMed

    Mao, Yingchi; Zhong, Haishi; Xiao, Xianjian; Li, Xiaofang

    2017-03-06

    With the rapid spread of handheld smart devices with built-in GPS, trajectory data from GPS sensors has grown explosively. Trajectory data has spatio-temporal characteristics and rich information. Trajectory data processing techniques can mine the patterns of human activities and the moving patterns of vehicles in intelligent transportation systems. A trajectory similarity measure is one of the most important issues in trajectory data mining (clustering, classification, frequent pattern mining, etc.). Unfortunately, the main similarity measure algorithms for trajectory data have been found to be inaccurate, highly sensitive to sampling methods, and of low robustness to noisy data. To solve these problems, three distances and their corresponding computation methods are proposed in this paper. The point-segment distance decreases sensitivity to the point sampling methods. The prediction distance optimizes the temporal distance using the features of trajectory data. The segment-segment distance introduces the trajectory shape factor into the similarity measurement to improve accuracy. The three kinds of distance are integrated with the traditional dynamic time warping (DTW) algorithm to propose a new segment-based dynamic time warping algorithm (SDTW). The experimental results show that the SDTW algorithm achieves about 57%, 86%, and 31% better accuracy than the longest common subsequence algorithm (LCSS), the edit distance on real sequence algorithm (EDR), and DTW, respectively, and that its sensitivity to noisy data is lower than that of those algorithms.
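For reference, the classic DTW recursion that SDTW builds on; this is a textbook sketch of plain DTW, not the segment-based SDTW algorithm itself:

```python
import numpy as np

def dtw(a, b, dist=lambda p, q: abs(p - q)):
    """Classic dynamic time warping distance between two 1-D sequences.
    D[i, j] is the minimal cumulative cost of aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # step cost plus the cheapest of the three admissible predecessors
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1, j],
                                                    D[i, j - 1],
                                                    D[i - 1, j - 1])
    return D[n, m]
```

DTW's warping absorbs repeated samples (a key reason sampling-rate differences matter less than for Euclidean distance), which is the behavior SDTW refines with its segment-level distances.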

  7. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation into super-pixels is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs badly when based only on color images of indoor environments. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as the plane projection distance are extracted to compute the distance that is the core of SLIC-like frameworks. Experiments on RGB-D images from the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speeds.
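The distance at the core of a SLIC-like framework combines color, spatial, and (here) depth terms. A hedged sketch in that spirit; the weights, the dictionary layout, and the use of a simple depth difference as a stand-in for the paper's plane projection distance are all assumptions:

```python
import numpy as np

def combined_distance(px, center, s, m_color=10.0, m_depth=10.0):
    """SLIC-style combined distance with an extra depth term.
    s is the super-pixel grid step; m_color and m_depth are assumed weights."""
    dc = np.linalg.norm(px['lab'] - center['lab'])    # color distance (e.g., CIELAB)
    dd = abs(px['depth'] - center['depth'])           # stand-in for plane projection distance
    ds = np.linalg.norm(px['xy'] - center['xy'])      # spatial distance
    return float(np.sqrt((dc / m_color) ** 2 + (dd / m_depth) ** 2 + (ds / s) ** 2))
```

Each term is normalized separately, so depth discontinuities can separate pixels that are close in both color and image coordinates.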

  8. An Algorithm to Automate Yeast Segmentation and Tracking

    PubMed Central

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information about yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
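
    The set-of-thresholds idea can be sketched as a voting scheme: segment at every threshold in the set and keep pixels that are foreground in enough of the results. This toy `robust_threshold_segmentation` is an assumption-laden simplification; the paper's method additionally exploits the immobility and growth-rate priors and performs tracking:

```python
def robust_threshold_segmentation(image, thresholds, vote_fraction=0.5):
    """Combine segmentations from a set of thresholds by voting.

    image: 2D list of intensities. A pixel is kept as foreground if it
    exceeds at least `vote_fraction` of the thresholds, which is more
    robust than committing to one optimized threshold.
    """
    votes_needed = vote_fraction * len(thresholds)
    return [[sum(px > t for t in thresholds) >= votes_needed
             for px in row] for row in image]
```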

  9. Plate boundary reorganization in the active Banda Arc-continent collision: Insights from new GPS measurements

    NASA Astrophysics Data System (ADS)

    Nugroho, Hendro; Harris, Ron; Lestariya, Amin W.; Maruf, Bilal

    2009-12-01

    New GPS measurements reveal that large sections of the SE Asian Plate are progressively accreting to the edge of the Australian continent by distribution of strain away from the deformation front to forearc and backarc plate boundary segments. The study was designed to investigate relative motions across suspected plate boundary segments in the transition from subduction to collision. The oblique nature of the collision provides a way to quantify the spatial and temporal distribution of strain from the deformation front to the back arc. The 12 sites we measured from Bali to Timor included some from an earlier study and 7 additional stations, which extended the epoch of observation to ten years at many sites. The resulting GPS velocity field delineates at least three Sunda Arc-forearc regions, each around 500 km in strike-length, that show different amounts of coupling to the Australian Plate. Movement of these regions relative to SE Asia increases from 21% to 41% to 63% eastward toward the most advanced stages of collision. The regions are bounded by the deformation front to the south, the Flores-Wetar backarc thrust system to the north, and poorly defined structures on the sides. The suture zone between the NW Australian continental margin and the Sunda-Banda Arcs is still evolving, with more than 20 mm/yr of movement measured across the Timor Trough deformation front between Timor and Australia.

  10. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    NASA Astrophysics Data System (ADS)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active-contours, artificial intelligence, and graph-search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making. However, low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study has developed and implemented a shortest-path based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph-nodes based on their gradients. Boundary position indices (BPI) are computed from the transition between pixel intensities. The mean difference between BPIs of two consecutive layers quantifies individual layer thicknesses, which show statistically insignificant differences when compared to a previous study [for overall retina: p = 0.17, for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows10, core i5, 8GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation within the user-defined region-of-interest. The efficiency and reliability of this algorithm, even in noisy image conditions, makes it clinically applicable.
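
    A minimal sketch of shortest-path boundary delineation follows, assuming gradient-derived edge weights of the common form 2 - (g_a + g_b) + eps (the abstract does not give the paper's exact weighting). A virtual source feeds the left column, Dijkstra's algorithm finds the cheapest left-to-right path, and the path rides the strong-gradient row, i.e. the layer boundary:

```python
import heapq

def min_weight_boundary(grad):
    """Shortest-path layer boundary across a gradient image.

    grad: 2D list (rows x cols) of normalized gradient values in [0, 1].
    Strong gradients make cheap edges, so the minimal-weight path from
    the left edge to the right edge follows the layer boundary.
    Returns one row index per column.
    """
    rows, cols = len(grad), len(grad[0])
    eps = 1e-5
    done, parent = {}, {}
    # virtual source connects to every pixel in column 0 at zero cost
    pq = [(0.0, (r, 0), None) for r in range(rows)]
    heapq.heapify(pq)
    best = None
    while pq:
        d, node, par = heapq.heappop(pq)
        if node in done:
            continue
        done[node] = d
        parent[node] = par
        r, c = node
        if c == cols - 1:          # reached the right edge: done
            best = node
            break
        for dr in (-1, 0, 1):      # rightward 8-connected moves
            nr = r + dr
            if 0 <= nr < rows:
                w = 2.0 - (grad[r][c] + grad[nr][c + 1]) + eps
                heapq.heappush(pq, (d + w, (nr, c + 1), node))
    # backtrack the boundary as one row index per column
    path = []
    while best is not None:
        path.append(best)
        best = parent[best]
    return [r for r, _ in reversed(path)]
```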

  11. Dosimetric Impact of Using the Acuros XB Algorithm for Intensity Modulated Radiation Therapy and RapidArc Planning in Nasopharyngeal Carcinomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kan, Monica W.K., E-mail: kanwkm@ha.org.hk; Department of Physics and Materials Science, City University of Hong Kong, Hong Kong; Leung, Lucullus H.T.

    2013-01-01

    Purpose: To assess the dosimetric implications for the intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy with RapidArc (RA) of nasopharyngeal carcinomas (NPC) due to the use of the Acuros XB (AXB) algorithm versus the anisotropic analytical algorithm (AAA). Methods and Materials: Nine-field sliding window IMRT and triple-arc RA plans produced for 12 patients with NPC using AAA were recalculated using AXB. The dose distributions to multiple planning target volumes (PTVs) with different prescribed doses and critical organs were compared. The PTVs were separated into components in bone, air, and tissue. The change of doses computed by AXB due to air and bone, and the variation of the amount of dose change with the number of fields, were also studied using simple geometric phantoms. Results: Using AXB instead of AAA, the averaged mean dose to PTV70 (70 Gy was prescribed to PTV70) was found to be 0.9% and 1.2% lower for IMRT and RA, respectively. It was approximately 1% lower in tissue, 2% lower in bone, and 1% higher in air. The averaged minimum dose to PTV70 in bone was approximately 4% lower for both IMRT and RA, whereas it was approximately 1.5% lower for PTV70 in tissue. The decrease in target doses estimated by AXB was mostly contributed from the presence of bone, less from tissue, and none from air. A similar trend was observed for PTV60 (60 Gy was prescribed to PTV60). The doses to most serial organs were found to be 1% to 3% lower and to other organs 4% to 10% lower for both techniques. Conclusions: The use of the AXB algorithm is highly recommended for IMRT and RapidArc planning for NPC cases.

  12. Automatic graph-cut based segmentation of bones from knee magnetic resonance images for osteoarthritis research.

    PubMed

    Ababneh, Sufyan Y; Prescott, Jeff W; Gurcan, Metin N

    2011-08-01

    In this paper, a new, fully automated, content-based system is proposed for knee bone segmentation from magnetic resonance images (MRI). The purpose of the bone segmentation is to support the discovery and characterization of imaging biomarkers for the incidence and progression of osteoarthritis, a debilitating joint disease, which affects a large portion of the aging population. The segmentation algorithm includes a novel content-based, two-pass disjoint block discovery mechanism, which is designed to support automation, segmentation initialization, and post-processing. The block discovery is achieved by classifying the image content into bone and background blocks according to their similarity to the categories in the training data collected from typical bone structures. The classified blocks are then used to design an efficient graph-cut based segmentation algorithm. This algorithm requires constructing a graph using image pixel data followed by applying a maximum-flow algorithm, which generates a minimum graph-cut that corresponds to an initial image segmentation. Content-based refinements and morphological operations are then applied to obtain the final segmentation. The proposed segmentation technique does not require any user interaction and can distinguish between bone and highly similar adjacent structures, such as fat tissue, with high accuracy. The performance of the proposed system is evaluated by testing it on 376 MR images from the Osteoarthritis Initiative (OAI) database. This database included a selection of single images containing the femur and tibia from 200 subjects with varying levels of osteoarthritis severity. Additionally, a full three-dimensional segmentation of the bones from ten subjects with 14 slices each, and synthetic images with background having intensity and spatial characteristics similar to those of bone, are used to assess the robustness and consistency of the developed algorithm.
The results show an automatic bone detection rate of 0.99 and an average segmentation accuracy of 0.95 using the Dice similarity index. Copyright © 2011 Elsevier B.V. All rights reserved.
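
    The Dice similarity index reported above (and in several other records in this listing) is straightforward to compute from two binary masks:

```python
def dice_score(mask_a, mask_b):
    """Dice similarity index between two binary masks (flat sequences).

    Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 means no overlap.
    """
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```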

  13. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    PubMed

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p > 0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source based approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with larger data sets are areas of future work.
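
    The GrowCut idea evaluated above can be sketched as a cellular automaton: each cell carries a (label, strength) pair, and a neighbour conquers a cell when its strength, attenuated by the intensity difference, exceeds the cell's own. This is a simplified sketch of the general technique, not the specific open-source implementation assessed in the trial:

```python
def growcut(image, seeds, iters=20):
    """Minimal GrowCut cellular automaton on a 2D intensity image.

    seeds: dict (row, col) -> label. Unlabeled cells start with label 0
    and strength 0; seeds start with strength 1. Iterates synchronous
    "attacks" until labels stabilize (or iters is exhausted).
    """
    rows, cols = len(image), len(image[0])
    maxd = max(max(r) for r in image) - min(min(r) for r in image) or 1
    label = [[0] * cols for _ in range(rows)]
    strength = [[0.0] * cols for _ in range(rows)]
    for (r, c), lab in seeds.items():
        label[r][c], strength[r][c] = lab, 1.0
    for _ in range(iters):
        new_lab = [row[:] for row in label]
        new_str = [row[:] for row in strength]
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        # attenuation: similar intensities -> strong attack
                        g = 1.0 - abs(image[nr][nc] - image[r][c]) / maxd
                        attack = g * strength[nr][nc]
                        if attack > new_str[r][c]:
                            new_lab[r][c] = label[nr][nc]
                            new_str[r][c] = attack
        label, strength = new_lab, new_str
    return label
```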

  14. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, where the simultaneous delineation of multiple objects can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT stands behind efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then it applies diffusion filtering to smooth boundaries between objects (and background) and, finally, it corrects possible disconnections of objects with respect to their seeds. We evaluate DRIFT on 3D CT images of the thorax for segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.
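
    The core of an image foresting transform is a multi-source Dijkstra competition between seeds: each pixel receives the label of the seed that reaches it with the cheapest path. The sketch below uses the fmax path cost (maximum intensity step along the path), a common IFT choice; the abstract does not state which cost DRIFT uses, and the differential (incremental) machinery is omitted:

```python
import heapq

def ift_segment(image, seeds):
    """Seeded image foresting transform on a 2D intensity image.

    seeds: dict mapping (row, col) -> integer label. Returns a dict
    mapping every pixel to the label of the seed reaching it with the
    cheapest fmax path, via a multi-source Dijkstra.
    """
    rows, cols = len(image), len(image[0])
    cost, label = {}, {}
    pq = [(0, rc, lab) for rc, lab in seeds.items()]
    heapq.heapify(pq)
    while pq:
        c, (r, col), lab = heapq.heappop(pq)
        if (r, col) in cost:      # already conquered at lower cost
            continue
        cost[(r, col)] = c
        label[(r, col)] = lab
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, col + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in cost:
                # fmax path cost: the largest intensity step on the path
                step = abs(image[nr][nc] - image[r][col])
                heapq.heappush(pq, (max(c, step), (nr, nc), lab))
    return label
```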

  15. A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bensaid, Amine M.; Clarke, Laurence P.; Velthuizen, Robert P.; Silbiger, Martin S.; Bezdek, James C.

    1992-01-01

    Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms and a supervised computational neural network, a dynamic multilayer perceptron trained with the cascade correlation learning algorithm. Initial clinical results are presented on both normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. However, for a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed.
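
    A minimal fuzzy c-means for scalar intensities illustrates the unsupervised side of the comparison, using the standard membership and center updates with fuzzifier m; the initialization and stopping rule here are illustrative choices, not the paper's:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Fuzzy c-means clustering of scalar data (e.g. MR intensities).

    Returns (centers, memberships); memberships[i][k] is the degree to
    which data[i] belongs to cluster k (each row sums to 1).
    """
    lo, hi = min(data), max(data)
    # spread initial centers evenly over the data range (illustrative)
    centers = [lo + k * (hi - lo) / (c - 1) for k in range(c)]
    u = [[1.0 / c] * c for _ in data]
    power = 2.0 / (m - 1.0)
    for _ in range(iters):
        # membership update: inverse-distance weighting between centers
        for i, x in enumerate(data):
            d = [abs(x - v) + 1e-12 for v in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** power for j in range(c))
        # center update: fuzzily weighted means
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(data))]
            centers[k] = sum(wi * x for wi, x in zip(w, data)) / sum(w)
    return centers, u
```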

  16. On a methodology for robust segmentation of nonideal iris images.

    PubMed

    Schmid, Natalia A; Zuo, Jinyu

    2010-06-01

    Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms.

  17. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge

    PubMed Central

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-01-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Because we are dealing with MR images in particular, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics, which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations with run times of 8 minutes and 3 seconds per case, respectively. 
Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, in both accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598

  18. Compositional Variations of Paleogene and Neogene Tephra From the Northern Izu-Bonin-Mariana Arc

    NASA Astrophysics Data System (ADS)

    Tepley, F. J., III; Barth, A. P.; Brandl, P. A.; Hickey-Vargas, R.; Jiang, F.; Kanayama, K.; Kusano, Y.; Li, H.; Marsaglia, K. M.; McCarthy, A.; Meffre, S.; Savov, I. P.; Yogodzinski, G. M.

    2014-12-01

    A primary objective of IODP Expedition 351 was to evaluate arc initiation processes of the Izu-Bonin-Mariana (IBM) volcanic arc and its compositional evolution through time. To this end, a single thick section of sediment overlying oceanic crust was cored in the Amami Sankaku Basin, where a complete sediment record of arc inception and evolution is preserved. This sediment record includes ash and pyroclasts, deposited in fore-arc, arc, and back-arc settings, likely associated with both the ~49-25 Ma emergent IBM volcanic arc and the evolving Ryukyu-Kyushu volcanic arc. Our goal was to assess the major element evolution of the nascent and evolving IBM system using the temporally constrained record of the early and developing system. In all, more than 100 ash and tuff layers and pyroclastic fragments were selected from temporally resolved portions of the core, and from representative fractions of the overall core ("core catcher"). The samples were prepared to determine major and minor element compositions via electron microprobe analyses. This ash and pyroclast record will allow us to 1) resolve the Paleogene evolutionary history of the northern IBM arc in greater detail; 2) determine compositional variations of this portion of the IBM arc through time; 3) compare the acquired data to an extensive whole rock and tephra dataset from other segments of the IBM arc; 4) test hypotheses of northern IBM arc evolution and the involvement of different source reservoirs; and 5) identify important stratigraphic markers associated with the Neogene volcanic history of the adjacent evolving Ryukyu-Kyushu arc.

  19. Shape-Driven 3D Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2013-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of the brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875

  20. Active mask segmentation of fluorescence microscope images.

    PubMed

    Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena

    2009-08-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively and quantitatively.

  1. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, i.e. one resource for each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products, and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
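
    For intuition about the objective HGTA optimizes, two-terminal reliability of a tiny binary-state network can be computed exactly by enumerating arc states. Note the sketch assumes independent failures, unlike the correlated model in the article, and is exponential in the number of arcs, so it serves only as a sanity-check oracle for small examples:

```python
from itertools import product

def network_reliability(arcs, arc_probs, source, sink):
    """Exact two-terminal reliability by enumerating binary arc states.

    arcs: list of directed (u, v) edges; arc_probs[i] is the probability
    arc i works, with failures assumed independent.
    """
    def connected(up):
        # reachability over working arcs only
        seen, frontier = {source}, [source]
        while frontier:
            u = frontier.pop()
            for (a, b), ok in zip(arcs, up):
                if ok and a == u and b not in seen:
                    seen.add(b)
                    frontier.append(b)
        return sink in seen

    total = 0.0
    for state in product((0, 1), repeat=len(arcs)):
        p = 1.0
        for ok, pr in zip(state, arc_probs):
            p *= pr if ok else 1.0 - pr
        if connected(state):
            total += p
    return total
```

    Two parallel arcs of reliability 0.9 give 1 - 0.1^2 = 0.99; two in series give 0.81.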

  2. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) the ability to mitigate resolution limit problems, examined using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
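
    The NMI score used throughout these evaluations compares two community assignments via their mutual information, normalized here by the geometric mean of the entropies (one of several common normalizations):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two community assignments.

    labels_a[i] and labels_b[i] are the communities of node i under each
    partition; returns I(A;B) / sqrt(H(A) * H(B)), a value in [0, 1].
    """
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    # mutual information from the joint and marginal label frequencies
    mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    ha = -sum(c / n * math.log(c / n) for c in pa.values())
    hb = -sum(c / n * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb) if ha and hb else 1.0
```

    Identical partitions score 1; independent partitions score 0.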

  3. Three-dimensional modeling of the cochlea by use of an arc fitting approach.

    PubMed

    Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S

    2016-12-01

    A cochlea modeling approach is presented allowing for a user-defined degree of geometry simplification which automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner due to error estimation prior to the actual generation, thus minimizing modeling time. Therefore, the presented technique is well suited for a wide range of applications including finite element analyses, where geometrical simplifications are often inevitable. The method is presented for n=5 cochleae, which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Due to the volume generation based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method was proven to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
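
    Fitting an arc to a set of contour points reduces to a circle fit; the algebraic least-squares (Kåsa) method below is a common sketch of that step. The paper's own fitting procedure may differ in detail; this version solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense:

```python
def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method).

    points: list of (x, y) tuples (at least 3, not collinear).
    Returns (cx, cy, radius).
    """
    # normal equations A^T A w = A^T b, rows [x, y, 1], b = -(x^2 + y^2)
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        b = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * b
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # solve the 3x3 system by Gauss-Jordan elimination with pivoting
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    d, e, f = (m[i][3] / m[i][i] for i in range(3))
    cx, cy = -d / 2.0, -e / 2.0
    radius = (cx * cx + cy * cy - f) ** 0.5
    return cx, cy, radius
```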

  4. A Generalized Method for Automatic Downhand and Wirefeed Control of a Welding Robot and Positioner

    NASA Technical Reports Server (NTRS)

    Fernandez, Ken; Cook, George E.

    1988-01-01

    A generalized method for controlling a six degree-of-freedom (DOF) robot and a two DOF positioner used for arc welding operations is described. The welding path is defined in the part reference frame, and robot/positioner joint angles of the equivalent eight DOF serial linkage are determined via an iterative solution. Three algorithms are presented: the first solution controls motion of the eight DOF mechanism such that proper torch motion is achieved while minimizing the sum-of-squares of joint displacements; the second algorithm adds two constraint equations to achieve torch control while maintaining part orientation so that welding occurs in the downhand position; and the third algorithm adds the ability to control the proper orientation of a wire feed mechanism used in gas tungsten arc (GTA) welding operations. A verification of these algorithms is given using ROBOSIM, a NASA-developed computer graphic simulation software package designed for robot systems development.

  5. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a nonparametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
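
    The Bernaola-Galván family of segmentation methods is built on one primitive: scan every split point and keep the one maximizing Student's t between the left and right means. The sketch below implements that single step; the full method (and this paper's compound-Poisson generalization) recurses on both halves and keeps a split only when the t value is statistically significant:

```python
import math

def best_split(series):
    """Find the split index maximizing Student's t between the two halves.

    Returns (index, t_value); series[:index] and series[index:] are the
    candidate patches. Both sides need at least 2 points.
    """
    best_t, best_i = -1.0, None
    for i in range(2, len(series) - 1):
        t = _tstat(series[:i], series[i:])
        if t > best_t:
            best_t, best_i = t, i
    return best_i, best_t

def _tstat(a, b):
    """Two-sample t statistic with pooled variance."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    # pooled standard deviation of the difference of the means
    sd = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb)
                   / (len(a) + len(b) - 2)) * math.sqrt(1 / len(a) + 1 / len(b))
    return abs(ma - mb) / sd if sd else 0.0
```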

  6. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm.

    PubMed

    Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib

    2008-10-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (mumap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated mumaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. 
Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease in the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregularly shaped regions containing contrast medium was developed to widen the applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
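    The piecewise calibration curve mentioned above maps CT numbers to linear attenuation coefficients at 511 keV. A minimal sketch of such a mapping follows; the breakpoint values are illustrative assumptions, not the study's calibration:

```python
import numpy as np

# Breakpoints (HU, mu at 511 keV in 1/cm): illustrative values only,
# not the paper's calibration curve.
hu_pts = np.array([-1000.0, 0.0, 3000.0])
mu_pts = np.array([0.0, 0.096, 0.292])

def hu_to_mu(hu):
    """Piecewise-linear conversion of CT numbers to 511 keV attenuation
    coefficients, interpolating between the calibration breakpoints."""
    return np.interp(hu, hu_pts, mu_pts)
```

A different slope above 0 HU reflects that bone-like material attenuates 511 keV photons less, per unit HU, than soft tissue would extrapolate to.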

  7. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
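    The local topological feature mentioned above is based on Euler numbers. As a hedged illustration (not the authors' implementation), the Euler number of a binary region, objects minus holes, can be computed from Gray's 2x2 bit-quad counts:

```python
import numpy as np

def euler_number(mask):
    """Euler number (objects minus holes, 4-connectivity) of a binary
    image, computed from Gray's 2x2 bit-quad counts."""
    p = np.pad(mask.astype(int), 1)
    # foreground count inside every 2x2 window
    q = p[:-1, :-1] + p[:-1, 1:] + p[1:, :-1] + p[1:, 1:]
    # diagonal quads: opposite corners equal, adjacent corners different
    qd = ((p[:-1, :-1] == p[1:, 1:]) & (p[:-1, 1:] == p[1:, :-1])
          & (p[:-1, :-1] != p[:-1, 1:])).sum()
    return ((q == 1).sum() - (q == 3).sum() + 2 * qd) // 4

solid = np.ones((5, 5), dtype=bool)        # one object, no hole -> 1
ring = solid.copy(); ring[2, 2] = False    # one object, one hole -> 0
```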

  8. Parameter estimation of extended free-burning electric arc within 1 kA

    NASA Astrophysics Data System (ADS)

    Sun, Qiuqin; Liu, Hao; Wang, Feng; Chen, She; Zhai, Yujia

    2018-05-01

    A long electric arc, as a common phenomenon in the power system, not only damages electrical equipment but also threatens the safety of the system. In this work, a series of tests on a long electric arc in free air was conducted. The arc voltage and current data were obtained, and the arc trajectories were captured using a high-speed camera. The arc images were digitally processed by means of edge detection, and the arc length was formulated and extracted. Based on the experimental data, the characteristics of the long arc are discussed. The arc voltage waveform is close to a square wave with high-frequency components, whereas the current is almost sinusoidal. As the arc elongates, the arc voltage and resistance increase sharply. The arc takes a spiral shape under the effect of magnetic forces, and its length shortens briefly when the short-circuit phenomenon occurs. Based on the classical Mayr model, the parameters of the long electric arc, including voltage gradient and time constant, are estimated for different lengths and current amplitudes using the linear least-squares method. To reduce the computational error, segmentation interpolation is also employed. The results show that the voltage gradient of the long arc is mainly determined by the current amplitude but is almost independent of the arc length, whereas the time constant is jointly governed by these two variables. For current amplitudes of 200-800 A, the voltage gradient lies in the range of 3.9-20 V/cm and decreases with increasing current.
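    The least-squares step above can be sketched as follows. The Mayr equation d(ln g)/dt = (u*i)/(tau*P0) - 1/tau is linear in a = 1/(tau*P0) and b = -1/tau, so both parameters follow from one regression. This is a minimal sketch on synthetic data; the waveform, amplitudes, and forward-Euler simulation are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def estimate_mayr(u, i, t):
    """Fit Mayr-model parameters by linear least squares.
    Model: d(ln g)/dt = (u*i)/(tau*P0) - 1/tau with conductance g = i/u,
    which is linear in a = 1/(tau*P0) and b = -1/tau."""
    g = i / u
    y = np.gradient(np.log(g), t)            # d(ln g)/dt by finite differences
    A = np.column_stack([u * i, np.ones_like(t)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    tau = -1.0 / b
    return tau, 1.0 / (a * tau)              # (time constant, power loss)

# synthetic arc generated from the same model by forward Euler
tau_true, P0_true = 2e-4, 1.5e4
t = np.linspace(0, 0.02, 2000)
i = 400 * np.sin(2 * np.pi * 50 * t) + 450   # offset keeps the current positive
g = np.empty_like(t)
g[0] = i[0] ** 2 / P0_true                   # start at quasi-steady conductance
dt = t[1] - t[0]
for k in range(1, len(t)):
    u_prev = i[k - 1] / g[k - 1]
    dlng = (u_prev * i[k - 1] / P0_true - 1.0) / tau_true
    g[k] = g[k - 1] * np.exp(dlng * dt)
tau_est, P0_est = estimate_mayr(i / g, i, t)
```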

  9. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low-contrast boundaries, variability in shape and location, and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic approaches for healthy pancreas segmentation, which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, finer results are attained with semi-automatic, steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts, with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
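    The region-growing half of the combination above can be sketched as a simple seeded flood that accepts neighbours close to the running region mean; the toy image, seed, and tolerance below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity is within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += img[nr, nc]
                    count += 1
                    q.append((nr, nc))
    return mask

img = np.zeros((10, 10)); img[2:6, 2:6] = 1.0   # bright square "organ"
mask = region_grow(img, (3, 3), tol=0.5)
```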

  10. Easy-interactive and quick psoriasis lesion segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Guoli; He, Bei; Yang, Wenming; Shu, Chang

    2013-12-01

    This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population worldwide. PASI (Psoriasis Area and Severity Index) is the gold standard used by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment, and psoriasis lesion segmentation is the basis of the whole calculation. This segmentation differs from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is produced by thresholding a single Gaussian model; the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
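    The refinement stage above can be sketched with two colour models scored against each other. As a hedged stand-in, single full-covariance Gaussians replace the paper's GMMs, and the Cb/Cr samples are synthetic:

```python
import numpy as np

def fit_gauss(x):
    """Mean and covariance of one Gaussian colour model."""
    return x.mean(0), np.cov(x, rowvar=False)

def log_pdf(x, mu, cov):
    """Gaussian log-density up to a shared constant (fine for comparison)."""
    d = x - mu
    inv = np.linalg.inv(cov)
    return (-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
            - 0.5 * np.log(np.linalg.det(cov)))

rng = np.random.default_rng(0)
# hypothetical Cb/Cr samples from the rough initial skin/lesion split
skin = rng.normal([140, 110], 5, size=(300, 2))
lesion = rng.normal([120, 150], 5, size=(300, 2))

# refinement step: relabel every sample by the better-scoring model
models = fit_gauss(skin), fit_gauss(lesion)
pixels = np.vstack([skin, lesion])
is_lesion = log_pdf(pixels, *models[1]) > log_pdf(pixels, *models[0])
```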

  11. Segmentation of vessels: the corkscrew algorithm

    NASA Astrophysics Data System (ADS)

    Wesarg, Stefan; Firle, Evelyn A.

    2004-05-01

    Medical imaging is nowadays much more than only providing data for diagnosis. It also links 'classical' diagnosis to modern forms of treatment such as image-guided surgery. Those systems require the identification of organs, anatomical regions of the human body, etc., i.e., the segmentation of structures from medical data sets. The algorithms used for these segmentation tasks strongly depend on the object to be segmented. One type of structure that plays an important role in surgery planning is the vessels found everywhere in the human body. Several approaches for their extraction already exist. However, there is no general one that is suitable for all types of data or all sorts of vascular structures. This work presents a new algorithm for the segmentation of vessels. It can be classified as a skeleton-based approach working on 3D data sets, and has been designed for a reliable segmentation of coronary arteries. The algorithm is a semi-automatic extraction technique requiring the definition of the start and end points of the (centerline) path to be found. A first estimate of the vessel's centerline is calculated and then corrected iteratively by detecting the vessel's border perpendicular to the centerline. We used contrast-enhanced CT data sets of the thorax to test our approach. Coronary arteries were extracted from the data sets using the 'corkscrew algorithm' presented in this work. The segmentation turned out to be robust even when moderate breathing artifacts were present in the data sets.

  12. The segmentation of Thangka damaged regions based on the local distinction

    NASA Astrophysics Data System (ADS)

    Xuehui, Bi; Huaming, Liu; Xiuyou, Wang; Weilan, Wang; Yashuai, Yang

    2017-01-01

    Damaged regions must be segmented before digitally repairing Thangka cultural relics. A new segmentation algorithm based on local distinction is proposed for segmenting damaged regions. It takes into account that some damaged areas exhibit a transition-zone feature, as well as the difference between the damaged regions and their surrounding regions, by combining local gray value, local complexity, and local definition-complexity (LDC). First, the local complexity is calculated and normalized; second, the local definition-complexity is calculated and normalized; third, the local distinction is calculated; finally, a threshold is set to segment the local distinction image, over-segmentation is removed, and the final segmentation result is obtained. The experimental results show that our algorithm is effective and can segment damaged frescoes, natural images, etc.
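    The calculate-normalize-threshold pipeline above can be sketched with a generic local measure. Here local standard deviation stands in for the paper's local complexity and definition-complexity, which are not specified in the abstract:

```python
import numpy as np

def local_measure(img, k=3):
    """Sliding-window local standard deviation (a stand-in for the paper's
    local complexity), normalized to [0, 1]."""
    h, w = img.shape
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = p[r:r + k, c:c + k].std()
    span = out.max() - out.min()
    return (out - out.min()) / span if span else out

# threshold the normalized map to flag candidate damaged pixels: the measure
# fires on the transition zone around the square, not on uniform areas
img = np.zeros((12, 12)); img[4:8, 4:8] = 1.0
mask = local_measure(img) > 0.5
```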

  13. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jinzhong; Aristophanous, Michalis, E-mail: MAristophanous@mdanderson.org; Beadle, Beth M.

    2015-09-15

    Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation–maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the “ground truth” for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6–44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8–45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2–38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55–0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. Conclusions: The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency of target definition for radiotherapy.
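    The multichannel Gaussian mixture step can be sketched with a plain EM loop (the Markov-random-field regularization is omitted); the three synthetic channels below merely stand in for resampled CT, PET, and MR intensities:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic voxels with three channels standing in for CT / PET / MR values
bg = rng.normal([0.2, 0.1, 0.3], 0.05, size=(500, 3))
tumour = rng.normal([0.6, 0.8, 0.7], 0.05, size=(500, 3))
X = np.vstack([bg, tumour])

# two-component isotropic Gaussian mixture fitted by plain EM
mu = np.stack([np.quantile(X, 0.1, axis=0), np.quantile(X, 0.9, axis=0)])
var = np.full(2, X.var())
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities under isotropic Gaussians
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    logp = np.log(w) - 0.5 * d2 / var - 1.5 * np.log(var)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and per-component variances
    n = resp.sum(axis=0)
    w = n / n.sum()
    mu = (resp.T @ X) / n[:, None]
    var = np.array([(resp[:, k] * ((X - mu[k]) ** 2).sum(1)).sum() / (3 * n[k])
                    for k in range(2)])
labels = resp.argmax(axis=1)   # hard voxel labels after convergence
```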

  14. Preliminary performance and life evaluations of a 2-kW arcjet

    NASA Technical Reports Server (NTRS)

    Morren, W. Earl; Curran, Francis M.

    1991-01-01

    The first results of a program to expand the operational envelope of low-power arcjets to higher specific impulse and power levels are presented. The performance of a kW-class laboratory model arcjet thruster was characterized at three mass flow rates of a 2:1 mixture of hydrogen and nitrogen at power levels ranging from 1.0 to 2.0 kW. This same thruster was then operated for a total of 300 h at a specific impulse and power level of 550 s and 2.0 kW, respectively, in three continuous 100-h sessions. Thruster operation during the three test segments was stable, and no measurable performance degradation was observed during the test series. Substantial cathode erosion was observed during an inspection following the second 100-h test segment. Most notable was the migration of material from the center of the cathode tip to a ring around a large crater. The anode sustained no significant damage during the endurance test segments. Some difficulty was encountered during start-up after disassembly and inspection following the second 100-h test segment, which caused constrictor erosion. This resulted in a reduced flow restriction and arc chamber pressure, which in turn caused a reduction in the arc impedance.

  15. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  16. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One process for phasing primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS), an elegant method of coarse phasing segmented mirrors that is essentially a signal fitting and processing operation. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical, and DFS technology can be used to co-phase the ACFs. DFS performance accuracy depends on careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line: in essence, ADFS applies an angular extraction-line dithering procedure and combines it with an error function while minimizing the phase term of the fitted signal.

  17. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  18. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    PubMed

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
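    Genetic control of a segmentation threshold can be sketched with a minimal real-coded genetic algorithm that maximizes the F-score against a reference mask; the synthetic "thermal" frame and the GA settings below are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True    # synthetic crack mask
img = rng.normal(0.3, 0.05, gt.shape); img[gt] += 0.4   # noisy "thermal" frame

def f_score(th):
    """F-score of the thresholded image against the reference mask."""
    pred = img > th
    tp = (pred & gt).sum(); fp = (pred & ~gt).sum(); fn = (~pred & gt).sum()
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# minimal real-coded GA over the threshold: selection + blend + mutation
pop = rng.uniform(img.min(), img.max(), 20)
for _ in range(40):
    fit = np.array([f_score(t) for t in pop])
    parents = pop[np.argsort(fit)[-10:]]            # truncation selection
    kids = (parents[rng.integers(0, 10, 10)] +
            parents[rng.integers(0, 10, 10)]) / 2   # blend crossover
    kids += rng.normal(0, 0.01, 10)                 # mutation
    pop = np.concatenate([parents, kids])
best = pop[np.argmax([f_score(t) for t in pop])]
```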

  19. Multitarget Tracking Studies.

    DTIC Science & Technology

    1981-07-01

    … evidence indicates that pole and zero locations have a significant effect on convergence rates … LATTICE ALGORITHMS (LADO): 1. The Normalized AR Lattice (ARN). This algorithm implements the normalized algorithm described in [23]. The AR coefficients are

  20. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-07

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
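    The decision-tree prediction idea can be sketched in miniature: fit one tiny regression tree per segmentation method to predict its DSC from a tumour feature, then pick the method with the highest prediction. Depth-1 stumps, a single volume feature, and the two hypothetical method names below are all simplifying assumptions, not the ATLAAS model:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical training data: tumour volume and the DSC each of two
# segmentation methods achieved on scans with known true contour
vol = rng.uniform(5, 50, 200)
dsc = {"threshold": np.where(vol < 20, 0.85, 0.65) + rng.normal(0, 0.02, 200),
       "region_grow": np.where(vol < 20, 0.70, 0.80) + rng.normal(0, 0.02, 200)}

def fit_stump(x, y):
    """Depth-1 regression tree: best split point plus the two leaf means."""
    best = None
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = y[x <= s], y[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1:]

stumps = {m: fit_stump(vol, y) for m, y in dsc.items()}

def pick_method(v):
    """Predict each method's DSC for volume v and return the best method."""
    pred = {m: (lo if v <= s else hi) for m, (s, lo, hi) in stumps.items()}
    return max(pred, key=pred.get)
```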

  1. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.

  2. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many unconnected neurons will pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing was designed for grayscale images and cannot be directly used for color image segmentation. In addition, super-pixels better preserve image edges and, at the same time, reduce the influence of individual pixel differences on the segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Whether growing stops is then determined by comparing the average of each color channel over all the pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast, effective, and accurate.
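    The per-channel growth criterion described above can be sketched on hypothetical super-pixel data; the mean colours and the adjacency map below are assumed inputs, not the paper's pipeline:

```python
import numpy as np

# hypothetical super-pixel mean colours (RGB) and adjacency relations
means = np.array([[200, 40, 40],
                  [198, 42, 39],
                  [60, 180, 60],
                  [202, 38, 41]], dtype=float)
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}

def grow(seed, tol=10.0):
    """Grow a region of super-pixels: a neighbour joins when every colour
    channel of its mean is within `tol` of the region's running mean."""
    region, frontier = {seed}, [seed]
    while frontier:
        cur = frontier.pop()
        rmean = means[list(region)].mean(axis=0)
        for nb in adj[cur]:
            if nb not in region and np.all(np.abs(means[nb] - rmean) <= tol):
                region.add(nb)
                frontier.append(nb)
    return region
```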

  3. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans.

    PubMed

    Mendrik, Adriënne M; Vincken, Koen L; Kuijf, Hugo J; Breeuwer, Marcel; Bouvy, Willem H; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Persson, Mikael; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A; Vrooman, Henri A; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.
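    Two of the three evaluation metrics are compact enough to state directly; this sketch shows Dice overlap and an absolute-volume-difference measure on toy masks (H95 is omitted, and this is not the challenge's evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def abs_volume_difference(a, b):
    """Absolute volume difference as a percentage of the reference volume."""
    return 100.0 * abs(int(a.sum()) - int(b.sum())) / b.sum()

gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True     # reference: 36 voxels
seg = np.zeros((10, 10), bool); seg[3:8, 2:8] = True   # result: 30 voxels
```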

  4. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    PubMed Central

    Mendrik, Adriënne M.; Vincken, Koen L.; Kuijf, Hugo J.; Breeuwer, Marcel; Bouvy, Willem H.; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R.; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A.; Vrooman, Henri A.; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A.

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand. PMID:26759553

  5. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

    To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Rényi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared to mutual information or other entropy-based metrics; the MI metric failed at around 2/3 of the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
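    The Jensen-Rényi divergence at the heart of the method compares the Rényi entropy of a mixture with the mean of the individual entropies. A minimal sketch for discrete distributions follows (the choice alpha = 0.5 is an assumption; the paper's alpha is not given in the abstract):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of a discrete distribution (alpha != 1)."""
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def jensen_renyi(p, q, alpha=0.5):
    """Jensen-Rényi divergence with equal weights: entropy of the mixture
    minus the mean of the individual entropies. Nonnegative for alpha in
    (0, 1), where the Rényi entropy is concave."""
    mix = (p + q) / 2.0
    return renyi_entropy(mix, alpha) - 0.5 * (renyi_entropy(p, alpha)
                                              + renyi_entropy(q, alpha))

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
```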

  6. Direct localization of poles of a meromorphic function from measurements on an incomplete boundary

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Ando, Shigeru

    2010-01-01

    This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.
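    The weighted contour integrals above reduce, in the simplest case of a single simple pole with unit residue inside a closed arc, to a direct recovery of the pole position; a numerical sketch follows (the pole location and contour are arbitrary illustrative choices):

```python
import numpy as np

# For f(z) = 1/(z - a) with the pole enclosed by a closed contour,
# (1/2*pi*i) * integral of z*f(z) dz over the contour equals a.
a = 0.3 + 0.2j
theta = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
z = 0.5 + np.exp(1j * theta)                    # unit circle centred at 0.5
dz = 1j * np.exp(1j * theta) * (theta[1] - theta[0])
f = 1.0 / (z - a)
a_est = (z * f * dz).sum() / (2j * np.pi)       # trapezoidal contour integral
```

For a periodic analytic integrand the trapezoidal rule converges spectrally, so a few thousand samples recover the pole to machine precision.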

  7. Smart markers for watershed-based cell segmentation.

    PubMed

    Koyuncu, Can Fahrettin; Arslan, Salim; Durmaz, Irem; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2012-01-01

    Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation, which greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations, even by human subjects. Approaches that can incorporate this knowledge into their segmentation algorithms have the potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which marker identification is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.

  8. Graph-based surface reconstruction from stereo pairs using image segmentation

    NASA Astrophysics Data System (ADS)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.
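
Modeling disparity inside a colour segment by a planar equation reduces to a small least-squares fit per segment; a minimal sketch with synthetic pixel coordinates and disparities:

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares plane d = a*x + b*y + c over one segment's pixels."""
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    (a, b, c), *_ = np.linalg.lstsq(A, ds, rcond=None)
    return a, b, c
```

In the full algorithm such per-segment planes are then clustered into disparity layers, and graph cuts decide which layer each segment belongs to.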

  9. Knowledge-based low-level image analysis for computer vision systems

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
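
The second algorithm's idea, extracting regions adaptively from local histograms, can be sketched as per-window histogram thresholding. Otsu's criterion is used below as a generic stand-in for the paper's local-histogram rule, and 8-bit-style intensities are assumed.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximizing between-class variance of the local histogram."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 256.0))
    hist = hist.astype(float)
    w = hist.cumsum()                        # cumulative counts
    mu = (hist * np.arange(bins)).cumsum()   # cumulative intensity sums
    total, mu_t = w[-1], mu[-1]
    w0, mu0 = w[:-1], mu[:-1]
    w1 = total - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(bins - 1)
    between[valid] = (mu_t * w0 - total * mu0)[valid] ** 2 / (w0 * w1)[valid]
    return np.argmax(between)                # threshold as a bin index

def windowed_segmentation(image, win=16):
    """Adaptive segmentation: threshold each window from its own histogram.
    (Uniform windows degenerate to threshold 0; a real system would guard this.)"""
    out = np.zeros(image.shape, dtype=bool)
    for r in range(0, image.shape[0], win):
        for c in range(0, image.shape[1], win):
            block = image[r:r + win, c:c + win]
            out[r:r + win, c:c + win] = block > otsu_threshold(block)
    return out
```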

  10. Preliminary Analysis of Low-Thrust Gravity Assist Trajectories by An Inverse Method and a Global Optimization Technique.

    NASA Astrophysics Data System (ADS)

    de Pascale, P.; Vasile, M.; Casotto, S.

    The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences the convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives of exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a broad number of design parameters, and thanks to the flexibility offered by such engines, they typically exhibit a multi-modal domain with a consequently larger number of optimal solutions. Thus the definition of first guesses is even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of optimal candidate solutions and possibly the globally optimal one. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model in order to account for the beneficial effects of low-thrust propulsion on changes of inclination, resorting to a new methodology based on an inverse method. The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, subject to the constraint set, is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm consists of two steps. The evolutionary search locates a large number of optima, and eventually the global one, while the deterministic step consists of a branching process that exhaustively partitions the domain in order to characterize this complex space of solutions extensively. Furthermore, the approach implements a novel direct constraint-handling technique allowing the treatment of mixed-integer nonlinear programming (MINLP) problems typical of multiple-swingby trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the features of the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, proving the effective capability of the proposed tool.
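
The core of the inverse approach, prescribing the trajectory's evolution and reading the required thrust acceleration off the dynamics, can be sketched in a planar toy problem with normalized units; the numerical differentiation below stands in for the fully algebraic parameterization described above.

```python
import numpy as np

MU = 1.0  # gravitational parameter in normalized units (an assumption of this sketch)

def required_thrust(traj, times):
    """Inverse method in miniature: given a prescribed trajectory r(t), the
    thrust acceleration follows from the dynamics as
        a_thrust = r'' + MU * r / |r|^3.
    traj: (n, 2) positions sampled at `times`; derivatives are taken
    numerically here, whereas a polynomial parameterization would make
    this step fully algebraic."""
    vel = np.gradient(traj, times, axis=0)
    acc = np.gradient(vel, times, axis=0)
    rnorm = np.linalg.norm(traj, axis=1, keepdims=True)
    return acc + MU * traj / rnorm ** 3
```

A circular Keplerian orbit, which needs no thrust, is a quick sanity check: the recovered thrust acceleration is near zero away from the end points.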

  11. Melanesian arc far-field response to collision of the Ontong Java Plateau: Geochronology and petrogenesis of the Simuku Igneous Complex, New Britain, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Holm, Robert J.; Spandler, Carl; Richards, Simon W.

    2013-09-01

    Understanding the evolution of the mid-Cenozoic Melanesian arc is critical for our knowledge of the regional tectonic development of the Australian-Pacific plate margin, yet there have been no recent studies to constrain the nature and timing of magmatic activity in this arc segment. In particular, there are currently no robust absolute age constraints at the plate margin related to either the initiation or cessation of subduction and arc magmatism. We present the first combined U-Pb zircon geochronology and geochemical investigation into the evolution of the Melanesian arc utilizing a comprehensive sample suite from the Simuku Igneous Complex of West New Britain, Papua New Guinea. Development of the embryonic island arc from at least 40 Ma and progressive arc growth was punctuated by distant collision of the Ontong Java Plateau and subduction cessation from 26 Ma. This change in subduction dynamics is represented in the Melanesian arc magmatic record by emplacement of the Simuku Porphyry Complex between 24 and 20 Ma. Petrological and geochemical affinities highlight genetic differences between 'normal' arc volcanics and adakite-like signatures of Cu-Mo mineralized porphyritic intrusives. The contemporaneous emplacement of both 'normal' arc volcanics and adakite-like porphyry intrusives may provide avenues for future research into the origin of diverse styles of arc volcanism. Not only is this one of few studies into the geology of the Melanesian arc, it is also among the first to address the distant tectono-magmatic effects of major arc/forearc collision events and subduction cessation on magmatic arcs, and also offers insight into the tectonic context of porphyry formation in island arc settings.

  12. Intra-Arc extension in Central America: Links between plate motions, tectonics, volcanism, and geochemistry

    NASA Astrophysics Data System (ADS)

    Phipps Morgan, Jason; Ranero, Cesar; Vannucchi, Paola

    2010-05-01

    This study revisits the kinematics and tectonics of Central America subduction, synthesizing observations of marine bathymetry, high-resolution land topography, current plate motions, and the recent seismotectonic and magmatic history in this region. The inferred tectonic history implies that the Guatemala-El Salvador and Nicaraguan segments of this volcanic arc have been a region of significant arc tectonic extension; extension arising from the interplay between subduction roll-back of the Cocos Plate and the ~10-15 mm/yr slower westward drift of the Caribbean plate relative to the North American Plate. The ages of belts of magmatic rocks paralleling both sides of the current Nicaraguan arc are consistent with long-term arc-normal extension in Nicaragua at the rate of ~5-10 mm/yr, in agreement with rates predicted by plate kinematics. Significant arc-normal extension can 'hide' a very large intrusive arc-magma flux; we suggest that Nicaragua is, in fact, the most magmatically robust section of the Central American arc, and that the volume of intrusive volcanism here has been previously greatly underestimated. Yet, this flux is hidden by the persistent extension and sediment infill of the rifting basin in which the current arc sits. Observed geochemical differences between the Nicaraguan arc and its neighbors, which suggest that Nicaragua has a higher rate of arc magmatism, are consistent with this interpretation. Smaller-amplitude, but similar systematic geochemical correlations between arc-chemistry and arc-extension in Guatemala show the same pattern as the even larger variations between the Nicaraguan arc and its neighbors. We are also exploring the potential implications of intra-arc extension for deformation processes along the subducting plate boundary and within the forearc 'microplate'.

  13. VizieR Online Data Catalog: Properties of giant arcs behind CLASH clusters (Xu+, 2016)

    NASA Astrophysics Data System (ADS)

    Xu, B.; Postman, M.; Meneghetti, M.; Seitz, S.; Zitrin, A.; Merten, J.; Maoz, D.; Frye, B.; Umetsu, K.; Zheng, W.; Bradley, L.; Vega, J.; Koekemoer, A.

    2018-01-01

    Giant arcs are found in the CLASH images and in simulated images that mimic the CLASH data, using an efficient automated arc-finding algorithm whose selection function has been carefully quantified. CLASH is a 524-orbit multicycle treasury program that targeted 25 massive clusters spanning the redshift range 0.18 < z < 0.90. Arcs with length-to-width ratio >= 6.5 are identified in 20 X-ray-selected CLASH clusters. After applying our minimum arc length criterion l >= 6", the arc count drops to 81 giant arcs selected from the 20 X-ray-selected CLASH clusters. (2 data files).

  14. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    PubMed

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free software. © 2017, National Ground Water Association.
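
A minimal NumPy rendering of the horizontal-translation idea (the original is an Excel VBA tool; unit-step sampling of each recession segment and linear interpolation of the preceding curve are assumptions of this sketch):

```python
import numpy as np

def build_mrc(segments):
    """Overlap recession segments into a master recession curve (MRC).
    Each segment is a 1D array of declining values at unit time steps.
    Following the horizontal-translation idea, each segment's vertex
    (its highest value) is slid onto the interpolated preceding curve."""
    t0 = np.arange(len(segments[0]), dtype=float)
    master_t, master_v = list(t0), list(segments[0])
    for seg in segments[1:]:
        v0 = seg[0]  # vertex: the highest value of a recession segment
        # time at which the current master passes through v0 (values
        # decrease, so reverse both arrays for np.interp's ascending order)
        t_star = np.interp(v0, master_v[::-1], master_t[::-1])
        master_t.extend(t_star + np.arange(len(seg)))
        master_v.extend(seg)
    order = np.argsort(master_t)
    return np.array(master_t)[order], np.array(master_v)[order]
```

Two segments drawn from the same exponential recession merge into one smoothly declining master curve.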

  15. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on Deep Convolutional Networks were implemented and applied in three ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  16. Finding topological center of a geographic space via road network

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Miao, Yanan; Qin, Yuhao; Zhao, Xiaomei; Gao, Zi-You

    2015-02-01

    Previous studies show that the center of a geographic space is of great importance in urban and regional studies, including the study of population distribution, urban growth modeling, and scaling properties of urban systems. But how to well define, and how to efficiently extract, the center of a geographic space is still largely unknown. Recently, Jiang et al. presented a definition of the topological center via their block detection (BD) algorithm. Although they first introduced the definition and discovered the 'true center' in human minds, their algorithm leaves several redundancies in its traversal process. Here, we propose an alternative road-cycle detection (RCD) algorithm to find the topological center, which extracts the outermost road-cycle recursively. To foster the application of the topological center in related research fields, we first reproduce the BD algorithm in Python (pyBD), then implement the RCD algorithm in two ways: an ArcPy implementation (arcRCD) and a pure-Python implementation (pyRCD). After experiments on twenty-four typical road networks, we find that the results of our RCD algorithm are consistent with those of Jiang's BD algorithm. We also find that the RCD algorithm is at least seven times more efficient than the BD algorithm on all ten typical road networks.

  17. Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Chowdary, Myneni Sushma; Mokkapati, Swapnasri; Umbaugh, Scott E.

    2008-03-01

    Effective timing and treatment are critical to saving the sight of patients with diabetes. Lack of screening, as well as a shortage of ophthalmologists, contributes to approximately 8,000 cases per year of people who lose their sight to diabetic retinopathy, the leading cause of new cases of blindness [1] [2]. Timely treatment for diabetic retinopathy prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and monitoring eye-related diseases, like diabetic retinopathy, which, if detected early, may help prevent vision loss. Damaged blood vessels can indicate the presence of diabetic retinopathy [9]. So, early detection of damaged vessels in retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss. Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms. Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools software environment. Another set of fifteen images was derived from the first fifteen and contained ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold standard" for perfect segmentation and compared with the segmented images that were output by the two algorithms. Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM), Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10% higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well.
Algorithm 2 outperformed Algorithm 1 in terms of visual clarity, FOM and SNR. The performances of these algorithms show that they have an appreciable amount of potential in helping ophthalmologists detect the severity of eye-related diseases and prevent vision loss.
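
Pratt's Figure of Merit used in this comparison can be computed directly from a distance transform of the hand-drawn gold standard; a sketch with the customary scaling constant alpha = 1/9:

```python
import numpy as np
from scipy import ndimage as ndi

def pratt_fom(detected, gold, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit between a detected vessel/edge map and a
    hand-traced gold standard (both boolean arrays):
        FOM = (1 / max(N_gold, N_det)) * sum_i 1 / (1 + alpha * d_i^2)
    where d_i is each detected pixel's distance to the nearest gold pixel."""
    dist = ndi.distance_transform_edt(~gold)   # distance to nearest gold pixel
    d = dist[detected]
    denom = max(gold.sum(), detected.sum())
    return np.sum(1.0 / (1.0 + alpha * d ** 2)) / denom
```

A perfect match scores 1.0; a tracing shifted by one pixel everywhere scores 1/(1 + 1/9) = 0.9.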

  18. Tracking trade transactions in water resource systems: A node-arc optimization formulation

    NASA Astrophysics Data System (ADS)

    Erfani, Tohid; Huskova, Ivana; Harou, Julien J.

    2013-05-01

    We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix, which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading into ones that also track individual supplier-receiver relationships (trade transactions).
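
The node-arc idea, writing flow conservation straight from an incidence matrix with no path enumeration, can be sketched as a single-commodity minimum-cost-flow LP; the paper's multicommodity, transaction-tracking version would replicate the flow variables per supplier-receiver pair.

```python
import numpy as np
from scipy.optimize import linprog

def min_cost_flow(n_nodes, arcs, costs, caps, supply):
    """Node-arc formulation: conservation constraints come straight from
    the node-arc incidence matrix, so no flow paths are enumerated.
    arcs: list of (tail, head); supply: net supply per node
    (positive at sources, negative at sinks)."""
    A = np.zeros((n_nodes, len(arcs)))
    for j, (u, v) in enumerate(arcs):
        A[u, j] = 1.0    # arc leaves node u
        A[v, j] = -1.0   # arc enters node v
    res = linprog(c=costs, A_eq=A, b_eq=supply,
                  bounds=[(0, cap) for cap in caps], method="highs")
    return res.x
```

On a three-node network where the two-arc path is cheaper than the direct arc, all flow takes the path.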

  19. Two-Phase and Graph-Based Clustering Methods for Accurate and Efficient Segmentation of Large Mass Spectrometry Images.

    PubMed

    Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine

    2017-11-07

    Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU- and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
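
The two-phase pattern, cluster a subsample and then assign everything else to the learned clusters, can be sketched as follows; plain k-means with farthest-point initialization is used here as a stand-in for the paper's graph-based phase-one algorithm.

```python
import numpy as np

def two_phase_cluster(X, k, sample_size, n_iter=10, seed=0):
    """Phase 1: k-means on a random subsample (a stand-in for the paper's
    graph-based clustering). Phase 2: assign every point in the full,
    possibly huge, data set to the nearest learned centroid."""
    rng = np.random.default_rng(seed)
    sample = X[rng.choice(len(X), size=sample_size, replace=False)]
    # farthest-point initialization spreads the initial centroids out
    centroids = [sample[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(sample - c, axis=1) for c in centroids],
                   axis=0)
        centroids.append(sample[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(n_iter):
        assign = np.linalg.norm(sample[:, None] - centroids[None],
                                axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = sample[assign == j].mean(axis=0)
    # phase 2: nearest-centroid assignment of the full data set
    return np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
```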

  20. Discovering shared segments on the migration route of the bar-headed goose by time-based plane-sweeping trajectory clustering

    USGS Publications Warehouse

    Luo, Ze; Baoping, Yan; Takekawa, John Y.; Prosser, Diann J.

    2012-01-01

    We propose a new method to help ornithologists and ecologists discover shared segments on the migratory pathway of the bar-headed goose by time-based plane-sweeping trajectory clustering. We present a density-based, time-parameterized line-segment clustering algorithm, which extends traditional clustering algorithms in both the temporal and spatial dimensions. We present a time-based plane-sweeping trajectory clustering algorithm to reveal the dynamic evolution of spatial-temporal object clusters and discover common motion patterns of bar-headed geese in the process of migration. Experiments are performed on GPS-based satellite telemetry data from bar-headed geese, and the results demonstrate that our algorithms can correctly discover shared segments of the bar-headed goose migratory pathway. We also present findings on the migratory behavior of bar-headed geese determined from this new analytical approach.
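
The plane-sweep idea can be sketched by grouping object positions at each time index; single-linkage grouping within a distance eps is used below as a simple stand-in for the paper's density-based line-segment clustering.

```python
import numpy as np

def sweep_clusters(trajectories, eps):
    """Time-based plane sweep: at every time index, group the moving
    objects whose positions lie within eps of one another (transitively),
    revealing how clusters form and dissolve over the migration.
    trajectories: array (n_objects, n_times, 2). Returns per-time labels."""
    n_obj, n_t, _ = trajectories.shape
    labels = np.zeros((n_t, n_obj), dtype=int)
    for t in range(n_t):
        pts = trajectories[:, t]
        d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
        adj = d <= eps
        lab = -np.ones(n_obj, dtype=int)
        cur = 0
        for i in range(n_obj):
            if lab[i] < 0:           # BFS over the eps-adjacency graph
                lab[i] = cur
                stack = [i]
                while stack:
                    j = stack.pop()
                    for m in np.where(adj[j] & (lab < 0))[0]:
                        lab[m] = cur
                        stack.append(m)
                cur += 1
        labels[t] = lab
    return labels
```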

  1. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R² values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  2. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R² values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. 
Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  3. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    PubMed

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals was observed. The F1 score was shown to increase with increasing tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. 
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
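
The tolerance-window evaluation described above can be sketched as greedy one-to-one matching of detected and reference boundaries; the specific matching rule here is an assumption of the sketch, not the paper's exact procedure.

```python
def f1_with_tolerance(ref, det, tol):
    """Score detected segmentation boundaries against reference boundaries:
    a detection is a true positive if it falls within +/- tol of a not-yet-
    matched reference boundary (greedy one-to-one matching)."""
    ref, det = sorted(ref), sorted(det)
    used = [False] * len(ref)
    tp = 0
    for d in det:
        for i, r in enumerate(ref):
            if not used[i] and abs(d - r) <= tol:
                used[i] = True
                tp += 1
                break
    sens = tp / len(ref) if ref else 0.0   # sensitivity (recall)
    ppv = tp / len(det) if det else 0.0    # positive predictive value
    return 2 * sens * ppv / (sens + ppv) if sens + ppv else 0.0
```

Widening the tolerance window can only add matches, which is why the score increases with window size, as the study observes.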

  4. Seismotectonic analysis of the Andaman Sea region from high-precision teleseismic double-difference locations

    NASA Astrophysics Data System (ADS)

    Diehl, T.; Waldhauser, F.; Schaff, D. P.; Engdahl, E. R.

    2009-12-01

    The Andaman Sea region in the Northeast Indian Ocean is characterized by a complex extensional back-arc basin, which connects the Sumatra Fault System in the south with the Sagaing fault in the north. The Andaman back-arc is generally classified as a convergent pull-apart basin (leaky-transform) rather than a typical extensional back-arc basin. Oblique subduction of the Indian-Australian plate results in strike-slip faulting parallel to the trench axis, formation of a sliver plate and back-arc pull-apart extension. Active spreading occurs predominantly along a NE-SW oriented ridge-segment bisecting the Central Andaman basin at the SW end of the back-arc. Existing models of the Andaman back-arc system are mainly derived from bathymetry maps, seismic surveys, magnetic anomalies, and seismotectonic analysis. The latter are typically based on global bulletin locations provided by the NEIC or ISC. These bulletin locations, however, usually have low spatial resolution (especially in focal depth), which hampers a detailed seismotectonic interpretation. In order to better study the seismotectonic processes of the Andaman Sea region, specifically its role during the recent 2004 M9.3 earthquake, we improve on existing hypocenter locations by applying the double-difference algorithm to regional and teleseismic data. Differential times used for the relocation process are computed from phase picks listed in the ISC and NEIC bulletins, and from cross-correlating regional and teleseismic waveforms. EHB hypocenter solutions are used as reference locations to improve the initial locations in the ISC/NEIC catalog during double-difference processing. The final DD solutions show significantly reduced scatter in event locations along the back arc ridge. 
The various observed focal mechanisms tend to cluster by type and, in addition, the structure and orientation of individual clusters are generally consistent with available CMT solutions for individual events and reveal the detailed distribution of predominantly normal, strike slip, and dip slip faulting associated with the extensional tectonics that dominate the Andaman Sea. The refined plate boundary, together with recent high-resolution bathymetry and seismic-survey data in the Central Andaman basin, are interpreted with respect to the dynamics and evolution of the back arc system. A spatio-temporal analysis of the two largest swarms (NE of Nicobar Islands in January 2005 and in the Central basin in March 2006) shows that events align along NE-SW oriented structures, with events migrating in time from NE to SW in both swarms. The SW propagation of seismogenic faults may indicate magmatic intrusion or spreading events that originate from sources that locate northeast of the swarms. The detailed analysis of the geometry and temporal evolution of these swarms allow for improved estimates of the regional stress field of the back-arc system and a better understanding of its dynamic behaviour following the December 2004 Mw 9.3 earthquake.
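
The double-difference principle, that absolute origin terms cancel when differential travel times between event pairs are matched, can be sketched in a one-dimensional toy with constant velocity; note that in this toy geometry only the relative event positions are constrained.

```python
import numpy as np
from scipy.optimize import least_squares

def dd_relocate(x0, stations, t_obs, v=1.0):
    """Toy 1D double-difference relocation: adjust event positions so that
    differential travel times between every event pair, at every station,
    match the observed differentials (absolute origin terms cancel out).
    x0: initial event positions; t_obs: observed travel times, shape
    (n_events, n_stations); v: assumed uniform velocity."""
    n = len(x0)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def resid(x):
        t = np.abs(x[:, None] - stations[None, :]) / v  # predicted times
        return np.concatenate([(t[i] - t[j]) - (t_obs[i] - t_obs[j])
                               for i, j in pairs])

    return least_squares(resid, x0).x
```

Starting from perturbed positions, the relative spacing of the events is recovered even though an overall translation remains undetermined.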

  5. ARC-1989-A89-7004

    NASA Image and Video Library

    1989-08-19

    Range: 8.6 million kilometers (5.3 million miles). Voyager took this 61-second exposure of Neptune through the clear filter of its narrow-angle camera. The Voyager cameras were programmed to make a systematic search for faint ring arcs and new satellites. The bright upper corner of the image is due to a residual image from a previous long exposure of the planet. The portion of the arc visible here is approximately 35 degrees in longitudinal extent, making it approximately 38,000 kilometers (24,000 miles) in length, and is broken up into three segments separated from each other by approximately 5 degrees. The trailing edge, at the upper right, ends abruptly, while the leading edge fades into the background more gradually. This arc orbits very close to one of the newly discovered Neptune satellites, 1989N4. Close-up studies of this ring arc will be carried out in the coming days, giving higher spatial resolution at different lighting angles. (JPL Ref: P-34617)

  6. Crustal strain partitioning and the associated earthquake hazard in the eastern Sunda-Banda Arc

    NASA Astrophysics Data System (ADS)

    Koulali, A.; Susilo, S.; McClusky, S.; Meilano, I.; Cummins, P.; Tregoning, P.; Lister, G.; Efendi, J.; Syafi'i, M. A.

    2016-03-01

    We use Global Positioning System (GPS) measurements of surface deformation to show that the convergence between the Australian Plate and Sunda Block in eastern Indonesia is partitioned between the megathrust and a continuous zone of back-arc thrusting extending 2000 km from east Java to north of Timor. Although deformation in this back-arc region has been reported previously, its extent and the mechanism of convergence partitioning have hitherto been conjectural. GPS observations establish that partitioning occurs via a combination of anticlockwise rotation of an arc segment called the Sumba Block, and left-lateral movement along a major NE-SW strike-slip fault west of Timor. We also identify a westward extension of the back-arc thrust for 300 km onshore into East Java, accommodating slip of ˜6 mm/yr. These results highlight a major new seismic threat for East Java and draw attention to the pronounced seismic and tsunami threat to Bali, Lombok, Nusa Tenggara, and other coasts along the Flores Sea.

  7. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immunofluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures, and a systematic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.

  8. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm operates at two levels: pixel level and object level. At the pixel level, segmentation is formulated as a maximum a posteriori probability (MAP) estimation problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation improves segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined from a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4 video coding.
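    The pixel-level MAP formulation above can be sketched as a per-pixel decision between a background and a foreground class model. The Gaussian intensity likelihoods, their parameters, and the prior below are illustrative assumptions, not the paper's actual models.

```python
import math

# Sketch of a per-pixel MAP decision: pick the class that maximizes
# prior * likelihood. All model parameters here are illustrative.

def gaussian(x, mu, sigma):
    """Gaussian likelihood of intensity x under N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def map_label(intensity, bg=(40.0, 10.0), fg=(170.0, 25.0), p_bg=0.7):
    """Classify one pixel as 'background' or 'foreground'."""
    post_bg = p_bg * gaussian(intensity, *bg)
    post_fg = (1.0 - p_bg) * gaussian(intensity, *fg)
    return "background" if post_bg >= post_fg else "foreground"

print(map_label(35))   # near the background mean
print(map_label(160))  # near the foreground mean
```

    In the actual system the class models are built and self-tuned on the fly, and the pixel decisions are then refined by the object-level stage.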

  9. New algorithm for controlling electric arc furnaces using their vibrational and acoustic characteristics

    NASA Astrophysics Data System (ADS)

    Cherednichenko, V. S.; Bikeev, R. A.; Serikov, V. A.; Rechkalov, A. V.; Cherednichenko, A. V.

    2016-12-01

    The processes occurring in arc discharges are analyzed as the sources of acoustic radiation in an electric arc furnace (EAF). Acoustic vibrations are shown to transform into mechanical vibrations in the furnace working space. The shielding of the acoustic energy fluxes onto water-cooled wall panels by the charge is experimentally studied. It is shown that the rate of charge melting and the depth of submergence of arc discharges in the slag and metal melt can be monitored by measuring the vibrational characteristics of furnaces and using them in a universal industrial process-control system, which was developed for EAFs.

  10. A system for routing arbitrary directed graphs on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1987-01-01

    There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on a SIMD machine model that is bit-serial and uses only nearest-neighbor connections for communication. Each vertex of the graph is assigned to a processor in the machine. Algorithms are given that implement movement of data along the arcs of the graph. This architecture and these algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as parallel traversal.
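    The parallel arc traversal described above can be simulated sequentially as one synchronous round in which every vertex sends its value along all outgoing arcs at once, and each vertex combines what it receives. The graph, values, and combining function below are illustrative, not from the paper.

```python
# Sketch: one synchronous "traverse all arcs in parallel" step,
# simulated sequentially. Graph and vertex values are illustrative.

def parallel_arc_step(arcs, values, combine=sum):
    """arcs: list of (src, dst) pairs; values: dict vertex -> number.
    Returns each vertex's new value after one synchronous step;
    vertices receiving nothing keep their old value."""
    inbox = {v: [] for v in values}
    for src, dst in arcs:              # conceptually all arcs at once
        inbox[dst].append(values[src])
    return {v: combine(msgs) if msgs else values[v] for v, msgs in inbox.items()}

arcs = [("a", "b"), ("a", "c"), ("b", "c")]
values = {"a": 1, "b": 2, "c": 0}
after = parallel_arc_step(arcs, values)
print(after)  # {'a': 1, 'b': 1, 'c': 3}
```

    On the SIMD machine each vertex-to-processor assignment and the routing of these messages through nearest-neighbor links is what takes the O(T) time quoted above.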

  11. Comments on Samal and Henderson: Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swain, M.J.

    Samal and Henderson claim that any parallel algorithm for enforcing arc consistency in the worst case must have Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. The authors argue that Samal and Henderson's argument makes assumptions about how processors are used, and give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^(na)) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were to be proven, it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polylogarithmic time unless P = NC.
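    For context, the sequential arc-consistency enforcement whose parallel complexity is at issue can be sketched with the classic AC-3 algorithm: domains are pruned until every value has support along every arc. The variables, domains, and constraint below are illustrative.

```python
from collections import deque

# Sketch of sequential AC-3 arc-consistency enforcement.
# Variables, domains, and the x < y constraint are illustrative.

def ac3(domains, constraints):
    """domains: var -> set of labels; constraints: (x, y) -> predicate
    allowed(a, b), true when x=a, y=b is permitted."""
    queue = deque(constraints)
    incoming = {}
    for (x, y) in constraints:
        incoming.setdefault(y, []).append((x, y))
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        pruned = {a for a in domains[x]
                  if not any(allowed(a, b) for b in domains[y])}
        if pruned:
            domains[x] -= pruned
            queue.extend(incoming.get(x, []))  # revise arcs pointing at x
    return domains

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b,   # constraint x < y, seen from x
        ("y", "x"): lambda a, b: b < a}   # the same constraint, seen from y
out = ac3(doms, cons)
print(out)
```

    The debate above concerns how much of this pruning loop can be collapsed into parallel steps, and at what processor cost.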

  12. Automatic characterization and segmentation of human skin using three-dimensional optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hori, Yasuaki; Yasuno, Yoshiaki; Sakai, Shingo; Matsumoto, Masayuki; Sugawara, Tomoko; Madjarova, Violeta; Yamanari, Masahiro; Makita, Shuichi; Yasui, Takeshi; Araki, Tsutomu; Itoh, Masahide; Yatagai, Toyohiko

    2006-03-01

    A set of fully automated algorithms that is specialized for analyzing a three-dimensional optical coherence tomography (OCT) volume of human skin is reported. The algorithm set first determines the skin surface of the OCT volume, and a depth-oriented algorithm provides the mean epidermal thickness, distribution map of the epidermis, and a segmented volume of the epidermis. Subsequently, an en face shadowgram is produced by an algorithm to visualize the infundibula in the skin with high contrast. The population and occupation ratio of the infundibula are provided by a histogram-based thresholding algorithm and a distance mapping algorithm. En face OCT slices at constant depths from the sample surface are extracted, and the histogram-based thresholding algorithm is again applied to these slices, yielding a three-dimensional segmented volume of the infundibula. The dermal attenuation coefficient is also calculated from the OCT volume in order to evaluate the skin texture. The algorithm set examines swept-source OCT volumes of the skins of several volunteers, and the results show the high stability, portability and reproducibility of the algorithm.
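    A histogram-based thresholding step of the kind used above to separate infundibula from surrounding tissue can be sketched with Otsu's method, which picks the threshold maximizing between-class variance. The record does not state that Otsu's method is the one used; it and the intensity values below are assumptions for illustration.

```python
# Sketch: Otsu-style histogram thresholding on a list of intensities.
# The choice of Otsu's criterion and the sample values are illustrative.

def otsu_threshold(values):
    """Return the threshold t maximizing between-class variance;
    pixels with value <= t form one class, the rest the other."""
    n = len(values)
    best_t, best_var = None, -1.0
    for t in sorted(set(values))[:-1]:       # exclude max so both classes are non-empty
        g0 = [v for v in values if v <= t]
        g1 = [v for v in values if v > t]
        w0, w1 = len(g0) / n, len(g1) / n
        m0, m1 = sum(g0) / len(g0), sum(g1) / len(g1)
        var = w0 * w1 * (m0 - m1) ** 2       # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

vals = [10, 12, 11, 13, 90, 95, 92, 88]      # dark structures vs bright tissue
t = otsu_threshold(vals)
print(t)  # 13
```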

  13. Global Linking of Cell Tracks Using the Viterbi Algorithm

    PubMed Central

    Jaldén, Joakim; Gilbert, Penney M.; Blau, Helen M.

    2016-01-01

    Automated tracking of living cells in microscopy image sequences is an important and challenging problem. With this application in mind, we propose a global track linking algorithm, which links cell outlines generated by a segmentation algorithm into tracks. The algorithm adds tracks to the image sequence one at a time, in a way which uses information from the complete image sequence in every linking decision. This is achieved by finding the tracks which give the largest possible increases to a probabilistically motivated scoring function, using the Viterbi algorithm. We also present a novel way to alter previously created tracks when new tracks are created, thus mitigating the effects of error propagation. The algorithm can handle mitosis, apoptosis, and migration in and out of the imaged area, and can also deal with false positives, missed detections, and clusters of jointly segmented cells. The algorithm performance is demonstrated on two challenging datasets acquired using bright-field microscopy, but in principle, the algorithm can be used with any cell type and any imaging technique, presuming there is a suitable segmentation algorithm. PMID:25415983
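    The core of Viterbi-based track linking can be sketched for the simplest case: a single track through frames in which every cell is detected, found by dynamic programming over link scores. The distance-based scoring function below is an illustrative stand-in for the paper's probabilistically motivated score, and the handling of mitosis, apoptosis, and missed detections is omitted.

```python
# Sketch: find the single highest-scoring track through per-frame
# detections with the Viterbi algorithm. Scores are illustrative.

def best_track(frames, link_score):
    """frames: list of per-frame detection lists.
    Returns (score, track), track[t] = chosen detection index in frame t."""
    scores = [0.0] * len(frames[0])       # best score of a track ending here
    back = []
    for t in range(1, len(frames)):
        new_scores, pointers = [], []
        for j in range(len(frames[t])):
            s, i = max((scores[i] + link_score(frames[t - 1][i], frames[t][j]), i)
                       for i in range(len(frames[t - 1])))
            new_scores.append(s)
            pointers.append(i)
        scores = new_scores
        back.append(pointers)
    j = max(range(len(scores)), key=scores.__getitem__)
    best, track = scores[j], [j]
    for pointers in reversed(back):        # backtrace the optimal path
        j = pointers[j]
        track.append(j)
    return best, track[::-1]

# detections are 1-D positions; closer links score higher
frames = [[0.0, 5.0], [0.2, 4.8], [0.1, 5.1]]
score, track = best_track(frames, lambda a, b: -abs(a - b))
print(track)  # [0, 0, 0]
```

    The published algorithm repeats a search like this to add tracks one at a time, altering earlier tracks when a new one demands it.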

  14. Presurgical nasoalveolar molding for cleft lip and palate: the application of digitally designed molds.

    PubMed

    Shen, Congcong; Yao, Caroline A; Magee, William; Chai, Gang; Zhang, Yan

    2015-06-01

    The authors present a novel nasoalveolar molding protocol by prefabricating sets of nasoalveolar molding appliances using three-dimensional technology. Prospectively, 17 infants with unilateral complete cleft lip and palate underwent the authors' protocol before primary cheiloplasty. An initial nasoalveolar molding appliance was created based on the patient's first and only in-person maxillary cast, produced from a traditional intraoral dental impression. Thereafter, each patient's molding course was simulated using computer software that aimed to narrow the alveolar gap by 1 mm each week by rotating the greater alveolar segment. A maxillary cast of each predicted molding stage was created using three-dimensional printing. Subsequent appliances were constructed in advance, based on the series of computer-generated casts. Each patient had a total of three clinic visits spaced 1 month apart. Anthropometric measurements and bony segment volumes were recorded before and after treatment. Alveolar cleft widths narrowed significantly (p < 0.01), soft-tissue volume of each segment expanded (p < 0.01), and the arc of the alveolus became more contiguous across the cleft (p < 0.01). One patient required a new appliance at the second visit because of bleeding and discomfort. Eleven patients had mucosal irritation and two experienced minor mucosal ulceration. Three-dimensional technology can precisely represent anatomic structures in pediatric clefts. Results from the authors' algorithm are equivalent to those of traditional nasoalveolar molding therapies; however, the number of required clinic visits and appliance adjustments decreased. As three-dimensional technology costs decrease, multidisciplinary teams may design customized nasoalveolar molding treatment with improved efficiency and less burden to medical staff, patients, and families. Therapeutic, IV.

  15. Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics.

    PubMed

    Jurling, Alden S; Fienup, James R

    2014-03-01

    Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.

  16. Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts.

    PubMed

    García-Lorenzo, Daniel; Lecoeur, Jeremy; Arnold, Douglas L; Collins, D Louis; Barillot, Christian

    2009-01-01

    Graph Cuts have been shown as a powerful interactive segmentation technique in several medical domains. We propose to automate the Graph Cuts in order to automatically segment Multiple Sclerosis (MS) lesions in MRI. We replace the manual interaction with a robust EM-based approach in order to discriminate between MS lesions and the Normal Appearing Brain Tissues (NABT). Evaluation is performed in synthetic and real images showing good agreement between the automatic segmentation and the target segmentation. We compare our algorithm with the state of the art techniques and with several manual segmentations. An advantage of our algorithm over previously published ones is the possibility to semi-automatically improve the segmentation due to the Graph Cuts interactive feature.

  17. Regional Variations in Aleutian Magma Composition

    NASA Astrophysics Data System (ADS)

    Nye, C. J.

    2008-12-01

    This study is based on sample data spanning 20 years from USGS, UAF, and DGGS geologists too numerous to list here. The 2900-km long Aleutian arc contains more than 50 active and over 90 Holocene volcanoes. The arc is built on oceanic Bering-sea floor west of 166W and quasi-continental crust east of 166W. Over the past twenty years the Alaska Volcano Observatory has conducted baseline geologic mapping (or remapping) and volcanic-hazards studies of selected volcanoes - generally those targeted for geophysical monitoring. This marks the largest sustained effort to study Aleutian volcanoes in half a century; AVO scientists have logged as many as 700 person-days per field season. Geologic studies have resulted in comprehensive suites of stratigraphically constrained samples and more than 3500 new whole-rock analyses by XRF and ICP/MS from more than 30 centers, more than doubling the number of previously published analyses. Examination of the data for regional and inter-volcano variations yields a number of first-order observations. (1) The arc can be broadly divided into an eastern segment (east of 158W) of calcalkaline andesite stratocones; a central segment dominated by large, mafic, tholeiitic shield volcanoes and stratocones; and a western segment (west of 175W) of smaller volcanoes with variable morphologies and generally more andesitic compositions. (2) There are NO significant first-order compositional signals that coincide with the transition from oceanic to continental basement. (3) Individual volcanoes are often subtly distinct from neighbors, and those distinctions persist for the lifetime of the centers. (4) All centers, notably including the large basaltic centers of the central arc, are strongly affected by open-system processes significantly more complicated than mixing among sibling-fractionates of parental mafic magmas. (5) Petrogenetic pathways are long-lived; individual batches of magma are (generally) not. (6) Calcalkaline andesites have dramatically lower REE and HFSE, yet higher Cr and Ni than tholeiitic andesites, suggesting that it is overly simplistic to consider calcalkaline andesites to be simple fractionates of basalts.

  18. Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Dehmeshki, Jamshid

    2014-04-01

    Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multiresolution algorithm is proposed, along with scaled versions using Gaussian filtering and wavelet analysis, that extends the expectation maximization (EM) algorithm. It is found to be less sensitive to noise and to produce more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 sets of CT images of the human brain and compared with other works. The segmentation results show that the proposed work achieves more promising results, and the results have been verified by doctors.
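    The EM machinery that the multiresolution scheme extends can be sketched for the simplest case, a 1-D two-component Gaussian mixture fit by alternating E and M steps. The data and initialization below are illustrative, not CT intensities from the study.

```python
import math

# Sketch: EM for a two-component 1-D Gaussian mixture.
# Data values and the crude min/max initialization are illustrative.

def em_2gauss(data, iters=50):
    mu = [min(data), max(data)]
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                 / (sigma[k] * math.sqrt(2 * math.pi)) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate means, deviations, and mixing weights
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)   # guard against collapse
            pi[k] = nk / len(data)
    return mu, sigma, pi

data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]    # two intensity clusters
mu, sigma, pi = em_2gauss(data)
print(mu)
```

    The multiresolution extension runs updates like these across Gaussian- and wavelet-scaled versions of the image rather than on raw intensities alone.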

  19. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
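    The bookkeeping behind a one-meter segment altitude distribution can be sketched by walking a channel polyline in 1 m steps and recording the altitude of each step's midpoint. The coordinates below are hypothetical, not NALMA-mapped VHF sources.

```python
import math

# Sketch: split a lightning-channel polyline into 1 m segments and
# collect each segment's midpoint altitude. Coordinates are illustrative.

def segment_altitudes(points, step=1.0):
    """points: polyline vertices (x, y, z) in metres; returns the altitude
    of the midpoint of every step-long piece of the channel."""
    alts = []
    for p0, p1 in zip(points, points[1:]):
        length = math.dist(p0, p1)
        for i in range(int(length // step)):
            f = (i + 0.5) * step / length          # fractional midpoint position
            alts.append(p0[2] + f * (p1[2] - p0[2]))
    return alts

channel = [(0, 0, 5000), (0, 0, 5004), (3, 4, 5004)]  # 4 m vertical leg, 5 m horizontal leg
alts = segment_altitudes(channel)
print(len(alts))  # 9 one-metre segments
```

    A histogram of altitudes collected this way over many flashes gives the distribution used to weight lightning NOx production by height.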

  20. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
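    The dynamic-programming scheme outlined above can be sketched as follows. The measure and difference functions here are simplified illustrations (the union of a segment's item sets as its measure, and memberships missing from each time point as the difference), not necessarily the functions defined in the paper.

```python
# Sketch: optimal k-way segmentation of an item-set time series by
# dynamic programming. Measure/difference functions are illustrative.

def segment_cost(itemsets):
    """Illustrative segment difference: items of the segment's union
    missing from each time point's item set."""
    union = set().union(*itemsets)
    return sum(len(union - s) for s in itemsets)

def optimal_segmentation(series, k):
    n = len(series)
    INF = float("inf")
    best = [[INF] * (n + 1) for _ in range(k + 1)]  # best[j][t]: j segments over series[:t]
    best[0][0] = 0
    choice = [[0] * (n + 1) for _ in range(k + 1)]
    for j in range(1, k + 1):
        for t in range(j, n + 1):
            for s in range(j - 1, t):
                c = best[j - 1][s] + segment_cost(series[s:t])
                if c < best[j][t]:
                    best[j][t], choice[j][t] = c, s
    bounds, t = [], n                               # recover segment end points
    for j in range(k, 0, -1):
        bounds.append(t)
        t = choice[j][t]
    return best[k][n], bounds[::-1]

series = [{"a"}, {"a", "b"}, {"x"}, {"x", "y"}]
cost, bounds = optimal_segmentation(series, 2)
print(cost, bounds)  # 2 [2, 4]
```

    Note how the optimal boundary falls between the "a/b" and "x/y" regimes, which is exactly the temporal-content shift the segmentation is meant to expose.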

  1. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  2. Improving graph-based OCT segmentation for severe pathology in retinitis pigmentosa patients

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Carass, Aaron; Bittner, Ava K.; Ying, Howard S.; Prince, Jerry L.

    2017-03-01

    Three dimensional segmentation of macular optical coherence tomography (OCT) data of subjects with retinitis pigmentosa (RP) is a challenging problem due to the disappearance of the photoreceptor layers, which causes algorithms developed for segmentation of healthy data to perform poorly on RP patients. In this work, we present enhancements to a previously developed graph-based OCT segmentation pipeline to enable processing of RP data. The algorithm segments eight retinal layers in RP data by relaxing constraints on the thickness and smoothness of each layer learned from healthy data. Following from prior work, a random forest classifier is first trained on the RP data to estimate boundary probabilities, which are used by a graph search algorithm to find the optimal set of nine surfaces that fit the data. Due to the intensity disparity between normal layers of healthy controls and layers in various stages of degeneration in RP patients, an additional intensity normalization step is introduced. Leave-one-out validation on data acquired from nine subjects showed an average overall boundary error of 4.22 μm as compared to 6.02 μm using the original algorithm.

  3. Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting

    NASA Astrophysics Data System (ADS)

    Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang

    2016-12-01

    Vessel enhancement and segmentation plays a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which focuses on addressing the vessel distortion issue of the vessel enhancement diffusion (VED) method. Furthermore, the enhanced results are easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Second, the stick tensor voting algorithm is employed to estimate the correct direction along the vessel. The estimated direction along the vessel is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located at the vessel wall consistent with the directions of vessels and enhances the tubular features of vessels. Finally, vessels can be extracted from the enhanced image by applying a level-set segmentation method. A number of experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory vessel segmentations.

  4. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    PubMed

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
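    The idea of separately quantifying under- and overestimation of a cell object can be sketched on binary masks represented as pixel sets. This is a simplified stand-in for intuition only, not the published bivariate similarity index, and the masks below are synthetic.

```python
# Sketch: separate under- and overestimation fractions for one segmented
# cell vs. ground truth. Masks are illustrative pixel-coordinate sets.

def over_under(truth, estimate):
    """truth, estimate: sets of pixel coordinates of one cell object.
    Returns (fraction of truth missed, fraction of estimate that is excess)."""
    missed = len(truth - estimate) / len(truth)      # underestimation
    excess = len(estimate - truth) / len(estimate)   # overestimation
    return missed, excess

truth = {(x, y) for x in range(10) for y in range(10)}    # 10x10 ground-truth cell
estimate = {(x, y) for x in range(8) for y in range(12)}  # 8x12 segmented object
missed, excess = over_under(truth, estimate)
print(missed, excess)
```

    Reporting the two directions separately, rather than a single overlap score such as Dice, is what lets a comparison say whether an algorithm tends to shrink or bloat cell objects.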

  5. An allotment planning concept and related computer software for planning the fixed satellite service at the 1988 space WARC

    NASA Technical Reports Server (NTRS)

    Miller, Edward F.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Whyte, Wayne A., Jr.

    1987-01-01

    The authors describe a two-phase approach to allotment planning suitable for use in planning the fixed satellite service at the 1988 Space World Administrative Radio Conference (ORB-88). The two phases are (1) the identification of predetermined geostationary arc segments common to groups of administrations and (2) the use of a synthesis program to identify example scenarios of space station placements. The planning approach is described in detail and is related to the objectives of the conference. Computer software has been developed to implement the concepts, and the logic and rationale for identifying predetermined arc segments are discussed. Example scenarios are evaluated to give guidance in the selection of the technical characteristics of space communications systems to be planned. The allotment planning concept described guarantees equitable access to the geostationary orbit, provides flexibility in implementation, and reduces the need for coordination among administrations.

  6. An allotment planning concept and related computer software for planning the fixed satellite service at the 1988 space WARC

    NASA Technical Reports Server (NTRS)

    Miller, Edward F.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Whyte, Wayne A., Jr.; Zuzek, John E.

    1987-01-01

    Described is a two-phase approach to allotment planning suitable for use in establishing the fixed satellite service at the 1988 Space World Administrative Radio Conference (ORB-88). The two phases are (1) the identification of predetermined geostationary arc segments common to groups of administrations, and (2) the use of a synthesis program to identify example scenarios of space station placements. The planning approach is described in detail and is related to the objectives of the conference. Computer software has been developed to implement the concepts, and a complete discussion of the logic and rationale for identifying predetermined arc segments is given. Example scenarios are evaluated to give guidance in the selection of the technical characteristics of space communications systems to be planned. The allotment planning concept described guarantees in practice equitable access to the geostationary orbit, provides flexibility in implementation, and reduces the need for coordination among administrations.

  7. A research of road centerline extraction algorithm from high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to its advantages such as short revisit period, large scale and rich information. Meanwhile, road extraction is an important field in the applications of high resolution remote sensing images. An intelligent and automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is shown to be effective for noisy image segmentation. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove joined regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
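    The standard FCM iteration that SFCM builds on can be sketched in 1-D: a membership update followed by a weighted centroid update. The spatial-information term that distinguishes SFCM is omitted here, and the intensity data are illustrative.

```python
# Sketch: one plain fuzzy c-means iteration (membership update, then
# centroids). SFCM adds a spatial term on top of this; omitted here.

def fcm_step(data, centers, m=2.0):
    u = []
    for x in data:
        d = [abs(x - c) + 1e-12 for c in centers]   # avoid division by zero
        u.append([1.0 / sum((d[k] / d[j]) ** (2 / (m - 1)) for j in range(len(centers)))
                  for k in range(len(centers))])
    new_centers = []
    for k in range(len(centers)):
        w = [ui[k] ** m for ui in u]                # fuzzified membership weights
        new_centers.append(sum(wi * x for wi, x in zip(w, data)) / sum(w))
    return u, new_centers

data = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]   # two illustrative intensity clusters
centers = [0.0, 1.0]
for _ in range(20):
    u, centers = fcm_step(data, centers)
print(centers)
```

    SFCM modifies the membership update so that a pixel's memberships are also pulled toward those of its spatial neighbors, which is what suppresses isolated noisy pixels.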

  8. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    NASA Astrophysics Data System (ADS)

    Leibfarth, S.; Eckert, F.; Welz, S.; Siegel, C.; Schmidt, H.; Schwenzer, N.; Zips, D.; Thorwarth, D.

    2015-07-01

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only.
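    The Dice similarity index used for validation in this record can be computed as follows (a minimal sketch over voxel-coordinate sets, not the authors' implementation):

    ```python
    def dice_index(a, b):
        """Dice similarity between two binary masks given as collections of
        voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
        a, b = set(a), set(b)
        if not a and not b:
            return 1.0  # two empty delineations agree perfectly
        return 2 * len(a & b) / (len(a) + len(b))
    ```

    For example, `dice_index([(0, 0), (0, 1)], [(0, 1), (0, 2)])` → `0.5`.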

  9. The development of a 3D mesoscopic model of metallic foam based on an improved watershed algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Jinhua; Zhang, Yadong; Wang, Guikun; Fang, Qin

    2018-06-01

    The watershed algorithm has been widely used in x-ray computed tomography (XCT) image segmentation. It provides a transformation defined on a grayscale image and finds the lines that separate adjacent regions. However, distortion occurs when developing a mesoscopic model of metallic foam based on XCT image data: the cells are oversegmented in some cases when the traditional watershed algorithm is used. The improved watershed algorithm presented in this paper avoids oversegmentation and is composed of three steps. Firstly, it finds all of the connected cells and identifies the junctions of the corresponding cell walls. Secondly, image segmentation is conducted to separate the adjacent cells, generating the missing cell walls between them; optimization is then performed on the segmented image. Thirdly, the improved algorithm is validated by comparison with images of the metallic foam, which shows that it avoids image segmentation distortion. A mesoscopic model of metallic foam is thus formed based on the improved algorithm, and the mesoscopic characteristics of the metallic foam, such as cell size, volume and shape, are identified and analyzed.
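    The first step, finding all connected cells, can be illustrated with a plain BFS flood fill over a binary slice (a sketch only; the paper works on full 3-D XCT volumes and also locates cell-wall junctions):

    ```python
    from collections import deque

    def connected_cells(mask):
        """Find 4-connected foreground components in a binary 2-D slice.

        Returns a list of components, each a list of (row, col) pixels;
        each component would then be examined for missing walls between
        adjacent cells in the improved-watershed pipeline.
        """
        h, w = len(mask), len(mask[0])
        seen = [[False] * w for _ in range(h)]
        comps = []
        for r in range(h):
            for c in range(w):
                if mask[r][c] and not seen[r][c]:
                    q, comp = deque([(r, c)]), []
                    seen[r][c] = True
                    while q:
                        y, x = q.popleft()
                        comp.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                    comps.append(comp)
        return comps
    ```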

  10. An approach to multiobjective optimization of rotational therapy. II. Pareto optimal surfaces and linear combinations of modulated blocked arcs for a prostate geometry.

    PubMed

    Pardo-Montero, Juan; Fenwick, John D

    2010-06-01

    The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. 
The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: One where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter one perhaps being preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.
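    The distance-to-front check can be sketched as below: filter the non-dominated objective vectors, then take the Euclidean distance of a plan's objective vector to the nearest front point. This is illustrative only; the paper's measure is a relative variant with its own normalization.

    ```python
    import math

    def pareto_front(points):
        """Non-dominated subset of objective vectors (all minimized):
        a point survives if no other point is <= it in every objective."""
        return [p for p in points
                if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                           for q in points)]

    def distance_to_front(plan, front):
        """Euclidean distance from a plan's objective vector to the
        nearest sampled point of the Pareto front."""
        return min(math.dist(plan, p) for p in front)
    ```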

  11. Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.

    PubMed

    Hao, J T; Li, M L; Tang, F L

    2008-01-01

    Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to get a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterative Conditional Model (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm can obtain more satisfactory segmentation results and save more processing time than conventional approaches, simultaneously.
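    A minimal ICM iteration for binary segmentation (Gaussian data term plus an Ising smoothness penalty over 4-neighbours) looks like this; the localized observation model and the paper's acceleration are omitted, and the parameter values are illustrative:

    ```python
    def icm_binary(img, mu0, mu1, sigma=1.0, beta=1.0, iters=5):
        """Iterated Conditional Modes sketch: each pixel greedily takes the
        label minimizing a Gaussian data term plus beta per disagreeing
        4-neighbour, starting from a nearest-mean labeling."""
        h, w = len(img), len(img[0])
        labels = [[0 if abs(img[r][c] - mu0) < abs(img[r][c] - mu1) else 1
                   for c in range(w)] for r in range(h)]
        means = (mu0, mu1)
        for _ in range(iters):
            for r in range(h):
                for c in range(w):
                    best = None
                    for lab in (0, 1):
                        data = (img[r][c] - means[lab]) ** 2 / (2 * sigma ** 2)
                        smooth = sum(
                            beta
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < h and 0 <= c + dc < w
                            and labels[r + dr][c + dc] != lab)
                        e = data + smooth
                        if best is None or e < best[0]:
                            best = (e, lab)
                    labels[r][c] = best[1]
        return labels
    ```

    A bright outlier pixel inside a dark region starts with the wrong label but is flipped by the smoothness term on the first sweep.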

  12. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  13. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
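    The fixed-angular-increment structure that makes the panoramas cheap to build can be sketched as a direct angle-to-pixel binning; the 0.5° step sizes here are hypothetical, not taken from the paper:

    ```python
    import math

    def panorama_index(x, y, z, az_step_deg=0.5, el_step_deg=0.5):
        """Map a scanned 3-D point to a (row, col) cell of a panoramic
        image using fixed angular increments of a spherical scan."""
        az = math.degrees(math.atan2(y, x)) % 360.0        # azimuth 0..360
        el = math.degrees(math.atan2(z, math.hypot(x, y)))  # elevation -90..90
        col = int(az // az_step_deg)
        row = int((90.0 - el) // el_step_deg)               # top row = zenith
        return row, col
    ```

    Range, intensity, and color layers are then just per-cell values of this grid, ready for any 2-D segmentation algorithm.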

  14. A Pulse Coupled Neural Network Segmentation Algorithm for Reflectance Confocal Images of Epithelial Tissue

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131

  15. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    PubMed Central

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-01-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935
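    A constant false alarm rate detector of the cell-averaging kind can be sketched in one dimension as below; the paper's WD-CFAR additionally operates on wavelet-decomposed SAR images, which is omitted here, and the window sizes are illustrative:

    ```python
    def ca_cfar(signal, guard=1, train=3, scale=3.0):
        """Cell-averaging CFAR sketch: index i is declared a detection when
        signal[i] exceeds `scale` times the mean of the training cells
        around it, with `guard` cells on each side excluded."""
        hits = []
        n = len(signal)
        for i in range(n):
            cells = [signal[j]
                     for j in range(i - guard - train, i + guard + train + 1)
                     if 0 <= j < n and abs(j - i) > guard]
            if cells and signal[i] > scale * sum(cells) / len(cells):
                hits.append(i)
        return hits
    ```

    Because the threshold adapts to the local clutter estimate, the false alarm rate stays roughly constant as the background level varies.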

  16. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions.

    PubMed

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-07

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.

  17. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action

    PubMed Central

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter

    2018-01-01

    Introduction Computer assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. Material and methods In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw datasets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice score and the Hausdorff distance. Results Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p<0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Discussion Completely functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. 
Due to its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a greater amount of data are areas of future work. PMID:29746490
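    The Hausdorff distance used as an assessment parameter in this record can be computed as follows (a sketch over coordinate sets; voxel-based implementations additionally account for voxel spacing):

    ```python
    import math

    def hausdorff(a, b):
        """Symmetric Hausdorff distance between two point sets: the larger
        of the two directed worst-case nearest-neighbour distances."""
        def directed(p, q):
            return max(min(math.dist(u, v) for v in q) for u in p)
        return max(directed(a, b), directed(b, a))
    ```

    Unlike the Dice score, which averages over the whole volume, the Hausdorff distance is driven by the single worst-matched point, so it flags local outliers in a segmentation boundary.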

  18. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article based on the simplified PCNN. Some work has been done to enrich this model, such as imposing restriction terms on the inputs and improving the linking inputs and the internal activity of the PCNN. A self-adaptive setting method for the linking coefficient and the threshold decay time constant is also proposed. Finally, an image segmentation algorithm based on the proposed simplified PCNN model and particle swarm optimization (PSO) was applied to five test images. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.

  19. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data]

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of the affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values, and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
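    Under an affine signature transformation, a class signature given by a mean vector and covariance matrix maps as mean' = A·mean + b and cov' = A·cov·Aᵀ. The helper below is an illustrative sketch of applying such a transformation, not the MLEST estimator itself:

    ```python
    import numpy as np

    def transform_signature(mean, cov, A, b):
        """Map a class signature (mean, covariance) estimated on a training
        segment into the recognition segment's feature space via the
        affine transform x -> A @ x + b."""
        mean = np.asarray(mean, dtype=float)
        cov = np.asarray(cov, dtype=float)
        return A @ mean + b, A @ cov @ A.T
    ```

    MLEST's job is estimating A and b by maximum likelihood; once they are known, extending every training signature is just this one-line mapping.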

  20. A robustness test of the braided device foreshortening algorithm

    NASA Astrophysics Data System (ADS)

    Moyano, Raquel Kale; Fernandez, Hector; Macho, Juan M.; Blasco, Jordi; San Roman, Luis; Narata, Ana Paula; Larrabide, Ignacio

    2017-11-01

    Different computational methods have been recently proposed to simulate the virtual deployment of a braided stent inside a patient vasculature. Those methods are primarily based on the segmentation of the region of interest to obtain the local vessel morphology descriptors. The goal of this work is to evaluate the influence of the segmentation quality on the method named "Braided Device Foreshortening" (BDF). METHODS: We used the 3DRA images of 10 aneurysm patients (cases). The cases were segmented by applying a marching cubes algorithm with a broad range of thresholds in order to generate 10 surface models each. We selected a braided device and applied the BDF algorithm to each surface model. The range of computed flow-diverter lengths for each case was used to calculate the variability of the method with respect to the segmentation threshold values. RESULTS: An evaluation study over 10 clinical cases indicates that the final length of the deployed flow diverter in each vessel model is stable, yielding a maximum difference of 11.19% in vessel diameter and a maximum of 9.14% in simulated stent length across the threshold values. The average coefficient of variation was found to be 4.08%. CONCLUSION: A study evaluating how the segmentation threshold affects the simulated length of the deployed FD was presented. The segmentation algorithm used to segment intracranial aneurysm 3D angiography images introduces only small variation in the resulting stent simulation.
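    The coefficient of variation reported above is simply the standard deviation of the simulated lengths divided by their mean; a minimal sketch:

    ```python
    import statistics

    def coefficient_of_variation(lengths):
        """CV (%) of simulated stent lengths across segmentation
        thresholds: sample standard deviation over the mean."""
        return 100.0 * statistics.stdev(lengths) / statistics.mean(lengths)
    ```

    For example, lengths of 9, 10 and 11 mm give a CV of 10%.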

  1. Automated red blood cells extraction from holographic images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-10-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.

  2. Automated red blood cells extraction from holographic images using fully convolutional neural networks

    PubMed Central

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-01-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. PMID:29082078

  3. A fast 3D region growing approach for CT angiography applications

    NASA Astrophysics Data System (ADS)

    Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang

    2004-05-01

    Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires large memory usage, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions of each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes; the next cube can be estimated by checking isolated segmented regions on all six faces of the current local cube. This local non-recursive 3D region-growing algorithm is memory- and computation-efficient. Clinical testing of the algorithm on brain CTA shows that the technique can effectively remove the whole skull and most of the bones of the skull base, and reveal the cerebral vascular structures clearly.
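    The row-then-merge order of SymRG can be sketched for a single 2-D slice: label runs of foreground pixels per row, then union runs that overlap between adjacent rows. This is illustrative only; the paper extends the same merging across adjacent slices for the 3-D case.

    ```python
    def count_run_regions(img):
        """Count connected regions in a binary 2-D image by run-based,
        non-recursive region growing with union-find merging."""
        parent = {}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

        runs = []   # per row: list of (start_col, end_col, run_id)
        rid = 0
        for r, row in enumerate(img):
            row_runs, c = [], 0
            while c < len(row):
                if row[c]:
                    s = c
                    while c < len(row) and row[c]:
                        c += 1
                    parent[rid] = rid
                    row_runs.append((s, c - 1, rid))
                    rid += 1
                else:
                    c += 1
            if r > 0:  # merge runs overlapping a run of the previous row
                for s, e, i in row_runs:
                    for ps, pe, pi in runs[r - 1]:
                        if s <= pe and ps <= e:
                            union(pi, i)
            runs.append(row_runs)
        return len({find(i) for i in parent})
    ```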

  4. Anatomy-based algorithm for automatic segmentation of human diaphragm in noncontrast computed tomography images

    PubMed Central

    Karami, Elham; Wang, Yong; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2016-01-01

    Abstract. In-depth understanding of the diaphragm’s anatomy and physiology has been of great interest to the medical community, as it is the most important muscle of the respiratory system. While noncontrast four-dimensional (4-D) computed tomography (CT) imaging provides an interesting opportunity for effective acquisition of anatomical and/or functional information from a single modality, segmenting the diaphragm in such images is very challenging not only because of the diaphragm’s lack of image contrast with its surrounding organs but also because of respiration-induced motion artifacts in 4-D CT images. To account for such limitations, we present an automatic segmentation algorithm, which is based on a priori knowledge of diaphragm anatomy. The novelty of the algorithm lies in using the diaphragm’s easy-to-segment contacting organs—including the lungs, heart, aorta, and ribcage—to guide the diaphragm’s segmentation. Obtained results indicate that average mean distance to the closest point between diaphragms segmented using the proposed technique and corresponding manual segmentation is 2.55±0.39  mm, which is favorable. An important feature of the proposed technique is that it is the first algorithm to delineate the entire diaphragm. Such delineation facilitates applications, where the diaphragm boundary conditions are required such as biomechanical modeling for in-depth understanding of the diaphragm physiology. PMID:27921072
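    The mean distance to the closest point quoted as the validation metric can be computed as below (one-directional form over point samples of the two surfaces; symmetrized variants also exist):

    ```python
    import math

    def mean_closest_point_distance(a, b):
        """Mean distance from each point of surface sample A to its
        closest point on surface sample B."""
        return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    ```

    For example, points at distances 0 and 2 from the other surface give a mean of 1.0.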

  5. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction

    PubMed Central

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure. Using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu’s segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. On one side, the accuracy of the algorithm was tested on the three image sets independently. The final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. Finally, we compared the algorithm performance with manual markings done by the six ophthalmologists. The agreement was determined between the ophthalmologists as well as the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636
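    Otsu's thresholding, used here before kink detection on the vessel map, picks the grey level that maximizes the between-class variance of the histogram; a compact sketch for integer-valued images:

    ```python
    def otsu_threshold(values, bins=256):
        """Return the grey level t maximizing the between-class variance
        w0 * w1 * (mu0 - mu1)^2 of the split {<= t, > t}."""
        hist = [0] * bins
        for v in values:
            hist[v] += 1
        total = len(values)
        sum_all = sum(i * h for i, h in enumerate(hist))
        best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
        for t in range(bins):
            w0 += hist[t]          # pixels at or below t
            if w0 == 0:
                continue
            w1 = total - w0        # pixels above t
            if w1 == 0:
                break
            sum0 += t * hist[t]
            mu0 = sum0 / w0
            mu1 = (sum_all - sum0) / w1
            var = w0 * w1 * (mu0 - mu1) ** 2
            if var > best_var:
                best_var, best_t = var, t
        return best_t
    ```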

  6. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction.

    PubMed

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure. Using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu's segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. On one side, the accuracy of the algorithm was tested on the three image sets independently. The final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. Finally, we compared the algorithm performance with manual markings done by the six ophthalmologists. The agreement was determined between the ophthalmologists as well as the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images.

  7. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms

    PubMed Central

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

    2013-01-01

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652
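    A textural feature of the kind such classifiers consume can be as simple as a local standard deviation of intensities; this particular window statistic is a hypothetical example, not necessarily one of the paper's features:

    ```python
    import statistics

    def local_std_feature(img, r, c, radius=1):
        """Texture descriptor sketch: population standard deviation of
        intensities in the (2*radius+1)^2 window centred at (r, c),
        clipped at the image border."""
        h, w = len(img), len(img[0])
        vals = [img[y][x]
                for y in range(max(0, r - radius), min(h, r + radius + 1))
                for x in range(max(0, c - radius), min(w, c + radius + 1))]
        return statistics.pstdev(vals)
    ```

    Flat background gives a value near zero while neurite edges give high values, which is what makes such statistics useful inputs for a condition-invariable classifier.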

  8. Optic disc segmentation: level set methods and blood vessels inpainting

    NASA Astrophysics Data System (ADS)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.
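    The vessel-inpainting step can be approximated, for illustration only, by simple diffusion: masked (vessel) pixels are repeatedly replaced by the average of their four neighbours until the hole is filled smoothly. This is a hedged stand-in, not the inpainting technique the paper itself uses.

```python
import numpy as np

def inpaint(image, mask, iters=200):
    """Fill masked pixels by repeated 4-neighbour averaging
    (a diffusion-style stand-in for vessel inpainting)."""
    img = np.asarray(image, dtype=float).copy()
    mask = np.asarray(mask, dtype=bool)
    img[mask] = img[~mask].mean()          # rough initial fill
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[mask] = avg[mask]              # update only the masked pixels
    return img
```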

  9. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, Xin-ran; Wang, Xin

    2017-01-01

    A huge quantity of too-short-arc (TSA) observational data have been obtained in sky surveys of space objects. However, reasonable results for the TSAs can hardly be obtained with the classical methods of initial orbit determination (IOD). In this paper, the IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selections of the optimized variables and the corresponding genetic operators for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.
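    The two-stage hierarchical optimization is specific to orbit determination, but the genetic-algorithm machinery it relies on is generic. The following is a minimal real-coded GA sketch with tournament selection, arithmetic crossover, and Gaussian mutation; all parameter choices here are illustrative assumptions, not the paper's operators.

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=80, mut_rate=0.2, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    for _ in range(generations):
        elite = min(pop, key=f)
        new_pop = [elite]                       # elitism: keep the best individual
        while len(new_pop) < pop_size:
            a = min(rng.sample(pop, 3), key=f)  # tournament selection
            b = min(rng.sample(pop, 3), key=f)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # arithmetic crossover
            child = [v + rng.gauss(0, 0.1 * (hi - lo)) if rng.random() < mut_rate else v
                     for v, (lo, hi) in zip(child, bounds)]      # Gaussian mutation
            new_pop.append(clip(child))
        pop = new_pop
    return min(pop, key=f)
```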

  10. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, X. R.; Wang, X.

    2016-01-01

    The sky surveys of space objects have obtained a huge quantity of too-short-arc (TSA) observation data. However, the classical method of initial orbit determination (IOD) can hardly get reasonable results for the TSAs. The IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selection of optimizing variables as well as the corresponding genetic operator for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.

  11. Localization algorithms for micro-channel x-ray telescope on board SVOM space mission

    NASA Astrophysics Data System (ADS)

    Gosset, L.; Götz, D.; Osborne, J.; Willingale, R.

    2016-07-01

    SVOM is a French-Chinese space mission to be launched in 2021, whose goal is the study of Gamma-Ray Bursts, the most powerful stellar explosions in the Universe. The Micro-channel X-ray Telescope (MXT) is an X-ray focusing telescope on board SVOM, with a field of view of 1 degree (working in the 0.2-10 keV energy band), dedicated to the rapid follow-up of Gamma-Ray Burst counterparts and to their precise localization (smaller than 2 arc minutes). In order to reduce the optics mass and to achieve an angular resolution of a few arc minutes, a "lobster-eye" configuration has been chosen. Using a numerical model of the MXT Point Spread Function (PSF), we simulated MXT observations of point sources in order to develop and test different localization algorithms to be implemented on board MXT. We included preliminary estimates of the instrumental and sky background. The on-board algorithms must combine speed and precision (the brightest sources are expected to be localized with a precision better than 10 arc seconds in the MXT reference frame). We present a comparison of different methods, such as the barycentre and PSF fitting in one or two dimensions. The temporal performance of the algorithms is being tested using the X-ray afterglow database of the XRT telescope on board the NASA Swift satellite.
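    Of the localization methods compared, the barycentre is the simplest: the source position is estimated as the intensity-weighted centroid of the detector image. A minimal sketch (illustrative, not the on-board implementation):

```python
import numpy as np

def barycentre(image):
    """Intensity-weighted centroid (row, col) of a detector image."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total
```

    PSF fitting would instead fit a model of the point spread function around this initial estimate, trading speed for precision.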

  12. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalvati, Farzad, E-mail: farzad.khalvati@uwaterloo.ca; Tizhoosh, Hamid R.; Salmanpour, Aryan

    2013-12-15

    Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in the diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.
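    The Dice similarity coefficient used to score the segmentations is twice the intersection of the two masks divided by the sum of their areas. A minimal implementation:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```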

  14. Detection of bone disease by hybrid SST-watershed x-ray image segmentation

    NASA Astrophysics Data System (ADS)

    Sanei, Saeid; Azron, Mohammad; Heng, Ong Sim

    2001-07-01

    Detection of diagnostic features from X-ray images is attractive because of the low cost of these images. Accurate detection of the bone metastasis region greatly assists physicians in monitoring treatment and in removing the cancerous tissue by surgery. Here, a hybrid SST-watershed algorithm efficiently detects the boundary of the diseased regions. The shortest spanning tree (SST), based on graph theory, is one of the most powerful tools in grey-level image segmentation. The method converts the image into arbitrarily shaped closed segments of distinct grey levels. To do so, the image is initially mapped to a tree; then, using the RSST algorithm, the image is segmented into a certain number of arbitrarily shaped regions. However, in fine segmentation, over-segmentation causes loss of objects of interest, while in coarse segmentation the SST-based method suffers from merging regions that belong to different objects. By applying the watershed algorithm, the large segments are divided into smaller regions based on the number of catchment basins in each segment. The process exploits a bi-level watershed concept to separate each multi-lobe region into a number of areas, each corresponding to an object (in our case, a cancerous region of the bone), disregarding their homogeneity in grey level.
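    The spanning-tree view of segmentation can be illustrated with a Kruskal-style sketch: sort the 4-neighbour edges by grey-level difference and union pixels while the difference stays below a threshold. This is an illustrative simplification, not the paper's RSST procedure.

```python
import numpy as np

def sst_segment(image, max_diff):
    """Kruskal-style region merging: union 4-neighbour pixels whose
    grey-level difference is <= max_diff; returns the segment count."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    parent = list(range(h * w))

    def find(a):                      # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = []
    for i in range(h):
        for j in range(w):
            if i + 1 < h:
                edges.append((abs(img[i, j] - img[i + 1, j]), i * w + j, (i + 1) * w + j))
            if j + 1 < w:
                edges.append((abs(img[i, j] - img[i, j + 1]), i * w + j, i * w + j + 1))
    for d, a, b in sorted(edges):     # process edges in order of similarity
        if d <= max_diff:
            parent[find(a)] = find(b)
    return len({find(k) for k in range(h * w)})
```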

  15. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We present then two numerical applications, to image inpainting and segmentation, of this hybrid algorithm. PMID:22194734
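    Fast marching computes minimal paths in a continuous cost field; on a discrete grid, Dijkstra's algorithm is the standard analogue. A sketch of the minimal-path cost between two pixels (illustrative, not the authors' implementation):

```python
import heapq

def minimal_path_cost(cost, start, goal):
    """Dijkstra on a 4-connected grid: a discrete stand-in for the
    fast-marching minimal-path computation. Path cost includes both endpoints."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return d
        if d > dist.get((i, j), float('inf')):
            continue                         # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float('inf')):
                    dist[(ni, nj)] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return float('inf')
```

    In the coupled algorithm, the cost field would be derived from the topological gradient image, so minimal paths track the detected edges.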

  16. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot-detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
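    A common baseline for the frame-dissimilarity test in the first stage is a histogram difference between consecutive frames; the paper compares 2-D segmentations instead, so the sketch below is a simpler, hedged stand-in (bin count and threshold are illustrative).

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Flag a cut between consecutive frames when their normalized
    grey-level histograms differ by more than `threshold` (half-L1 distance)."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for k in range(1, len(hists)):
        d = 0.5 * np.abs(hists[k] - hists[k - 1]).sum()   # 0 = identical, 1 = disjoint
        if d > threshold:
            cuts.append(k)
    return cuts
```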

  17. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g., two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g., Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated, and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
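    The core of stage (1), fitting a Gaussian mixture to the intensity histogram, can be illustrated with a minimal 1-D EM loop: two components, no MRF constraints or bias correction, and a crude min/max initialization (all simplifications relative to the paper's GEM procedure).

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = (pi / (sigma * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma
```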

  18. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    PubMed

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures.
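    The leader-follower idea can be sketched in a few lines: each vector (here, a time-activity curve) joins the nearest existing leader within a distance threshold, otherwise it founds a new cluster. This is an illustrative simplification; the jClustering implementation may, for instance, also update leaders incrementally.

```python
import numpy as np

def leader_follower(vectors, threshold):
    """Minimal leader-follower clustering: assign each vector to the nearest
    leader within `threshold`, or create a new cluster. Returns labels."""
    leaders, labels = [], []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if leaders:
            d = [np.linalg.norm(v - L) for L in leaders]
            k = int(np.argmin(d))
            if d[k] <= threshold:
                labels.append(k)
                continue
        leaders.append(v)                  # no leader close enough: new cluster
        labels.append(len(leaders) - 1)
    return labels
```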

  19. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron

    2005-04-01

    Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and for the automatic identification and classification of regions exhibiting mosaicism and punctation. The success of such algorithms depends primarily on the selection of relevant features representing the region of interest. We present color- and geometric-feature-based statistical classification and segmentation algorithms that yield excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential of developing into an image-based screening tool for cervical cancer.

  20. Beyond Retinal Layers: A Deep Voting Model for Automated Geographic Atrophy Segmentation in SD-OCT Images

    PubMed Central

    Ji, Zexuan; Chen, Qiang; Niu, Sijie; Leng, Theodore; Rubin, Daniel L.

    2018-01-01

    Purpose To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images by constructing a voting system with deep neural networks without the use of retinal layer segmentation. Methods An automatic GA segmentation method for SD-OCT images based on the deep network was constructed. The structure of the deep network was composed of five layers, including one input layer, three hidden layers, and one output layer. During the training phase, the labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a soft-max classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Results Two image data sets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) 86.94% ± 8.75%, absolute area difference (AAD) 11.49% ± 11.50%, and correlation coefficients (CC) 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm was capable of improving over 5% and 10% segmentation accuracy, respectively, when compared with several state-of-the-art algorithms on two data sets. Conclusions Without retinal layer segmentation, the proposed algorithm could produce higher segmentation accuracy and was more stable when compared with state-of-the-art methods that relied on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD. 
Translational Relevance Based on the deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images without using any retinal layer segmentation results, and may contribute to improved understanding of advanced nonexudative AMD. PMID:29302382
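    The voting decision strategy across the 10 trained models can be illustrated by a pixel-wise majority vote over binary segmentations. This is a sketch of the general idea, not necessarily the paper's exact refinement rule.

```python
import numpy as np

def vote(predictions):
    """Pixel-wise majority vote over binary segmentations from several models.
    A pixel is foreground when at least half of the models say so."""
    stack = np.asarray(predictions)
    return (stack.mean(axis=0) >= 0.5).astype(int)
```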

  2. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    NASA Astrophysics Data System (ADS)

    Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.

    2017-02-01

    Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
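    The template-matching step scores candidate positions by normalized cross-correlation. A brute-force sketch (illustrative only; the actual DPTB search is restricted to a window around the DP-segmented position):

```python
import numpy as np

def match_template(image, template):
    """Position (row, col) and score of the highest normalized cross-correlation
    between `template` and same-sized windows of `image`."""
    img = np.asarray(image, dtype=float)
    tpl = np.asarray(template, dtype=float)
    th, tw = tpl.shape
    t0 = tpl - tpl.mean()
    tn = np.sqrt((t0 ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(img.shape[0] - th + 1):
        for j in range(img.shape[1] - tw + 1):
            win = img[i:i + th, j:j + tw]
            w0 = win - win.mean()
            wn = np.sqrt((w0 ** 2).sum())
            if wn == 0 or tn == 0:
                continue                      # flat window: correlation undefined
            ncc = (w0 * t0).sum() / (wn * tn)
            if ncc > best:
                best, best_pos = ncc, (i, j)
    return best_pos, best
```

    A low best score (or low contrast-to-noise ratio) can then be used to reject uncertain segmentations, as in the paper.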

  3. Improving efficacy of metastatic tumor segmentation to facilitate early prediction of ovarian cancer patients' response to chemotherapy

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2017-02-01

    Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods for segmenting the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growth based methods; 2) Canny operator based methods; and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growth, classical Canny detector, fast marching, and threshold level set algorithms are suggested for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
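    Of the three algorithm groups, region growth is the easiest to sketch: starting from a seed pixel, 4-connected neighbours are accepted while their intensity stays within a tolerance of the seed value. This is illustrative only; the paper's modified region growth is not specified here.

```python
from collections import deque

def region_grow(image, seed, tol):
    """4-connected region growing: accept neighbours whose intensity is
    within `tol` of the seed intensity. Returns the set of (row, col) pixels."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (0 <= ni < h and 0 <= nj < w and (ni, nj) not in region
                    and abs(image[ni][nj] - seed_val) <= tol):
                region.add((ni, nj))
                queue.append((ni, nj))
    return region
```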

  4. Sunda-Banda Arc Transition: Marine Multichannel Seismic Profiling

    NASA Astrophysics Data System (ADS)

    Lueschen, E.; Mueller, C.; Kopp, H.; Djajadihardja, Y.; Ehrhardt, A.; Engels, M.; Lutz, R.; Planert, L.; Shulgin, A.; Working Group, S.

    2008-12-01

    After the Indian Ocean Mw 9.3 earthquake and tsunami on December 26, 2004, intensive research activities focussed on the Sunda Arc subduction system offshore Sumatra. For this area, a broad database is now available and has been interpreted in terms of plate segmentation and outer arc high evolution. In contrast, the highly active easternmost part of this subduction system, as indicated by the Mw 7.7 earthquake and tsunami south of Java on July 17, 2006, remained almost unexplored until recently. During RV SONNE cruise SO190, from October to December 2006, almost 5000 km of marine geophysical profiles were acquired along the eastern Sunda Arc and the transition to the Banda Arc. The SINDBAD project (Seismic and Geoacoustic Investigations along the Sunda-Banda Arc Transition) comprises 30-fold multichannel reflection seismics with a 3-km streamer, wide-angle OBH/OBS refraction seismics for deep velocity control (see poster of Shulgin et al. in this session), swath bathymetry, sediment echosounder, gravimetric and geomagnetic measurements. We present data and interpretations of several 250-380 km long, prestack depth-migrated seismic sections, perpendicular to the deformation front, based on velocity models from focussing analysis and inversion of OBH/OBS refraction data. We focus on the variability of the lower plate and the tectonic response of the overriding plate in terms of outer arc high formation and evolution, forearc basin development, and accretion and erosion processes at the base of the overriding plate. The subducting Indo-Australian Plate is characterized by three segments: i) the Roo Rise, with rough topography, offshore eastern Java; ii) the Argo Abyssal Plain, with smooth oceanic crust, offshore Bali, Lombok, and Sumbawa; and iii) the Scott Plateau, with continental crust colliding with the Banda island arc.
The forearc responds to differences in the incoming oceanic plate with the absence of a pronounced forearc basin offshore eastern Java and with development of the 4000 m deep forearc Lombok Basin offshore Bali, Lombok, and Sumbawa. The eastern termination of the Lombok Basin is formed by Sumba Island, which shows evidence for recent uplift, probably associated with the collision of the island arc with the continental Scott Plateau. The Sumba area represents the transition from subduction to collision. Our seismic profiles image the bending of the oceanic crust seaward of the trench and associated normal faulting. Landward of the trench, they image the subducting slab beneath the outer arc high, where the former bending-related normal faults appear to be reactivated as reverse faults introducing vertical displacements in the subducting slab. The accretionary prism and the outer arc high are characterized by an ocean-verging system of imbricate thrust sheets with major thrust faults connecting seafloor and detachment. Compression results in shortening and steepening of the imbricated thrust sheets building up the outer arc high. Tilted piggy-back basins and downlaps of tilted sediments in the southern Lombok forearc basin indicate ongoing uplift of the entire outer arc high, abrupt displacements, and recent tectonic activity.

  5. Caribbean basin framework, 3: Southern Central America and Colombian basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolarsky, R.A.; Mann, P.

    1991-03-01

    The authors recognize three basin-forming periods in southern Central America (Panama, Costa Rica, southern Nicaragua) that they attempt to correlate with events in the Colombian basin (Bowland, 1984): (1) Early-Late Cretaceous island arc formation and growth of the Central American island arc and Late Cretaceous formation of the Colombian basin oceanic plateau. During latest Cretaceous time, pelagic carbonate sediments blanketed the Central American island arc in Panama and Costa Rica and elevated blocks on the Colombian basin oceanic plateau; (2) middle Eocene-middle Miocene island arc uplift and erosion. During this interval, influx of distal terrigenous turbidites in most areas of Panama, Costa Rica, and the Colombian basin marks the uplift and erosion of the Central American island arc. In the Colombian basin, turbidites fill in basement relief and accumulate to thicknesses up to 2 km in the deepest part of the basin. In Costa Rica, sedimentation was concentrated in fore-arc (Terraba) and back-arc (El Limon) basins; (3) late Miocene-Recent accelerated uplift and erosion of segments of the Central American arc. Influx of proximal terrigenous turbidites and alluvial fans in most areas of Panama, Costa Rica, and the Colombian basin marks collision of the Panama arc with the South American continent (late Miocene-early Pliocene) and collision of the Cocos Ridge with the Costa Rican arc (late Pleistocene). The Cocos Ridge collision inverted the Terraba and El Limon basins. The Panama arc collision produced northeast-striking left-lateral strike-slip faults and fault-related basins throughout Panama as Panama moved northwest over the Colombian basin.

  6. Automated segmentation of white matter fiber bundles using diffusion tensor imaging data and a new density based clustering algorithm.

    PubMed

    Kamali, Tahereh; Stashuk, Daniel

    2016-10-01

Robust and accurate segmentation of brain white matter (WM) fiber bundles assists in diagnosing and assessing progression or remission of neuropsychiatric diseases such as schizophrenia, autism and depression. Supervised segmentation methods are infeasible in most applications since generating gold standards is too costly. Hence, there is a growing interest in designing unsupervised methods. However, most conventional unsupervised methods require the number of clusters to be known in advance, which is not possible in most applications. The purpose of this study is to design an unsupervised segmentation algorithm for brain white matter fiber bundles which can automatically segment fiber bundles using intrinsic diffusion tensor imaging data information without any prior information or assumption about the data distribution. Here, a new density-based clustering algorithm called neighborhood distance entropy consistency (NDEC) is proposed, which discovers natural clusters within data by simultaneously utilizing both local and global density information. The performance of NDEC is compared with other state-of-the-art clustering algorithms including chameleon, spectral clustering, DBSCAN and k-means using Johns Hopkins University publicly available diffusion tensor imaging data. The performance of NDEC and the other clustering algorithms was evaluated using the dice ratio as an external evaluation criterion and the density based clustering validation (DBCV) index as an internal evaluation metric. Across all employed clustering algorithms, NDEC obtained the highest average dice ratio (0.94) and DBCV value (0.71). NDEC can find clusters with arbitrary shapes and densities and consequently can be used for WM fiber bundle segmentation where there is no distinct boundary between various bundles. NDEC may also be used as an effective tool in other pattern recognition and medical diagnostic systems in which discovering natural clusters within data is a necessity.
Copyright © 2016 Elsevier B.V. All rights reserved.
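The density-based idea this abstract relies on can be illustrated with a minimal DBSCAN-style sketch in plain Python. This is not NDEC itself, only the family of methods it belongs to: clusters of arbitrary shape emerge from local density alone, without fixing the cluster count in advance. Function names, parameters and the flat 2-D points are illustrative assumptions.

```python
# Minimal density-based clustering (DBSCAN-style) -- a sketch of the
# general approach, NOT the NDEC algorithm from the abstract.

def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i]."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

def dbscan(points, eps, min_pts):
    """Return one cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = -1               # noise (may later join a cluster)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:                     # expand the cluster outward
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point: absorbed, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:
                seeds.extend(j_neighbors)
    return labels
```

Two well-separated blobs yield two labels without the number of clusters ever being specified, which is the property the abstract emphasizes.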

  7. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because it requires no image segmentation and little computation, image correlation matching is a basic method of target tracking. This paper mainly studies a grey-scale image matching algorithm whose precision reaches the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which suits real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often demands high real-time performance; based on this consideration, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To investigate the influence of target rotation on matching precision, a camera rotation experiment was carried out. The camera's CMOS detector was fixed to an arc pendulum table, and pictures were taken at different rotation angles. A subarea of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm described above. The results show that the matching error grows approximately linearly with the target rotation angle. Finally, the influence of noise on matching precision was studied: Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by mean and median filters respectively, and image matching was then performed. When the noise level is low, mean and median filtering achieve good results; but when the salt-and-pepper noise density exceeds 0.4, or the Gaussian noise variance exceeds 0.0015, image matching fails.
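The SAD matching with parabolic sub-pixel refinement described above can be sketched as follows. This is a 1-D illustration under assumed inputs, not the paper's implementation; the "paraboloidal" fit is the 2-D analogue of the parabola used here.

```python
# Sketch: SAD template matching, refined to sub-pixel precision by fitting
# a parabola through the cost values around the best integer offset.

def sad(signal, template, offset):
    """Sum of absolute differences at an integer offset."""
    return sum(abs(signal[offset + k] - t) for k, t in enumerate(template))

def match_subpixel(signal, template):
    """Best integer offset, refined by the vertex of the parabola through
    the SAD costs at (best-1, best, best+1)."""
    costs = [sad(signal, template, o)
             for o in range(len(signal) - len(template) + 1)]
    best = min(range(len(costs)), key=costs.__getitem__)
    if 0 < best < len(costs) - 1:
        c_l, c_0, c_r = costs[best - 1], costs[best], costs[best + 1]
        denom = c_l - 2 * c_0 + c_r
        if denom != 0:
            best += 0.5 * (c_l - c_r) / denom   # parabola vertex shift
    return best
```

The cheap integer SAD search does the heavy lifting; the three-point fit adds the fractional part at negligible cost, which is why this style of refinement suits real-time systems.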

  8. SU-E-T-268: Differences in Treatment Plan Quality and Delivery Between Two Commercial Treatment Planning Systems for Volumetric Arc-Based Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S; Zhang, H; Zhang, B

    2015-06-15

Purpose: To clinically evaluate the differences in volumetric modulated arc therapy (VMAT) treatment plan and delivery between two commercial treatment planning systems. Methods: Two commercial VMAT treatment planning systems with different VMAT optimization algorithms and delivery approaches were evaluated. This study included 16 clinical VMAT plans performed with the first system: 2 spine, 4 head and neck (HN), 2 brain, 4 pancreas, and 4 pelvis plans. These 16 plans were then re-optimized with the same number of arcs using the second treatment planning system. Planning goals were invariant between the two systems. Gantry speed, dose rate modulation, MLC modulation, plan quality, number of monitor units (MUs), VMAT quality assurance (QA) results, and treatment delivery time were compared between the 2 systems. VMAT QA was performed using Mapcheck2 and analyzed with gamma analysis (3mm/3% and 2mm/2%). Results: Similar plan quality was achieved with each VMAT optimization algorithm, and the difference in delivery time was minimal. Algorithm 1 achieved planning goals by highly modulating the MLC (total distance traveled by leaves (TL) = 193 cm average over control points per plan), while maintaining a relatively constant dose rate (dose-rate change <100 MU/min). Algorithm 2 involved less MLC modulation (TL = 143 cm per plan), but greater dose-rate modulation (range = 0-600 MU/min). The average number of MUs was 20% less for algorithm 2 (ratio of MUs for algorithms 2 and 1 ranged from 0.5-1). VMAT QA results were similar for all disease sites except HN plans. For HN plans, the average gamma passing rates were 88.5% (2mm/2%) and 96.9% (3mm/3%) for algorithm 1 and 97.9% (2mm/2%) and 99.6% (3mm/3%) for algorithm 2. Conclusion: Both VMAT optimization algorithms achieved comparable plan quality; however, fewer MUs were needed and QA results were more robust for Algorithm 2, which more highly modulated dose rate.

  9. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it's necessary to eliminate the influence of lane lines on the recognition result in road lane images. Aiming at the problems caused by lane lines, an image segmentation algorithm based on adaptive threshold and connected domain is proposed. First, by analyzing features like grey level distribution and the illumination of the images, the algorithm uses Hough transform to divide the images into different sections and convert them into binary images separately. It then uses the connected domain theory to amend the outcome of segmentation, remove noises and fill the interior zone of lane lines. Experiments have proved that this method could eliminate the influence of illumination and lane line abrasion, removing noises thoroughly while maintaining high segmentation precision.
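The connected-domain clean-up step the abstract describes (amending the segmentation and removing noise) is commonly implemented as connected-component labeling with a size filter. A plain-Python sketch follows; the 4-connectivity choice and all names are assumptions for illustration, not details from the paper.

```python
from collections import deque

def remove_small_components(binary, min_size):
    """Zero out 4-connected foreground components smaller than min_size.
    `binary` is a list of lists of 0/1; returns a cleaned copy."""
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if out[y][x] == 0 or seen[y][x]:
                continue
            # Flood-fill one component, collecting its pixels.
            comp, q = [], deque([(y, x)])
            seen[y][x] = True
            while q:
                cy, cx = q.popleft()
                comp.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and out[ny][nx] == 1 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if len(comp) < min_size:        # treat tiny components as noise
                for cy, cx in comp:
                    out[cy][cx] = 0
    return out
```

Filling the interior of lane lines, also mentioned in the abstract, would be the dual operation applied to background components enclosed by foreground.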

  10. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.
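The abstract does not specify the exact form of ASMA's 2x2 sliding-window gradient; a Roberts-cross style operator is one plausible reading, sketched here purely for illustration.

```python
# Sketch: gradient magnitude over a 2x2 sliding window (Roberts-cross
# style) -- one plausible form of the boundary-matching gradient, assumed
# rather than taken from the report.

def gradient_2x2(img):
    """Return an (h-1) x (w-1) grid of gradient magnitudes."""
    h, w = len(img), len(img[0])
    g = [[0.0] * (w - 1) for _ in range(h - 1)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = img[y][x + 1] - img[y + 1][x]   # one diagonal difference
            dy = img[y][x] - img[y + 1][x + 1]   # other diagonal difference
            g[y][x] = (dx * dx + dy * dy) ** 0.5
    return g
```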

  11. Computation of Quasi-Periodic Normally Hyperbolic Invariant Tori: Algorithms, Numerical Explorations and Mechanisms of Breakdown

    NASA Astrophysics Data System (ADS)

    Canadell, Marta; Haro, Àlex

    2017-12-01

We present several algorithms for computing normally hyperbolic invariant tori carrying quasi-periodic motion of a fixed frequency in families of dynamical systems. The algorithms are based on a KAM scheme presented in Canadell and Haro (J Nonlinear Sci, 2016. doi: 10.1007/s00332-017-9389-y) to find the parameterization of the torus with prescribed dynamics by detuning parameters of the model. The algorithms use different hyperbolicity and reducibility properties and, in particular, also compute the invariant bundles and Floquet transformations. We implement these methods in several 2-parameter families of dynamical systems to compute quasi-periodic arcs, that is, the parameters for which 1D normally hyperbolic invariant tori with a given fixed frequency exist. The implementation lets us perform the continuations up to the tip of the quasi-periodic arcs, where the invariant curves break down. Three different mechanisms of breakdown are analyzed, using several observables, leading to several conjectures.

  12. CT reconstruction from portal images acquired during volumetric-modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Thomas, M. D. R.; Evans, P. M.; Webb, S.

    2010-10-01

    Volumetric-modulated arc therapy (VMAT), a form of intensity-modulated arc therapy (IMAT), has become a topic of research and clinical activity in recent years. As a form of arc therapy, portal images acquired during the treatment fraction form a (partial) Radon transform of the patient. We show that these portal images, when used in a modified global cone-beam filtered backprojection (FBP) algorithm, allow a surprisingly recognizable CT-volume to be reconstructed. The possibility of distinguishing anatomy in such VMAT-CT reconstructions suggests that this could prove to be a valuable treatment position-verification tool. Further, some potential for local-tomography techniques to improve image quality is shown.

  13. Concentric tube support assembly

    DOEpatents

    Rubio, Mark F.; Glessner, John C.

    2012-09-04

    An assembly (45) includes a plurality of separate pie-shaped segments (72) forming a disk (70) around a central region (48) for retaining a plurality of tubes (46) in a concentrically spaced apart configuration. Each segment includes a support member (94) radially extending along an upstream face (96) of the segment and a plurality of annularly curved support arms (98) transversely attached to the support member and radially spaced apart from one another away from the central region for receiving respective upstream end portions of the tubes in arc-shaped spaces (100) between the arms. Each segment also includes a radial passageway (102) formed in the support member for receiving a fluid segment portion (106) and a plurality of annular passageways (104) formed in the support arms for receiving respective arm portions (108) of the fluid segment portion from the radial passageway and for conducting the respective arm portions into corresponding annular spaces (47) formed between the tubes retained by the disk.

  14. TU-CD-304-01: FEATURED PRESENTATION and BEST IN PHYSICS (THERAPY): Trajectory Modulated Arc Therapy: Development of Novel Arc Delivery Techniques Integrating Dynamic Table Motion for Extended Volume Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, E; Hoppe, R; Million, L

    2015-06-15

Purpose: Integration of coordinated robotic table motion with inversely-planned arc delivery has the potential to resolve table-top delivery limitations of large-field treatments such as Total Body Irradiation (TBI), Total Lymphoid Irradiation (TLI), and Cranial-Spinal Irradiation (CSI). We formulate the foundation for Trajectory Modulated Arc Therapy (TMAT), and using Varian Developer Mode capabilities, experimentally investigate its practical implementation for such techniques. Methods: A MATLAB algorithm was developed for inverse planning optimization of the table motion, MLC positions, and gantry motion under extended-SSD geometry. To maximize the effective field size, delivery trajectories for TMAT TBI were formed with the table rotated at 270° IEC and dropped vertically to 152.5cm SSD. Preliminary testing of algorithm parameters was done through retrospective planning analysis. Robotic delivery was programmed using custom XML scripting on the TrueBeam Developer Mode platform. Final dose was calculated using the Eclipse AAA algorithm. Initial verification of delivery accuracy was measured using OSLDs on a solid water phantom of varying thickness. Results: A comparison of DVH curves demonstrated that dynamic couch motion irradiation was sufficiently approximated by static control points spaced in intervals of less than 2cm. Optimized MLC motion decreased the average lung dose to 68.5% of the prescription dose. The programmed irradiation integrating coordinated table motion was deliverable on a TrueBeam STx linac in 6.7 min. With the couch translating under an open 10cmx20cm field angled at 10°, OSLD measurements along the midline of a solid water phantom at depths of 3, 5, and 9cm were within 3% of the TPS AAA algorithm with an average deviation of 1.2%. Conclusion: A treatment planning and delivery system for Trajectory Modulated Arc Therapy of extended volumes has been established and experimentally demonstrated for TBI. Extension to other treatment techniques such as TLI and CSI is readily achievable through the developed platform. Grant Funding by Varian Medical Systems.

  15. Miocene magmatism in the Bodie Hills volcanic field, California and Nevada: A long-lived eruptive center in the southern segment of the ancestral Cascades arc

    USGS Publications Warehouse

    John, David A.; du Bray, Edward A.; Blakely, Richard J.; Fleck, Robert J.; Vikre, Peter; Box, Stephen E.; Moring, Barry C.

    2012-01-01

The Middle to Late Miocene Bodie Hills volcanic field is a >700 km2, long-lived (∼9 Ma) but episodic eruptive center in the southern segment of the ancestral Cascades arc north of Mono Lake (California, U.S.). It consists of ∼20 major eruptive units, including 4 trachyandesite stratovolcanoes emplaced along the margins of the field, and numerous, more centrally located silicic trachyandesite to rhyolite flow dome complexes. Bodie Hills volcanism was episodic with two peak periods of eruptive activity: an early period ca. 14.7–12.9 Ma that mostly formed trachyandesite stratovolcanoes and a later period between ca. 9.2 and 8.0 Ma dominated by large trachyandesite-dacite dome fields. A final period of small silicic dome emplacement occurred ca. 6 Ma. Aeromagnetic and gravity data suggest that many of the Miocene volcanoes have shallow plutonic roots that extend to depths ≥1–2 km below the surface, and much of the Bodie Hills may be underlain by low-density plutons presumably related to Miocene volcanism. Compositions of Bodie Hills volcanic rocks vary from ∼50 to 78 wt% SiO2, although rocks with <55 wt% SiO2 are rare. They form a high-K calc-alkaline series with pronounced negative Ti-P-Nb-Ta anomalies and high Ba/Nb, Ba/Ta, and La/Nb typical of subduction-related continental margin arcs. Most Bodie Hills rocks are porphyritic, commonly containing 15–35 vol% phenocrysts of plagioclase, pyroxene, and hornblende ± biotite. The oldest eruptive units have the most mafic compositions, but volcanic rocks oscillated between mafic and intermediate to felsic compositions through time. Following a 2 Ma hiatus in volcanism, postsubduction rocks of the ca. 3.6–0.1 Ma, bimodal, high-K Aurora volcanic field erupted unconformably onto rocks of the Miocene Bodie Hills volcanic field. At the latitude of the Bodie Hills, subduction of the Farallon plate is inferred to have ended ca. 10 Ma, evolving to a transform plate margin.
However, volcanism in the region continued until 8 Ma without an apparent change in rock composition or style of eruption. Equidimensional, polygenetic volcanoes and the absence of dike swarms suggest a low differential horizontal stress regime throughout the lifespan of the Bodie Hills volcanic field. However, kinematic data for veins and faults in mining districts suggest a change in the stress field from transtensional to extensional approximately coincident with the inferred cessation of subduction. Numerous hydrothermal systems were operative in the Bodie Hills during the Miocene. Several large systems caused alteration of volcaniclastic rocks in areas as large as 30 km2, but these altered rocks are mostly devoid of economic mineral concentrations. More structurally focused hydrothermal systems formed large epithermal Au-Ag vein deposits in the Bodie and Aurora mining districts. Economically important hydrothermal systems are temporally related to intermediate to silicic composition domes. Rock types, major and trace element compositions, petrographic characteristics, and volcanic features of the Bodie Hills volcanic field are similar to those of other large Miocene volcanic fields in the southern segment of the ancestral Cascade arc. Relative to other parts of the ancestral arc, especially north of Lake Tahoe in northeastern California, the scarcity of mafic rocks, relatively K-rich calc-alkaline compositions, and abundance of composite dome fields in the Bodie Hills may reflect thicker crust beneath the southern ancestral arc segment. Thicker crust may have inhibited direct ascent and eruption of mafic, mantle-derived magma, instead stalling its ascent in the lower or middle crust, thereby promoting differentiation to silicic compositions and development of porphyritic textures characteristic of the southern ancestral arc segment.

  16. Figure-ground segmentation based on class-independent shape priors

    NASA Astrophysics Data System (ADS)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.

  17. Fully convolutional network with cluster for semantic segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin

    2018-04-01

At present, image semantic segmentation has been an active research topic in the fields of computer vision and artificial intelligence. In particular, extensive research on deep neural networks for image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method that combines a fully convolutional network with the k-means clustering algorithm. The clustering step uses the image's low-level features and initializes the cluster centers from a super-pixel segmentation; in each cluster region, the set of points with low reliability, which are likely to be misclassified, is corrected using the set of points with high reliability. This method refines the segmentation of the target contour and improves the accuracy of image segmentation.
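The k-means step named in the abstract can be sketched as follows. For brevity this works on scalar features with random seeding, so it illustrates plain k-means only; the paper's super-pixel initialization and reliability correction are not reproduced.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain k-means on scalar features (e.g. per-pixel intensities):
    alternate assignment to the nearest center and center recomputation."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            buckets[i].append(v)
        # Empty buckets keep their previous center.
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)
```

On two well-separated intensity groups the centers converge to the group means, which is the behavior the refinement step builds on.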

  18. WE-E-213CD-08: A Novel Level Set Active Contour Algorithm Using the Jensen-Renyi Divergence for Tumor Segmentation in PET.

    PubMed

    Markel, D; Naqa, I El

    2012-06-01

Positron emission tomography (PET) presents a valuable resource for delineating the biological tumor volume (BTV) for image-guided radiotherapy. However, accurate and consistent image segmentation is a significant challenge within the context of PET, owing to its low spatial resolution and high levels of noise. Active contour methods based on level sets can be sensitive to noise and susceptible to failure in low-contrast regions. Therefore, this work evaluates a novel active contour algorithm applied to the task of PET tumor segmentation. A novel active contour segmentation algorithm based on maximizing the Jensen-Renyi divergence between regions of interest was applied to the task of segmenting lesions in 7 patients with T3-T4 pharyngolaryngeal squamous cell carcinoma. The algorithm was implemented on an NVidia GeForce GTX 560M GPU. The cases were taken from the Louvain database, which includes contours of the macroscopically defined BTV drawn using histology of resected tissue. The images were pre-processed using denoising/deconvolution. The segmented volumes agreed well with the macroscopic contours, with an average concordance index and classification error of 0.6 ± 0.09 and 55 ± 16.5%, respectively. The algorithm in its present implementation requires approximately 0.5-1.3 sec per iteration and can reach convergence within 10-30 iterations. In terms of concordance, the Jensen-Renyi active contour method was shown to match or outperform a variety of PET segmentation methods that have previously been evaluated on the same data. Further evaluation on a larger dataset, along with performance optimization, is necessary before clinical deployment. © 2012 American Association of Physicists in Medicine.
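The Jensen-Renyi divergence the contour maximizes can be computed for two region histograms as the Renyi entropy of their mixture minus the weighted entropies of the parts. A small sketch follows; the alpha value and equal mixture weights are illustrative choices, not values from the paper.

```python
import math

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy H_a(p) = log(sum_i p_i^a) / (1 - a) of a histogram p."""
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)

def jensen_renyi(p, q, alpha=2.0, w=0.5):
    """Jensen-Renyi divergence: entropy of the mixture minus the weighted
    entropies of the parts. Zero when p == q; larger when the two region
    histograms differ, which is what the active contour seeks to maximize."""
    mix = [w * a + (1 - w) * b for a, b in zip(p, q)]
    return renyi_entropy(mix, alpha) - (
        w * renyi_entropy(p, alpha) + (1 - w) * renyi_entropy(q, alpha))
```

Identical inside/outside histograms give zero divergence; disjoint ones give the maximum, so moving the contour to increase this quantity separates the regions.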

  19. Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm

    NASA Astrophysics Data System (ADS)

    Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.

    2018-05-01

    A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
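The per-region thresholding idea, one binarization level per spray region derived from a local luminosity profile, can be sketched on a 1-D profile. The equal-size region split and the min/max midpoint rule below are illustrative assumptions, not the paper's actual threshold calculation.

```python
def threshold_per_region(profile, n_regions=3):
    """Binarize a luminosity profile by splitting it into regions and
    thresholding each region with its own level (midpoint of the local
    minimum and maximum). Sketch of the per-region idea only."""
    n = len(profile)
    out = []
    for r in range(n_regions):
        seg = profile[r * n // n_regions:(r + 1) * n // n_regions]
        t = (min(seg) + max(seg)) / 2           # local threshold level
        out.extend(1 if v > t else 0 for v in seg)
    return out
```

A single global threshold tuned to the brightest region would miss the dimmer sprays; local levels keep each one segmented, which is the robustness to uneven illumination the abstract claims.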

  20. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

    PubMed Central

    Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

    2012-01-01

A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed; this facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI was above 93% using 20 different start seeds, showing stability. The average SI between 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
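The abstract does not define its similarity index; the Dice coefficient is one common choice for comparing binary segmentations, sketched here as an assumption rather than the paper's exact metric.

```python
def dice(a, b):
    """Dice similarity between two binary masks (flat lists of 0/1):
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    inter = sum(x & y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2 * inter / size if size else 1.0
```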

  1. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies remote sensing image segmentation on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model on the Hadoop cloud platform is designed: the image input and output are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is performed on a remote sensing image, and the same experiment is repeated with a MATLAB implementation of the Mean Shift algorithm for comparison. The experimental results show that, while maintaining good segmentation quality, the segmentation rate based on the Hadoop cloud platform is greatly improved compared with single-machine MATLAB segmentation, and the effectiveness of image segmentation is also much improved.
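The Mean Shift step can be illustrated in 1-D with a flat kernel. This is a toy stand-in for the OpenCV implementation the paper uses; the bandwidth, iteration count and data are made up.

```python
def mean_shift_1d(points, bandwidth, iters=30):
    """1-D mean shift with a flat kernel: each point repeatedly moves to
    the mean of the data points within `bandwidth` of it, so points
    belonging to the same mode converge to the same location."""
    modes = list(points)
    for _ in range(iters):
        new = []
        for m in modes:
            nb = [p for p in points if abs(p - m) <= bandwidth]
            new.append(sum(nb) / len(nb))   # nb is never empty in 1-D
        modes = new
    return modes
```

Segmentation then amounts to grouping points whose converged modes coincide; in images the same idea runs jointly over spatial and color coordinates.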

  2. An evaluation of the signature extension approach to large area crop inventories utilizing space image data. [Kansas and North Dakota

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Cicone, R. C.; Stinson, J. L.; Balon, R. J.

    1977-01-01

The author has identified the following significant results. Two haze correction algorithms were tested: CROP-A and XSTAR. CROP-A was tested in a unitemporal mode on data collected in 1973-74 over ten sample segments in Kansas. Because of the uniformly low level of haze present in these segments, no conclusion could be reached about CROP-A's ability to compensate for haze. It was noted, however, that in some cases CROP-A made serious errors which actually degraded classification performance. The haze correction algorithm XSTAR was tested in a multitemporal mode on 1975-76 LACIE sample segment data over 23 blind sites in Kansas and 18 sample segments in North Dakota, providing a wide range of haze levels and other conditions for algorithm evaluation. It was found that this algorithm substantially improved signature extension classification accuracy when a sum-of-likelihoods classifier was used with an alien rejection threshold.

  3. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Two segmentation algorithms for Computed Tomography (CT) images were implemented: a region growing procedure for soft tissues and a thresholding algorithm for bones. The algorithms operate semiautomatically, since they only need a seed selected with the mouse by the user on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method, based on histogram thresholding, to CT images for surgical planning, which simplifies subsequent segmentation. These algorithms substantially improved the virtual reality tool VirSSPA. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. The tool can decrease operating room time, resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes at reduced costs.
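The seed-based region growing the abstract describes can be sketched in a few lines. The 4-connectivity and the fixed intensity tolerance relative to the seed are illustrative assumptions, not details from VirSSPA.

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-neighbors whose intensity lies
    within `tol` of the seed value -- the kind of semi-automatic,
    mouse-seeded segmentation the abstract describes."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]
    mask = [[0] * w for _ in range(h)]
    mask[sy][sx] = 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx] \
                    and abs(img[ny][nx] - ref) <= tol:
                mask[ny][nx] = 1
                q.append((ny, nx))
    return mask
```

A single click supplies `seed`; the tolerance plays the role of the tissue-specific criterion, and the thresholding path for bone would replace the per-seed test with a global intensity range.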

  4. Letter to the Editor on 'Single-Arc IMRT?'.

    PubMed

    Otto, Karl

    2009-04-21

    In the note 'Single Arc IMRT?' (Bortfeld and Webb 2009 Phys. Med. Biol. 54 N9-20), Bortfeld and Webb present a theoretical investigation of static gantry IMRT (S-IMRT), single-arc IMRT and tomotherapy. Based on their assumptions they conclude that single-arc IMRT is inherently limited in treating complex cases without compromising delivery efficiency. Here we present an expansion of their work based on the capabilities of the Varian RapidArc single-arc IMRT system. Using the same theoretical framework we derive clinically deliverable single-arc IMRT plans based on these specific capabilities. In particular, we consider the range of leaf motion, the ability to rapidly and continuously vary the dose rate and the choice of collimator angle used for delivery. In contrast to the results of Bortfeld and Webb, our results show that single-arc IMRT plans can be generated that closely match the theoretical optimum. The disparity in the results of each investigation emphasizes that the capabilities of the delivery system, along with the ability of the optimization algorithm to exploit those capabilities, are of particular importance in single-arc IMRT. We conclude that, given the capabilities available with the RapidArc system, single-arc IMRT can produce complex treatment plans that are delivered efficiently (in approximately 2 min).

  5. Blood vessel segmentation algorithms - Review of methods, datasets and evaluation metrics.

    PubMed

    Moccia, Sara; De Momi, Elena; El Hadji, Sara; Mattos, Leonardo S

    2018-05-01

    Blood vessel segmentation is a topic of high interest in medical image analysis since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice and an appropriate choice of the segmentation algorithm is mandatory to deal with the adopted imaging technique characteristics (e.g. resolution, noise and vessel contrast). This paper aims at reviewing the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we deeply investigated the most novel blood vessel segmentation including machine learning, deformable model, and tracking-based approaches. This paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables are presented reporting imaging technique used, anatomical region and performance measures employed. Benefits and disadvantages of each method are highlighted. Despite the constant progress and efforts addressed in the field, several issues still need to be overcome. A relevant limitation consists in the segmentation of pathological vessels. Unfortunately, not consistent research effort has been addressed to this issue yet. Research is needed since some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which on the other hand require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent an important obstacle for the achievement of a high-quality enhancement. 
This is particularly true for optical imaging, where the image quality is usually lower in terms of noise and contrast with respect to magnetic resonance and computed tomography angiography. No single segmentation approach is suitable for all the different anatomical regions or imaging modalities; thus, the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms so that the most suitable methods can be chosen according to the specific task. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. The SRS (Segmented Rail Surface) railgun: A new approach to restrike control.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker J.V.

    1988-01-01

    A Segmented Rail Surface (SRS) structure is described that eliminates restrike arcs by progressively disconnecting segments of the rail surface after the plasma armature has passed. This technique has been demonstrated using the Los Alamos MIDI-2 railgun. Restrike was eliminated in a plasma armature acceleration experiment using metal-foil fuses as opening switches. A plasma velocity increase from 11 to 16 km/s was demonstrated using the SRS technique to eliminate the viscous drag losses associated with the restrike plasma. This technique appears to be a practical option for a laboratory launcher at present and for future multi-shot launchers if appropriate switches can be developed. 5 refs., 8 figs.

  7. Rocket nozzle thermal shock tests in an arc heater facility

    NASA Technical Reports Server (NTRS)

    Painter, James H.; Williamson, Ronald A.

    1986-01-01

    A rocket motor nozzle thermal structural test technique that utilizes arc heated nitrogen to simulate a motor burn was developed. The technique was used to test four heavily instrumented full-scale Star 48 rocket motor 2D carbon/carbon segments at conditions simulating the predicted thermal-structural environment. All four nozzles survived the tests without catastrophic or other structural failures. The test technique demonstrated promise as a low cost, controllable alternative to rocket motor firing. The technique includes the capability of rapid termination in the event of failure, allowing post-test analysis.

  8. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    PubMed

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image will be segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result will be obtained by fusing segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  9. Segmentation and Quantitative Analysis of Apoptosis of Chinese Hamster Ovary Cells from Fluorescence Microscopy Images.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2017-06-01

    Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, is used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and the count of the number of cells for a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method is applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
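The final classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values and class separations are synthetic stand-ins for the three morphological features named in the abstract (mean intensity in the cellular region, variance of pixel intensities near the boundary, and boundary length).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical feature triples per segmented cell:
# (mean intensity, boundary-intensity variance, boundary length)
normal = np.column_stack([rng.normal(120, 10, 100),
                          rng.normal(5, 1, 100),
                          rng.normal(80, 8, 100)])
apoptotic = np.column_stack([rng.normal(200, 10, 100),
                             rng.normal(20, 2, 100),
                             rng.normal(50, 8, 100)])

X = np.vstack([normal, apoptotic])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal, 1 = apoptotic

# Scale the features before the RBF SVM, as their ranges differ widely
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```

Scaling matters here because boundary length and intensity variance live on very different numeric ranges; without it the RBF kernel would be dominated by the largest feature.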

  10. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    PubMed

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted in a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved by using a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
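The fusion of warped atlas segmentations into a probability map can be sketched as below. Note this is a simplified averaging stand-in, not the STAPLE algorithm itself (STAPLE additionally estimates per-rater sensitivity and specificity via expectation-maximization); the toy 2x2 masks are invented for illustration.

```python
import numpy as np

def probabilistic_fusion(label_maps, threshold=0.5):
    """Average binary segmentations warped from each atlas into a
    probability map, then threshold it for the final organ mask."""
    prob = np.mean(np.stack(label_maps, axis=0), axis=0)
    return prob, prob >= threshold

# Three hypothetical atlas segmentations of the same 2x2 region
atlas_masks = [np.array([[1, 0], [1, 1]]),
               np.array([[1, 0], [0, 1]]),
               np.array([[1, 1], [0, 1]])]
prob, mask = probabilistic_fusion(atlas_masks)
```

In the full pipeline, the thresholded map would then be refined per structure, e.g. by a level set-based local search around the fused boundary.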

  11. Individual Rocks Segmentation in Terrestrial Laser Scanning Point Cloud Using Iterative Dbscan Algorithm

    NASA Astrophysics Data System (ADS)

    Walicka, A.; Jóźków, G.; Borkowski, A.

    2018-05-01

    Fluvial transport is an important aspect of hydrological and geomorphologic studies. Knowledge of the movement parameters of different-size fractions is essential in many applications, such as the exploration of watercourse changes, the calculation of river bed parameters or the investigation of the frequency and nature of weather events. Traditional techniques used for fluvial transport investigations do not provide any information about the long-term horizontal movement of the rocks. This information can be gained by means of terrestrial laser scanning (TLS). However, this is a complex issue consisting of several stages of data processing. In this study, a methodology for individual rock segmentation from a TLS point cloud is proposed, which is the first step of a semi-automatic algorithm for the movement detection of individual rocks. The proposed algorithm is executed in two steps. Firstly, the point cloud is classified into rocks or background using only geometrical information. Secondly, the DBSCAN algorithm is executed iteratively on points classified as rocks until only one stone is detected in each segment. The number of rocks in each segment is determined using principal component analysis (PCA) and a simple derivative method for peak detection. As a result, several segments that correspond to individual rocks are formed. Numerical tests were executed on two test samples. The results of the semi-automatic segmentation were compared to results acquired by manual segmentation. The proposed methodology successfully segmented 76% and 72% of the rocks in test sample 1 and test sample 2, respectively.
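The iterative-DBSCAN idea described above can be sketched as follows. This is a hedged toy version: the `n_rocks_estimate` helper is a hypothetical stand-in for the paper's PCA plus derivative peak detection (here it simply compares the cluster's extent along the first principal axis against an assumed single-rock size), and the 2-D point clusters are synthetic.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def n_rocks_estimate(cluster, rock_size=2.0):
    # Toy stand-in for PCA + derivative peak detection: project onto
    # the first principal axis and estimate the rock count from the
    # extent relative to an assumed single-rock size.
    centred = cluster - cluster.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt[0]
    return max(1, int(np.ceil((proj.max() - proj.min()) / rock_size)))

def iterative_dbscan(points, eps=1.5, min_samples=5, depth=8):
    # Re-run DBSCAN with a tighter eps inside any segment that is
    # estimated to hold more than one rock.
    segments = []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    for lab in sorted(set(labels) - {-1}):
        cluster = points[labels == lab]
        if depth == 0 or n_rocks_estimate(cluster) == 1:
            segments.append(cluster)
        else:
            segments.extend(iterative_dbscan(cluster, eps * 0.5,
                                             min_samples, depth - 1))
    return segments

# Two synthetic "rocks" as dense 2-D point clusters, 10 units apart
rng = np.random.default_rng(0)
rocks = np.vstack([rng.normal([0.0, 0.0], 0.2, (50, 2)),
                   rng.normal([10.0, 0.0], 0.2, (50, 2))])
segments = iterative_dbscan(rocks)
```

Halving `eps` at each level is one plausible tightening schedule; the depth cap guards against segments that never satisfy the single-rock test.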

  12. On the importance of FIB-SEM specific segmentation algorithms for porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de; Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de; Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly less artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  13. Subduction of aseismic ridges beneath the Caribbean Plate: Implications for the tectonics and seismic potential of the northeastern Caribbean

    NASA Astrophysics Data System (ADS)

    McCann, William R.; Sykes, Lynn R.

    1984-06-01

    Normal seafloor entering the Puerto Rico and northern Lesser Antillean trenches in the northeastern Caribbean is interrupted by a series of aseismic ridges on the North and South American plates. These topographic features lie close to the expected trend of fracture zones created about 80-110 m.y. ago when this seafloor was formed at the Mid-Atlantic Ridge. The northernmost of the ridges that interact with the Lesser Antillean subduction zone, the Barracuda Ridge, intersects the arc in a region of high seismic activity. Some of this seismicity, including a large shock in 1974, occurs within the overthrust plate and may be related to the deformation of the Caribbean plate as it overrides the ridge. A large bathymetric high, the Main Ridge, is oriented obliquely to the Puerto Rico trench and intersects the subduction zone north of the Virgin Islands in another cluster of seismic activity along the inner wall of the trench. Data from a seismic network in the northeastern Caribbean indicate that this intersection is also characterized by both interplate and intraplate seismic activity. Magnetic anomalies, bathymetric trends, and the pattern of deformed sediments on the inner wall of the trench strongly suggest that the Main and Barracuda ridges are parts of a formerly continuous aseismic ridge, a segment of which has recently been overridden by the Caribbean plate. Reconstruction of mid-Miocene to Recent plate motions also suggests that at least two aseismic ridges, and possibly fragments of the Bahama Platform, have interacted with the subduction zone in the northeastern Caribbean. The introduction of these narrow segments of anomalous seafloor into the subduction zone has segmented the arc into elements about 200 km long. These ridges may act as tectonic barriers or asperities during the rupture processes involved in large earthquakes. They also leave a geologic imprint on segments of the arc with which they have interacted.
A 50-km landward jump of the locus of island arc volcanism occurred in Late Miocene time along the northern half of the Lesser Antilles. We postulate that the subduction of a segment of seafloor of anomalously thick crust, being more buoyant than adjacent seafloor, resulted in a marked shoaling in the dip of the descending slab and, therefore, a shift of the locus of volcanism. In the region near western Puerto Rico and eastern Hispaniola, Plio-Pleistocene interaction with a similar feature, in this case a part of the Bahama Platform, about 3-4 m.y. ago led to a jump in the locus of subduction as evidenced by a gap in the downgoing seismic zone. That segment of the Bahama Platform interfered with the subduction process and was subsequently sutured onto the Caribbean plate when the boundary jumped about 60 km to the northeast. The maximum size of historic shallow earthquakes along the Lesser Antillean arc varies from about 7.0-7.5 in the center of the arc, where the dip of the shallow part of the plate boundary is steep, to 8.0-8.5 along the northern part of the arc, where the dip is shallow. The interaction of anomalous seafloor, as along the northern Lesser Antilles, can lead to the development of a wider than normal zone of interplate contact and hence to earthquakes that are larger than those associated with more typical seafloor entering subduction zones. Major seismic gaps and regions of high seismic potential currently exist along the northern Lesser Antilles and to the north of Puerto Rico. Both gaps are bounded by anomalous features on the downgoing plate. The intersection of these features with the plate boundary created large asperities that may be good places to search for precursors to future large earthquakes. A great shock in 1787 may have ruptured an existing seismic gap north of Puerto Rico between 65° and 67°W. Thus that gap can be expected to eventually rupture again in a great shock and not to accommodate plate motion by totally aseismic processes.

  14. Improving CCTA-based lesions' hemodynamic significance assessment by accounting for partial volume modeling in automatic coronary lumen segmentation.

    PubMed

    Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran

    2017-03-01

    The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets by an automated software program followed by manual correction if required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy using the MICCAI 2012 challenge framework and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operating characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA, which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge, with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable.
In contrast, integrating PVE analysis into an automatic coronary lumen segmentation algorithm improved the flow simulation specificity from 0.6 to 0.68 with the same sensitivity of 0.83. Also, accounting for PVE improved the area under the ROC curve for detecting hemodynamically significant CAD from 0.76 to 0.8 compared to automatic segmentation without PVE analysis, with an invasive FFR threshold of 0.8 as the reference standard. Accounting for PVE in flow simulation to support the detection of hemodynamically significant disease in CCTA-based obstructive lesions improved specificity from 0.51 to 0.73 with the same sensitivity of 0.83 and the area under the curve from 0.69 to 0.79. The improvement in the AUC was statistically significant (N = 76, DeLong's test, P = 0.012). Accounting for the partial volume effects in automatic coronary lumen segmentation algorithms has the potential to improve the accuracy of CCTA-based hemodynamic assessment of coronary artery lesions. © 2017 American Association of Physicists in Medicine.
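The partial volume effect at a lumen boundary is often modeled as a linear intensity mixture. The following toy sketch (not the paper's method, whose machine-learning formulation is more involved) shows the basic idea: a boundary voxel's intensity is a blend of the lumen and background intensities, from which a lumen occupancy fraction can be recovered.

```python
import numpy as np

def lumen_fraction(voxel, lumen_intensity, background_intensity):
    """Toy partial-volume model: a boundary voxel's intensity is
    a * I_lumen + (1 - a) * I_bg; recover the lumen fraction a,
    clipped to the physically meaningful range [0, 1]."""
    a = (voxel - background_intensity) / (lumen_intensity - background_intensity)
    return float(np.clip(a, 0.0, 1.0))

# A voxel at 300 HU between a 400 HU lumen and 100 HU background
alpha = lumen_fraction(300.0, lumen_intensity=400.0, background_intensity=100.0)
```

Keeping this fractional occupancy, rather than a hard in/out label, is what lets a segmentation-driven flow simulation represent sub-voxel lumen narrowing.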

  15. AAA and AXB algorithms for the treatment of nasopharyngeal carcinoma using IMRT and RapidArc techniques.

    PubMed

    Kamaleldin, Maha; Elsherbini, Nader A; Elshemey, Wael M

    2017-09-27

    The aim of this study is to evaluate the impact of the anisotropic analytical algorithm (AAA) and 2 reporting systems (AXB-Dm and AXB-Dw) of the Acuros XB algorithm (AXB) on clinical plans of nasopharyngeal patients using intensity-modulated radiotherapy (IMRT) and RapidArc (RA) techniques. Six plans of different algorithm-technique combinations are performed for 10 patients to calculate dose-volume histogram (DVH) physical parameters for planning target volumes (PTVs) and organs at risk (OARs). The number of monitor units (MUs) and calculation time are also determined. Good coverage is reported for all algorithm-technique combination plans without exceeding the tolerance for OARs. Regardless of the algorithm, RA plans persistently reported higher D2% values for PTV-70. All IMRT plans reported a higher number of MUs (especially with AXB) than did RA plans. AAA-IMRT produced the minimum calculation time of all plans. Major differences between the investigated algorithm-technique combinations are reported only for the number of MUs and calculation time parameters. In terms of these 2 parameters, it is recommended to employ AXB in calculating RA plans and AAA in calculating IMRT plans to achieve minimum calculation times at a reduced number of MUs. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  16. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one, based on the assumption that the locally weighted combination varies w.r.t. both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in the literature, and we empirically show that it significantly improves the performance of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to the existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748

  17. SU-E-J-224: Multimodality Segmentation of Head and Neck Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristophanous, M; Yang, J; Beadle, B

    2014-06-01

    Purpose: Develop an algorithm that is able to automatically segment tumor volume in Head and Neck cancer by integrating information from CT, PET and MR imaging simultaneously. Methods: Twenty three patients that were recruited under an adaptive radiotherapy protocol had MR, CT and PET/CT scans within 2 months prior to start of radiotherapy. The patients had unresectable disease and were treated either with chemoradiotherapy or radiation therapy alone. Using the Velocity software, the PET/CT and MR (T1 weighted+contrast) scans were registered to the planning CT using deformable and rigid registration respectively. The PET and MR images were then resampled according to the registration to match the planning CT. The resampled images, together with the planning CT, were fed into a multi-channel segmentation algorithm, which is based on Gaussian mixture models and solved with the expectation-maximization algorithm and Markov random fields. A rectangular region of interest (ROI) was manually placed to identify the tumor area and facilitate the segmentation process. The auto-segmented tumor contours were compared with the gross tumor volume (GTV) manually defined by the physician. The volume difference and Dice similarity coefficient (DSC) between the manual and autosegmented GTV contours were calculated as the quantitative evaluation metrics. Results: The multimodality segmentation algorithm was applied to all 23 patients. The volumes of the auto-segmented GTV ranged from 18.4cc to 32.8cc. The average (range) volume difference between the manual and auto-segmented GTV was −42% (−32.8%–63.8%). The average DSC value was 0.62, ranging from 0.39 to 0.78. Conclusion: An algorithm for the automated definition of tumor volume using multiple imaging modalities simultaneously was successfully developed and implemented for Head and Neck cancer.
This development, along with more accurate registration algorithms, can aid physicians in interpreting the multitude of imaging information available in radiotherapy today. This project was supported by a grant from Varian Medical Systems.
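The Dice similarity coefficient used as the evaluation metric above (and in several other records in this list) has a compact definition: twice the overlap of the two masks divided by the sum of their sizes. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 |A ∩ B| / (|A| + |B|); defined as 1.0 for two empty masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 1.0 means perfect agreement and 0.0 means no overlap, which is why the reported average of 0.62 indicates only moderate agreement between manual and auto-segmented GTVs.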

  18. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality and poor automatic performance. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background along the coastline. Based on this multi-Gaussian sea background model, the sea and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results show that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
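The background-modeling step can be sketched with a Gaussian mixture fitted to sea-only feature vectors, then used to score unknown pixels. This is a hedged toy version: the feature values are synthetic stand-ins for the (local entropy, local texture, local gradient mean) triple, and the simple minimum-likelihood threshold is an invented decision rule, not the paper's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Hypothetical 3-D feature vectors (local entropy, texture, gradient mean):
sea = rng.normal([1.0, 0.5, 0.2], 0.1, (200, 3))   # smooth, homogeneous
land = rng.normal([3.0, 2.0, 1.5], 0.3, (200, 3))  # textured, high gradient

# Model the sea background only; land is whatever the model finds unlikely
gmm = GaussianMixture(n_components=2, random_state=0).fit(sea)
threshold = gmm.score_samples(sea).min()  # least likely training sea sample

is_sea = gmm.score_samples(np.vstack([sea, land])) >= threshold
```

Modeling only the sea class sidesteps the need for labeled land samples, which vary far more in appearance than open water.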

  19. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.

    PubMed

    Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2010-11-08

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis.

  20. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients

    PubMed Central

    Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2010-01-01

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. PMID:21258556

  1. Hidden Markov random field model and Broyden-Fletcher-Goldfarb-Shanno algorithm for brain image segmentation

    NASA Astrophysics Data System (ADS)

    Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane

    2018-05-01

    Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation has become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the problem of segmentation. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used to objectively compare the results obtained. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
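The "classical function minimisation" formulation can be illustrated on a toy 1-D signal: an energy with a data-fidelity term plus an MRF-style smoothness prior, minimised by BFGS over relaxed (continuous) labels. This is a minimal sketch of the idea, not the paper's HMRF formulation; the two-region signal, the weight `beta`, and the 0.5 threshold are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D "image": two regions with noisy intensities around 0 and 1
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 0.1, 30), rng.normal(1.0, 0.1, 30)])

beta = 2.0  # weight of the MRF smoothness prior

def energy(x):
    data = np.sum((x - y) ** 2)                  # fit the observations
    smooth = beta * np.sum((x[1:] - x[:-1]) ** 2)  # neighbours should agree
    return data + smooth

def energy_grad(x):
    g = 2.0 * (x - y)
    d = x[1:] - x[:-1]
    g[1:] += 2.0 * beta * d
    g[:-1] -= 2.0 * beta * d
    return g

res = minimize(energy, np.zeros_like(y), jac=energy_grad, method="BFGS")
labels = (res.x > 0.5).astype(int)  # threshold the relaxed solution
```

The quadratic energy makes this toy case convex, so BFGS finds the global minimum; the real HMRF energy over discrete labels is harder, which is why relaxations and quasi-Newton solvers are of interest.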

  2. An ATR architecture for algorithm development and testing

    NASA Astrophysics Data System (ADS)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  3. Weights and topology: a study of the effects of graph construction on 3D image segmentation.

    PubMed

    Grady, Leo; Jolly, Marie-Pierre

    2008-01-01

    Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
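    A common weighting heuristic of the kind studied here maps intensity differences to edge weights with a Gaussian. A minimal sketch for a 4-connected 2D lattice (the Gaussian form and the beta value are illustrative assumptions, not the paper's specific choice):

```python
import math

def grid_weights(image, beta=1.0):
    """Edge weights w_ij = exp(-beta * (I_i - I_j)^2) for the 4-connected
    lattice of a 2D image: similar intensities -> weight near 1,
    sharp intensity edges -> weight near 0."""
    rows, cols = len(image), len(image[0])
    weights = {}
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    diff = image[r][c] - image[nr][nc]
                    weights[((r, c), (nr, nc))] = math.exp(-beta * diff * diff)
    return weights

img = [[0.0, 0.0, 1.0],
       [0.0, 0.1, 1.0]]
w = grid_weights(img, beta=2.0)
print(w[((0, 1), (0, 2))])  # across the sharp edge -> small weight
print(w[((0, 0), (0, 1))])  # flat region -> weight 1.0
```

    Changing beta, the weighting function, or the connectivity (e.g. 8-connected instead of 4-connected) changes the graph that Graph Cuts or Random Walker then partitions, which is precisely the effect the paper studies.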

  4. Special-effect edit detection using VideoTrails: a comparison with existing techniques

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1998-12-01

    Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for the segmentation of video have appeared within the past few years. Most of these algorithms perform well on cuts but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with that of other existing special-effect edit-detection algorithms in the literature. We present results from experiments testing the ability to detect edits in TV programs, ranging from commercials to news magazine programs, including the diverse special-effect edits they contain.

  5. Lymph node segmentation by dynamic programming and active contours.

    PubMed

    Tan, Yongqiang; Lu, Lin; Bonde, Apurva; Wang, Deling; Qi, Jing; Schwartz, Lawrence H; Zhao, Binsheng

    2018-03-03

    Enlarged lymph nodes are indicators of cancer staging, and the change in their size is a reflection of treatment response. Automatic lymph node segmentation is challenging, as the boundary can be unclear and the surrounding structures complex. This work communicates a new three-dimensional algorithm for the segmentation of enlarged lymph nodes. The algorithm requires a user to draw a region of interest (ROI) enclosing the lymph node. Rays are cast from the center of the ROI, and the intersections of the rays with the boundary of the lymph node form a triangle mesh. The intersection points are determined by dynamic programming. The triangle mesh initializes an active contour, which evolves to a low-energy boundary. Three radiologists independently delineated the contours of 54 lesions from 48 patients. The Dice coefficient was used to evaluate the algorithm's performance. The mean Dice coefficient between the computer and the majority-vote results was 83.2%. The mean Dice coefficients between the three radiologists' manual segmentations were 84.6%, 86.2%, and 88.3%. The performance of this segmentation algorithm suggests its potential clinical value for quantifying enlarged lymph nodes. © 2018 American Association of Physicists in Medicine.
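    The ray-casting step can be illustrated in 2D as follows. This sketch replaces the paper's dynamic-programming intersection search with a simple largest-intensity-step rule along each ray, and the toy image is made up:

```python
import math

def boundary_points(image, center, n_rays=8, max_len=10):
    """Cast n_rays from the ROI centre; per ray, take the sample with the
    largest intensity step as the boundary point (simplified stand-in for
    the paper's dynamic-programming intersection search)."""
    cy, cx = center
    points = []
    for k in range(n_rays):
        ang = 2 * math.pi * k / n_rays
        best, best_jump = None, -1.0
        prev = image[cy][cx]
        for t in range(1, max_len):
            r = int(round(cy + t * math.sin(ang)))
            c = int(round(cx + t * math.cos(ang)))
            if not (0 <= r < len(image) and 0 <= c < len(image[0])):
                break
            jump = abs(image[r][c] - prev)
            if jump > best_jump:
                best_jump, best = jump, (r, c)
            prev = image[r][c]
        points.append(best)
    return points

# A bright 3x3 "node" centred at (3, 3) on a dark 7x7 background:
img = [[0] * 7 for _ in range(7)]
for r in range(2, 5):
    for c in range(2, 5):
        img[r][c] = 100
pts = boundary_points(img, (3, 3))
print(pts)  # eight samples ringing the bright node
```

    In the paper these per-ray intersection points become the vertices of the triangle mesh that initializes the active contour.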

  6. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied an integrative approach to automating medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received comparatively little attention from this approach, despite its considerable effect on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that takes an integrative approach to this problem. It can evolve directly from the initial level set obtained by Gaussian kernel-based fuzzy c-means (GKFCM). The controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of the image. These improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.

  7. Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun

    2018-04-01

    The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can accurately estimate the transmission quality of a link when a connection request arrives. However, the probe flow data introduced by the P-IA-RWA algorithm also cause competition for wavelength resources. In order to reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the time that network resources are held by the probe flow. It suitably segments the candidate path for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so that appropriate action can be taken to avoid unnecessary probe flows. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It provides a better solution to the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.

  8. K-Means Based Fingerprint Segmentation with Sensor Interoperability

    NASA Astrophysics Data System (ADS)

    Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun

    2010-12-01

    A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originated from a certain sensor. Thus their performances are significantly affected when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, which refers to the algorithm's ability to adapt to the raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem, and effectively address the issue by proposing a k-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image based on the k-means algorithm, where a fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated as CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results. We perform SKI on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments performed on a number of fingerprint databases which are obtained from various sensors.
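    The clustering step can be sketched with plain Lloyd's k-means on the 3-D block features. The CMV values below are hypothetical, and the naive first-k initialisation is an assumption for brevity (SKI's actual initialisation is not described here):

```python
def kmeans(points, k=2, iters=20):
    """Plain Lloyd's k-means on feature vectors (tuples of floats)."""
    def nearest(p, centers):
        return min(range(len(centers)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))

    centers = [list(points[i]) for i in range(k)]  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p, centers)].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # move each centre to its cluster mean
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, [nearest(p, centers) for p in points]

# Hypothetical (coherence, mean, variance) block vectors: two ridge-pattern
# foreground blocks (high coherence/variance) and two background blocks.
blocks = [(0.9, 120, 900), (0.85, 130, 850), (0.1, 240, 30), (0.15, 235, 40)]
centers, labels = kmeans(blocks)
print(labels)  # foreground blocks share one label, background the other
```

    In SKI proper, the cluster containing high-coherence, high-variance blocks is taken as foreground, followed by the morphological postprocessing mentioned above.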

  9. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiencies of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared qualitatively and quantitatively with the actual position of the simulated 3D tumor. In addition, our automatic method achieves performance equivalent to manual segmentation and to the interactive GrowCut with manual interference, while providing fully automatic segmentation.
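    The GrowCut automaton that this method builds on can be sketched in one dimension. This is the generic GrowCut update rule (label/strength cells, neighbours "attack" with strength attenuated by intensity difference), not the authors' improved variant; the seeds and intensities are made up:

```python
def growcut_1d(intensities, seeds, iters=10):
    """1-D GrowCut cellular automaton. seeds: {index: (label, strength)};
    unlabeled cells start with label 0 and strength 0."""
    n = len(intensities)
    max_diff = (max(intensities) - min(intensities)) or 1.0
    label = [seeds.get(i, (0, 0.0))[0] for i in range(n)]
    strength = [seeds.get(i, (0, 0.0))[1] for i in range(n)]
    g = lambda d: 1.0 - d / max_diff  # attack attenuation, monotone in |dI|
    for _ in range(iters):
        new_label, new_strength = label[:], strength[:]
        for p in range(n):
            for q in (p - 1, p + 1):  # left/right neighbours
                if 0 <= q < n:
                    attack = g(abs(intensities[p] - intensities[q])) * strength[q]
                    if attack > new_strength[p]:  # q conquers p
                        new_strength[p] = attack
                        new_label[p] = label[q]
        label, strength = new_label, new_strength
    return label

# Two seeds grow toward the intensity edge between cells 3 and 4:
sig = [10, 10, 11, 10, 60, 61, 60, 60]
print(growcut_1d(sig, {0: (1, 1.0), 7: (2, 1.0)}))  # [1, 1, 1, 1, 2, 2, 2, 2]
```

    The paper's contribution is to generate such seeds automatically (bounding box plus reflectional symmetry) instead of requiring user strokes.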

  10. Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN

    NASA Astrophysics Data System (ADS)

    Pradhan, Nandita; Sinha, A. K.

    2008-03-01

    This paper presents an efficient region-based segmentation technique for detecting pathological tissues (tumor and edema) of the brain using fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work segments FLAIR brain images into normal and pathological tissues based on statistical features and wavelet transform coefficients using the k-means algorithm. The image is divided into small blocks of 4×4 pixels. The k-means algorithm is used to cluster the image based on the feature vectors of the blocks, forming different classes that represent different regions in the whole image. With the knowledge of the feature vectors of the different segmented regions, a supervised technique is used to train an artificial neural network using the fuzzy back-propagation algorithm (FBPA). Segmentation for detecting healthy tissues and tumors has been reported by several researchers using conventional MRI sequences such as T1, T2 and PD weighted sequences. This work successfully presents segmentation of healthy and pathological tissues (both tumors and edema) using FLAIR images. Finally, pseudo-coloring of the segmented and classified regions is performed for better human visualization.

  11. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.

  12. Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas

    2016-04-01

    Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework that partly automates the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues, and using intermediate results within a patient's image study. To accomplish the segmentation, we construct an energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush-stroke data and previously approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method on an MRI data set from the Medical Image Computing and Computer Assisted Intervention Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRF and a semiautomatic method based on grow-cut, and our method shows superior performance.
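    The kind of CRF energy such a framework minimises can be illustrated with negative-log-likelihood unary terms and a Potts pairwise term. This is a textbook form, not the paper's exact potentials, and the probabilities below are hypothetical stand-ins for the brush-stroke estimates:

```python
import math

def crf_energy(probs, labels, neighbors, lam=1.0):
    """Energy of a labeling: sum of -log P(label) unaries plus a Potts
    penalty lam for every neighbouring pair with different labels.
    Graph-cut minimisation seeks the labeling with the lowest such energy."""
    unary = -sum(math.log(probs[p][labels[p]]) for p in probs)
    pairwise = lam * sum(1 for p, q in neighbors if labels[p] != labels[q])
    return unary + pairwise

# Three pixels in a chain; P(tumor) per pixel estimated e.g. from brush strokes.
probs = {0: {"tumor": 0.9, "bg": 0.1},
         1: {"tumor": 0.6, "bg": 0.4},
         2: {"tumor": 0.2, "bg": 0.8}}
chain = [(0, 1), (1, 2)]
smooth = crf_energy(probs, {0: "tumor", 1: "tumor", 2: "bg"}, chain)
noisy = crf_energy(probs, {0: "tumor", 1: "bg", 2: "bg"}, chain)
print(smooth < noisy)  # True: the coherent labeling has lower energy
```

    The pairwise weight lam trades data fidelity against spatial smoothness; in the paper the unary probabilities are refreshed per slice from user strokes and previously approved slices.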

  13. Myocardial scar segmentation from magnetic resonance images using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zabihollahy, Fatemeh; White, James A.; Ukwatta, Eranga

    2018-02-01

    Accurate segmentation of myocardial fibrosis or scar may provide important advancements for the prediction and management of malignant ventricular arrhythmias in patients with cardiovascular disease. In this paper, we propose a semi-automated method for segmentation of myocardial scar from late gadolinium enhancement magnetic resonance images (LGE-MRI) using a convolutional neural network (CNN). In contrast to image intensity-based methods, CNN-based algorithms have the potential to improve the accuracy of scar segmentation through the creation of high-level features from a combination of convolutional, detection and pooling layers. Our developed algorithm was trained using 2,336,703 image patches extracted from 420 slices of five 3D LGE-MR datasets, then validated on 2,204,178 patches from a testing dataset of seven 3D LGE-MR images comprising 624 slices, all obtained from patients with chronic myocardial infarction. For evaluation of the algorithm, we compared the algorithm-generated segmentations to manual delineations by experts. Our CNN-based method achieved an average Dice similarity coefficient (DSC), precision, and recall of 94.50 ± 3.62%, 96.08 ± 3.10%, and 93.96 ± 3.75%, respectively. Compared to several intensity threshold-based methods for scar segmentation, the results of our developed method show greater agreement with manual expert segmentation.

  14. Localization of the transverse processes in ultrasound for spinal curvature measurement

    NASA Astrophysics Data System (ADS)

    Kamali, Shahrokh; Ungi, Tamas; Lasso, Andras; Yan, Christina; Lougheed, Matthew; Fichtinger, Gabor

    2017-03-01

    PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks such as transverse processes, but as bones have reduced visibility in ultrasound imaging, skeletal landmarks are typically segmented manually, which is an exceedingly laborious and long process. We propose an automatic algorithm to segment and localize the surface of bony areas of the transverse processes for scoliosis assessment in ultrasound. METHODS: The algorithm uses a cascade of filters to remove low-intensity pixels, smooth the image and detect bony edges. By applying a first derivative, candidate bony areas are classified. The average intensity under each area correlates with the likelihood of a shadow, and areas with strong shadows are kept as bone segmentations. The segmented images are used to reconstruct a 3-D volume representing the whole spinal structure around the transverse processes. RESULTS: A comparison between the manual ground-truth segmentation and the automatic algorithm on 50 images showed a 0.17 mm average difference. The time to process all 1,938 images was about 37 s (0.0191 s/image), including reading the original sequence file. CONCLUSION: Initial experiments showed the algorithm to be sufficiently accurate and fast for segmenting transverse processes in ultrasound for spinal curvature measurement. An extensive evaluation of the method is currently underway on images from a larger patient cohort and using multiple observers to produce ground-truth segmentations.
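    The shadow test can be illustrated as follows: acoustic shadowing below a true bone surface makes the mean intensity under the candidate low. This is one plausible reading of the abstract's description; the image and values are made up:

```python
def shadow_score(image, row, col):
    """Mean intensity in the column below a candidate bone pixel.
    Acoustic shadowing makes this low under true bone surfaces,
    so low scores support keeping the candidate as bone."""
    below = [image[r][col] for r in range(row + 1, len(image))]
    return sum(below) / len(below) if below else 0.0

# Toy B-mode layout: column 0 has a bright bone echo at row 1 with a dark
# shadow beneath it; column 2 is soft tissue with no shadow.
img = [[40, 30, 50],
       [200, 35, 55],  # bright candidate at (1, 0)
       [5, 32, 60],
       [4, 31, 58]]
print(shadow_score(img, 1, 0) < shadow_score(img, 1, 2))  # True: (1, 0) is shadowed
```

    In the full algorithm this score is computed per candidate area rather than per pixel, and areas with strong shadows survive into the 3-D reconstruction.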

  15. The geochemistry of host arc volcanic rocks to the Co-O epithermal gold deposit, Eastern Mindanao, Philippines

    NASA Astrophysics Data System (ADS)

    Sonntag, Iris; Kerrich, Robert; Hagemann, Steffen G.

    2011-12-01

    Mindanao is the second largest island of the Philippines and is located in the southern part of the archipelago. It comprises the suture zone between the Eurasian and the Philippine plate, which is displayed in the Philippine Mobile Belt. Eastern Mindanao is part of the Philippine Mobile Belt and outcropping rocks are mainly Eocene to Pliocene in age related to episodes of arc volcanism alternating with sedimentation. New high-precision elemental analysis of the Oligocene magma series, hosting the Co-O epithermal Au deposit, which represents an arc segment in the central part of Eastern Mindanao, revealed dominantly calc-alkaline rocks ranging in composition between basalt and dacites. Major element trends (MgO vs. TiO2 and Fe2O3) are comparable to other magmas in Central and Eastern Mindanao as well as other SW Pacific Islands such as Borneo. Rare earth and trace element distribution patterns display typical island arc signatures highlighted by the conjunction of LILE-enrichment with troughs at Nb, Ta, and Ti. Ratios of Zr/Nb in basalts vary between 17 and 39, signifying a depleted subarc mantle wedge comparable to the range of MORB, and other Indonesian island arc basalts. In basalts, Nb/Ta and Zr/Sm ratios are 12-37 and 14-27 respectively indicative of deep melts of rutile-eclogite subducted slab, as well as fluids, infiltrating the mantle wedge source of basalts. Moderate large ion lithophile element contents and low Th/La and Th/Ce ratios suggest no significant slab-derived components such as sediment or crustal fragments. The comparatively low Ce and Yb values in basalts, but also andesites and dacites, are consistent with a thin arc crust related to an intraoceanic convergent margin setting. This is further supported by Nb contents in basalts that range between 1 and 3 ppm and are within the range of modern oceanic convergent margin basalts. 
The range of HREE fractionation signifies that basaltic melts separated at deeper levels of the subarc wedge, possibly between the forearc and arc axis, followed by a calc-alkaline convergent margin magma suite involving shallower crustal AFC near the central arc sector. The analysed Oligocene arc segment is related to a potentially steep to intermediate dipping subduction zone in an extensional to neutral geotectonic regime. The large subduction accretion complex of the Philippine Mobile Belt provides an ideal setting for significant metal deposits during its entire evolution. This is evidenced in the Eastern Mindanao Ridge, which hosts substantial porphyry Cu and epithermal Au deposits.

  16. Graph cuts and neural networks for segmentation and porosity quantification in Synchrotron Radiation X-ray μCT of an igneous rock sample.

    PubMed

    Meneses, Anderson Alvarenga de Moura; Palheta, Dayara Bastos; Pinheiro, Christiano Jorge Gomes; Barroso, Regina Cely Rodrigues

    2018-03-01

    X-ray synchrotron radiation micro-computed tomography (SR-µCT) allows better visualization in three dimensions at a higher spatial resolution, contributing to the discovery of aspects that could not be observed through conventional radiography. The automatic segmentation of SR-µCT scans is highly valuable due to its innumerable applications in the geological sciences, especially for the morphology, typology, and characterization of rocks. For a great number of µCT scan slices, a manual segmentation process would be impractical, both for the time expended and for the accuracy of the results. Aiming at the automatic segmentation of SR-µCT geological sample images, we applied and compared energy minimization via graph cuts (GC) algorithms and artificial neural networks (ANNs), as well as the well-known k-means and fuzzy c-means algorithms. The Dice similarity coefficient (DSC), sensitivity and precision were the metrics used for comparison. Kruskal-Wallis and Dunn's tests were applied, and the best methods were the GC algorithms and ANNs (with Levenberg-Marquardt and Bayesian regularization). For those algorithms, an approximate Dice similarity coefficient of 95% was achieved. Our results confirm that those algorithms can be used for segmentation and subsequent quantification of porosity in an igneous rock sample SR-µCT scan. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Segmentation Approach Towards Phase-Contrast Microscopic Images of Activated Sludge to Monitor the Wastewater Treatment.

    PubMed

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Lai, Koon Chun

    2017-12-01

    Image processing and analysis is an effective tool for monitoring and fault diagnosis of activated sludge (AS) wastewater treatment plants. AS images comprise flocs (microbial aggregates) and filamentous bacteria. In this paper, nine different approaches are proposed for the image segmentation of phase-contrast microscopic (PCM) images of AS samples. The proposed strategies are assessed for their effectiveness with respect to the microscopic artifacts associated with PCM. The first approach uses an algorithm based on the idea that color space representations other than red-green-blue may offer better contrast. The second uses an edge detection approach. The third strategy employs a clustering algorithm for the segmentation, and the fourth applies local adaptive thresholding. The fifth technique is based on texture-based segmentation, and the sixth uses the watershed algorithm. The seventh adopts a split-and-merge approach. The eighth employs Kittler's thresholding. Finally, the ninth uses a top-hat and bottom-hat filtering-based technique. The approaches are assessed and analyzed critically with reference to the artifacts of PCM. Gold approximations of ground-truth images are prepared to assess the segmentations. Overall, the edge detection-based approach exhibits the best results in terms of accuracy, and the texture-based algorithm in terms of false negative ratio. The scenarios in which the edge detection and texture-based algorithms are each suitable are explained.

  18. GPU-based relative fuzzy connectedness image segmentation.

    PubMed

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  19. GPU-based relative fuzzy connectedness image segmentation

    PubMed Central

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  20. A dynamic fuzzy genetic algorithm for natural image segmentation using adaptive mean shift

    NASA Astrophysics Data System (ADS)

    Arfan Jaffar, M.

    2017-01-01

    In this paper, a colour image segmentation approach based on the hybridisation of adaptive mean shift (AMS), fuzzy c-means and genetic algorithms (GAs) is presented. Image segmentation is the perceptual grouping of pixels based on some likeness measure. A GA with fuzzy behaviour is adapted to maximise the fuzzy separation and minimise the global compactness among the clusters or segments in spatial fuzzy c-means (sFCM). It adds diversity to the search process to find the global optima. A simple fusion method has been used to combine the clusters to overcome the problem of over-segmentation. The results show that our technique outperforms state-of-the-art methods.

  1. Ambient occlusion - A powerful algorithm to segment shell and skeletal intrapores in computed tomography data

    NASA Astrophysics Data System (ADS)

    Titschack, J.; Baum, D.; Matsuyama, K.; Boos, K.; Färber, C.; Kahl, W.-A.; Ehrig, K.; Meinel, D.; Soriano, C.; Stock, S. R.

    2018-06-01

    During the last decades, X-ray (micro-)computed tomography has gained increasing attention for the description of porous skeletal and shell structures of various organism groups. However, their quantitative analysis is often hampered by the difficulty to discriminate cavities and pores within the object from the surrounding region. Herein, we test the ambient occlusion (AO) algorithm and newly implemented optimisations for the segmentation of cavities (implemented in the software Amira). The segmentation accuracy is evaluated as a function of (i) changes in the ray length input variable, and (ii) the usage of AO (scalar) field and other AO-derived (scalar) fields. The results clearly indicate that the AO field itself outperforms all other AO-derived fields in terms of segmentation accuracy and robustness against variations in the ray length input variable. The newly implemented optimisations improved the AO field-based segmentation only slightly, while the segmentations based on the AO-derived fields improved considerably. Additionally, we evaluated the potential of the AO field and AO-derived fields for the separation and classification of cavities as well as skeletal structures by comparing them with commonly used distance-map-based segmentations. For this, we tested the zooid separation within a bryozoan colony, the stereom classification of an ophiuroid tooth, the separation of bioerosion traces within a marble block and the calice (central cavity)-pore separation within a dendrophyllid coral. The obtained results clearly indicate that the ideal input field depends on the three-dimensional morphology of the object of interest. The segmentations based on the AO-derived fields often provided cavity separations and skeleton classifications that were superior to or impossible to obtain with commonly used distance-map-based segmentations. 
The combined usage of various AO-derived fields by supervised or unsupervised segmentation algorithms might provide a promising target for future research to further improve the results for this kind of high-end data segmentation and classification. Furthermore, the application of the developed segmentation algorithm is not restricted to X-ray (micro-)computed tomographic data but may potentially be useful for the segmentation of 3D volume data from other sources.

  2. Segmentation of ribs in digital chest radiographs

    NASA Astrophysics Data System (ADS)

    Cong, Lin; Guo, Wei; Li, Qiang

    2016-03-01

    Ribs and clavicles in posterior-anterior (PA) digital chest radiographs often overlap with lung abnormalities such as nodules and can cause these abnormalities to be missed; it is therefore desirable to remove or reduce the ribs in chest radiographs. The purpose of this study was to develop a fully automated algorithm to segment the ribs within the lung area in digital radiography (DR) for rib removal. The rib segmentation algorithm consists of three steps. First, a radiograph was pre-processed for contrast adjustment and noise removal; second, a generalized Hough transform was employed to localize the lower boundaries of the ribs. In the third step, a novel bilateral dynamic programming algorithm was used to accurately segment the upper and lower boundaries of the ribs simultaneously. The width of the ribs and the smoothness of the rib boundaries were incorporated in the cost function of the bilateral dynamic programming to obtain consistent results for the upper and lower boundaries. Our database consisted of 93 DR images: 23 and 70 images acquired with DR systems from Shanghai United-Imaging Healthcare Co. and GE Healthcare Co., respectively. The rib localization algorithm achieved a sensitivity of 98.2% with 0.1 false positives per image. The accuracy of the detected ribs was further evaluated subjectively at three levels: "1", good; "2", acceptable; "3", poor. The percentages of good, acceptable, and poor segmentation results were 91.1%, 7.2%, and 1.7%, respectively. Our algorithm obtains good segmentation results for ribs in chest radiographs and will be useful for rib reduction in our future study.
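    The dynamic-programming boundary tracking underlying such methods can be sketched for a single boundary: pick one row per column so that edge cost plus a smoothness penalty on row jumps is minimal. This is the generic single-boundary form, not the paper's bilateral version with the width term, and the cost map is made up:

```python
def dp_boundary(cost, smooth=1.0):
    """Min-cost row-per-column path through a cost map (low cost = strong
    edge), with smooth * |row jump| penalising non-smooth boundaries."""
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0] for r in range(rows)]]  # accumulated cost, column 0
    back = []                                  # backpointers per column
    for c in range(1, cols):
        col_acc, col_back = [], []
        for r in range(rows):
            prev = min(range(rows),
                       key=lambda pr: acc[-1][pr] + smooth * abs(pr - r))
            col_acc.append(cost[r][c] + acc[-1][prev] + smooth * abs(prev - r))
            col_back.append(prev)
        acc.append(col_acc)
        back.append(col_back)
    r = min(range(rows), key=lambda rr: acc[-1][rr])
    path = [r]
    for c in range(cols - 2, -1, -1):  # backtrack from the last column
        r = back[c][r]
        path.append(r)
    return path[::-1]

# Strong edges sit on row 1 except for one noisy column (index 2); a large
# enough smoothness weight keeps the boundary from chasing the outlier.
cost = [[9, 9, 0, 9],
        [0, 0, 9, 0],
        [3, 9, 9, 9]]
print(dp_boundary(cost, smooth=5.0))  # [1, 1, 1, 1]
```

    The bilateral version in the paper couples two such paths (upper and lower rib boundaries) through a shared cost function including the rib width.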

  3. Automatic segmentation of tumor-laden lung volumes from the LIDC database

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2012-03-01

    The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding from consideration extra-pulmonary tissue. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
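
    The first stages of such a pipeline (intensity thresholding, flood-fill-style background removal, morphological cleanup) can be sketched with `scipy.ndimage`. The threshold value and structuring element below are illustrative assumptions, not the paper's tuned parameters, and this 2-D sketch omits the 3-D operations and snake-based clipping.

```python
import numpy as np
from scipy import ndimage

def rough_lung_mask(ct_slice, air_threshold=-400):
    """Sketch of the early steps: threshold for air-like voxels, discard
    air connected to the image border (the flood-fill idea), and close
    small gaps in the remaining lung regions."""
    air = ct_slice < air_threshold                    # lungs + outside air
    labels, _ = ndimage.label(air)
    # any air component touching the border is background, not lung
    border_labels = np.unique(np.concatenate([
        labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    lungs = air & ~np.isin(labels, border_labels)
    # close small holes from vessels and noise
    return ndimage.binary_closing(lungs, structure=np.ones((3, 3)))
```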

  4. Semi-automatic 3D lung nodule segmentation in CT using dynamic programming

    NASA Astrophysics Data System (ADS)

    Sargent, Dustin; Park, Sun Young

    2017-02-01

    We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation or similar tasks use region-growing or edge-based contour finding methods such as level-set. However, lung nodules and other lesions are often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires a user to draw a maximal diameter across the nodule in the slice in which the nodule cross section is the largest. We report the lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.

  5. Lymph node segmentation on CT images by a shape model guided deformable surface method

    NASA Astrophysics Data System (ADS)

    Maleike, Daniel; Fabel, Michael; Tetzlaff, Ralf; von Tengg-Kobligk, Hendrik; Heimann, Tobias; Meinzer, Hans-Peter; Wolf, Ivo

    2008-03-01

    With many tumor entities, quantitative assessment of lymph node growth over time is important to make therapy choices or to evaluate new therapies. The clinical standard is to document diameters on transversal slices, which is not the best measure for a volume. We present a new algorithm to segment (metastatic) lymph nodes and evaluate the algorithm with 29 lymph nodes in clinical CT images. The algorithm is based on a deformable surface search, which uses statistical shape models to restrict free deformation. To model lymph nodes, we construct an ellipsoid shape model, which strives for a surface with strong gradients and user-defined gray values. The algorithm is integrated into an application, which also allows interactive correction of the segmentation results. The evaluation shows that the algorithm gives good results in the majority of cases and is comparable to time-consuming manual segmentation. The median volume error was 10.1% of the reference volume before and 6.1% after manual correction. Integrated into an application, it is possible to perform lymph node volumetry for a whole patient within the 10 to 15 minutes time limit imposed by clinical routine.

  6. Iterative cross section sequence graph for handwritten character segmentation.

    PubMed

    Dawoud, Amer

    2007-08-01

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.
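
    The iterative-thresholding idea, binarizing at several equally spaced levels rather than once, can be sketched as follows. The level count, and the use of connected-component counts as the per-level summary, are illustrative assumptions rather than the ICSSG procedure itself.

```python
import numpy as np
from scipy import ndimage

def multithreshold_components(gray, n_levels=4):
    """Binarize at equally spaced thresholds between min and max and count
    connected components at each level; iterating over thresholds exposes
    structure that a single binarization can merge or lose."""
    lo, hi = float(gray.min()), float(gray.max())
    thresholds = np.linspace(lo, hi, n_levels + 2)[1:-1]  # interior levels
    results = []
    for t in thresholds:
        _, n_components = ndimage.label(gray >= t)
        results.append((float(t), int(n_components)))
    return results
```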

  7. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is first compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images consumes significantly fewer hardware resources on the FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.

  8. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.

  9. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
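
    The longest-common-subsequence scoring idea can be sketched as follows. Representing the trajectory segment and the candidate path as sequences of discrete identifiers (e.g. map-matched road-link IDs), and normalizing by the shorter sequence, are illustrative assumptions.

```python
def lcs_similarity(seg_points, path_points):
    """Similarity between a trajectory segment and a candidate path:
    length of their longest common subsequence (classic O(m*n) dynamic
    program), normalized by the shorter sequence."""
    m, n = len(seg_points), len(path_points)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if seg_points[i - 1] == path_points[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / min(m, n)
```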

  10. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE PAGES

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    2017-01-01

    With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.

  11. Performance evaluation of automated segmentation software on optical coherence tomography volume data

    PubMed Central

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E.

    2016-01-01

    Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, no appropriate OCT dataset with ground truth exists that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation has usually been performed by comparison with the manual labelings of each individual study, without a common ground truth. As a result, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of the retinal tissue on OCT images. It also evaluates and compares the performance of these software tools against a common ground truth. PMID:27159849
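
    With a common ground truth, pairwise comparison of algorithms becomes straightforward with a standard overlap score such as the Dice coefficient; this sketch assumes binary segmentation masks.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentations:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```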

  12. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
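
    The final surface-fitting step, interpolating a thin plate spline through the retained fissure particles, can be sketched with SciPy's RBF interpolator, which provides a thin-plate-spline kernel. The particle coordinates below are synthetic stand-ins for detected fissure locations.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical fissure particle positions: (x, y) locations with height z.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(30, 2))
z = np.sin(xy[:, 0] / 3.0) + 0.1 * xy[:, 1]

# Thin-plate-spline surface through the particles (zero smoothing means
# the surface interpolates the particle heights exactly).
tps = RBFInterpolator(xy, z, kernel="thin_plate_spline")
z_fit = tps(xy)
```

    With noisy particles one would instead set a positive `smoothing` so the surface approximates rather than interpolates.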

  13. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  14. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    NASA Astrophysics Data System (ADS)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images of twenty-five thermally ablated lesions created in porcine livers. The estimated areas are compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability, and the processing time per image is significantly reduced.
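
    The seed-initialized growth step can be illustrated with a simple intensity-based region grower; this is a stand-in for the fast marching initialization, with an illustrative tolerance parameter, and deliberately omits the level-set refinement.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=10.0):
    """Grow a region from a user-planted seed, accepting 4-connected
    neighbors whose intensity is within `tolerance` of the seed value."""
    seed_val = float(image[seed])
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tolerance):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```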

  15. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
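
    The global thresholding that the transformed image is handed to, Otsu's method, can be sketched directly from the histogram: pick the split that maximizes the between-class variance.

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: choose the threshold maximizing the between-class
    variance w0*w1*(mu0 - mu1)^2 of the intensity histogram."""
    hist, edges = np.histogram(values, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0, sum0 = 0.0, 0.0
    for i in range(n_bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t
```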

  16. Autonomous Unmanned Aerial Vehicle Rendezvous for Automated Aerial Refueling

    DTIC Science & Technology

    2007-03-01

    represents a straight line segment. It can be seen that there are ten possible combinations of arcs and line segments (RSR, RSL, LSR, LSL, LRL, RLR, SLR, SRL, RLS, and LRS). However, L. E. Dubins proved that only these six sequences are possibly optimal: RSR, RSL, LSR, LSL, LRL, and RLR [Dubins 1957]. ... From Figure 2-5 and Figure 2-6, it can be seen that the last two cases, RLR and LRL, can only be optimal when the initial point and the terminal
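
    The path lengths for the two simplest of these words (LSL and RSR) can be sketched from the tangent-circle geometry; unit turning radius and the standard Dubins conventions (heading measured from the x-axis, left-turn center 90° left of the heading) are assumed.

```python
import math

def _mod2pi(a):
    return a % (2.0 * math.pi)

def lsl_length(p0, h0, p1, h1, r=1.0):
    """Left-Straight-Left Dubins word: arcs around the two left-turn
    circle centers joined by the line between the centers."""
    c0 = (p0[0] - r * math.sin(h0), p0[1] + r * math.cos(h0))
    c1 = (p1[0] - r * math.sin(h1), p1[1] + r * math.cos(h1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    theta = math.atan2(dy, dx)
    return r * (_mod2pi(theta - h0) + _mod2pi(h1 - theta)) + math.hypot(dx, dy)

def rsr_length(p0, h0, p1, h1, r=1.0):
    """Right-Straight-Right Dubins word (mirror image of LSL)."""
    c0 = (p0[0] + r * math.sin(h0), p0[1] - r * math.cos(h0))
    c1 = (p1[0] + r * math.sin(h1), p1[1] - r * math.cos(h1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    theta = math.atan2(dy, dx)
    return r * (_mod2pi(h0 - theta) + _mod2pi(theta - h1)) + math.hypot(dx, dy)
```

    For two configurations already aligned along the x-axis, both words degenerate to the straight-line distance.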

  17. [Surgical treatment of cancer of the left colon. True left hemicolectomy or segmental colectomy?].

    PubMed

    Rouffet, F; Fontaine, M; Zerbib, J J; Mathon, C

    1988-12-01

    Non-metastatic cancer of the left colon is still an exclusively surgical problem in 1988. The problem is to determine which type of colectomy should be performed: either a true left hemicolectomy, a long but apparently oncologically satisfactory operation, or segmental colectomy. A recent study by A.R.C. reported the same 5-year survival for these two types of operation with essentially identical postoperative mortality and morbidity. This conclusion confirms that of many studies published on this subject.

  18. Electromagnetic MUSIC-type imaging of perfectly conducting, arc-like cracks at single frequency

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-11-01

    We propose a non-iterative MUSIC (MUltiple SIgnal Classification)-type algorithm for the time-harmonic electromagnetic imaging of one or more perfectly conducting, arc-like cracks found within a homogeneous space R2. The algorithm is based on a factorization of the Multi-Static Response (MSR) matrix collected in the far-field at a single, nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition), followed by the calculation of a MUSIC cost functional expected to exhibit peaks every half wavelength along the crack curves. Numerical experimentation with exact, noiseless and noisy data shows that this is indeed the case and that the proposed algorithm behaves in a robust manner, with better results in the TM mode than in the TE mode, for which one would have to estimate the normal to the crack to obtain the best results.
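
    The generic MUSIC recipe described above, factorize the response matrix, project candidate steering vectors onto the noise subspace, and look for peaks of the cost functional, can be sketched on synthetic data. The rank-1 far-field model and uniform-linear-array steering vectors below are illustrative stand-ins for the electromagnetic MSR setup.

```python
import numpy as np

n_sensors, true_idx = 8, 23
grid = np.linspace(-1.0, 1.0, 61)                  # candidate locations
# synthetic steering vectors, one column per candidate location
A = np.exp(1j * np.pi * np.outer(np.arange(n_sensors), grid))
# rank-1 stand-in for the MSR matrix: one point-like scatterer at true_idx
msr = np.outer(A[:, true_idx], A[:, true_idx].conj())

U, s, _ = np.linalg.svd(msr)
noise = U[:, 1:]                                   # noise subspace (rank-1 signal)
proj = np.linalg.norm(noise.conj().T @ A, axis=0)  # projection onto noise subspace
pseudo = 1.0 / (proj ** 2 + 1e-12)                 # MUSIC pseudospectrum
```

    The pseudospectrum blows up exactly where the candidate steering vector lies in the signal subspace, i.e. at the true scatterer location.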

  19. Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping

    NASA Astrophysics Data System (ADS)

    Ignakov, Dmitri

    A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.
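
    The algebraic-distance idea can be sketched for an axis-aligned ellipsoid; a fitted ellipsoid would in general also carry a rotation, so this is a simplified illustration with hypothetical center and semi-axes.

```python
import numpy as np

def algebraic_distances(points, center, axes):
    """Algebraic distance of 3D points from an axis-aligned ellipsoid with
    the given center and semi-axes: negative inside, zero on the surface,
    positive outside. The distribution of these values can then score how
    likely a point is to belong to the object versus the background."""
    scaled = (np.asarray(points, dtype=float) - np.asarray(center, dtype=float)) \
        / np.asarray(axes, dtype=float)
    return (scaled ** 2).sum(axis=1) - 1.0
```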

  20. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
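
    For orientation, the crisp LVQ1 baseline that these soft algorithms generalize can be sketched in a few lines; the learning rate is an illustrative assumption, and the paper's variants replace the hard winner-take-all step with ordered weighted aggregation over all prototypes.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 update: move the nearest prototype toward the sample if
    their labels agree, away from it otherwise. Modifies `prototypes`
    in place and returns the winner index."""
    d = np.linalg.norm(prototypes - x, axis=1)
    w = int(np.argmin(d))
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] = prototypes[w] + sign * lr * (x - prototypes[w])
    return w
```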

  1. Clustering Of Left Ventricular Wall Motion Patterns

    NASA Astrophysics Data System (ADS)

    Bjelogrlic, Z.; Jakopin, J.; Gyergyek, L.

    1982-11-01

    A method for the detection of wall regions with similar motion is presented. A model based on local direction information was used to measure the left ventricular wall motion from a cineangiographic sequence. Three time functions were used to define segmental motion patterns: the distance of a ventricular contour segment from the mean contour, the velocity of a segment, and its acceleration. Motion patterns were clustered by the UPGMA algorithm and by an algorithm based on the K-nearest neighbor classification rule.
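
    UPGMA is average-linkage hierarchical clustering, available directly in SciPy; the feature vectors below are hypothetical motion-pattern summaries (e.g. distance/velocity/acceleration statistics per wall segment).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-segment motion features: two well-separated groups.
patterns = np.array([[0.0, 0.1], [0.1, 0.0], [0.05, 0.05],
                     [5.0, 5.1], [5.1, 5.0]])

# UPGMA = average linkage on pairwise Euclidean distances.
tree = linkage(patterns, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")  # cut into 2 clusters
```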

  2. Knee cartilage extraction and bone-cartilage interface analysis from 3D MRI data sets

    NASA Astrophysics Data System (ADS)

    Tamez-Pena, Jose G.; Barbu-McInnis, Monica; Totterman, Saara

    2004-05-01

    This work presents a robust methodology for the analysis of the knee joint cartilage and the knee bone-cartilage interface from fused MRI sets. The proposed approach starts by fusing a set of two 3D MR images of the knee. Although the proposed method is not pulse-sequence dependent, the first sequence should be programmed to achieve good contrast between bone and cartilage. The recommended second pulse sequence is one that maximizes the contrast between cartilage and surrounding soft tissues. Once both pulse sequences are fused, the proposed bone-cartilage analysis is done in four major steps. First, an unsupervised segmentation algorithm is used to extract the femur, the tibia, and the patella. Second, a knowledge-based feature extraction algorithm is used to extract the femoral, tibial and patellar cartilages. Third, a trained user corrects cartilage misclassifications made by the automated cartilage extraction. Finally, the segmentation is revisited using an unsupervised MAP voxel relaxation algorithm. This final segmentation has the property that it includes the extracted bone tissue as well as all the cartilage tissue, an improvement over previous approaches in which only the cartilage was segmented. Furthermore, this approach yields very reproducible segmentation results in a set of scan-rescan experiments. When these segmentations were coupled with a partial-volume-compensated surface extraction algorithm, the volume, area, and thickness measurements showed precisions of around 2.6%.
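
    A scan-rescan precision figure of this kind can be computed as a root-mean-square coefficient of variation across repeated measurements; this sketch assumes one row per subject and one column per scan, which is a common but not universal convention.

```python
import numpy as np

def scan_rescan_precision(measurements):
    """Root-mean-square coefficient of variation (%) across repeated
    scan-rescan measurements (rows = subjects, columns = scans)."""
    m = np.asarray(measurements, dtype=float)
    cv = m.std(axis=1, ddof=1) / m.mean(axis=1)  # per-subject CV
    return 100.0 * np.sqrt(np.mean(cv ** 2))
```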

  3. Automated mammographic breast density estimation using a fully convolutional network.

    PubMed

    Lee, Juhun; Nishikawa, Robert M

    2018-03-01

    The purpose of this study was to develop a fully automated algorithm for mammographic breast density estimation using deep learning. Our algorithm used a fully convolutional network, which is a deep learning framework for image segmentation, to segment both the breast and the dense fibroglandular areas on mammographic images. Using the segmented breast and dense areas, our algorithm computed the breast percent density (PD), which is the fraction of dense area in a breast. Our dataset included full-field digital screening mammograms of 604 women, which included 1208 mediolateral oblique (MLO) and 1208 craniocaudal (CC) views. We allocated 455, 58, and 91 of 604 women and their exams into training, testing, and validation datasets, respectively. We established ground truth for the breast and the dense fibroglandular areas via manual segmentation and segmentation using a simple thresholding based on BI-RADS density assessments by radiologists, respectively. Using the mammograms and ground truth, we fine-tuned a pretrained deep learning network to train the network to segment both the breast and the fibroglandular areas. Using the validation dataset, we evaluated the performance of the proposed algorithm against radiologists' BI-RADS density assessments. Specifically, we conducted a correlation analysis between a BI-RADS density assessment of a given breast and its corresponding PD estimate by the proposed algorithm. In addition, we evaluated our algorithm in terms of its ability to classify the BI-RADS density using PD estimates, and its ability to provide consistent PD estimates for the left and the right breast and the MLO and CC views of the same women. To show the effectiveness of our algorithm, we compared the performance of our algorithm against a state-of-the-art algorithm, the Laboratory for Individualized Breast Radiodensity Assessment (LIBRA). The PD estimated by our algorithm correlated well with BI-RADS density ratings by radiologists.
Pearson's rho values of our algorithm for CC view, MLO view, and CC-MLO-averaged were 0.81, 0.79, and 0.85, respectively, while those of LIBRA were 0.58, 0.71, and 0.69, respectively. For CC view and CC-MLO averaged cases, the difference in rho values between the proposed algorithm and LIBRA showed statistical significance (P < 0.006). In addition, our algorithm provided reliable PD estimates for the left and the right breast (Pearson's ρ > 0.87) and for the MLO and CC views (Pearson's ρ = 0.76). However, LIBRA showed a lower Pearson's rho value (0.66) for both the left and right breasts for the CC view. In addition, our algorithm showed an excellent ability to separate each sub BI-RADS breast density class (statistically significant, p-values = 0.0001 or less); only one comparison pair, density 1 and density 2 in the CC view, was not statistically significant (P = 0.54). However, LIBRA failed to separate breasts in density 1 and 2 for both the CC and MLO views (P > 0.64). We have developed a new deep learning based algorithm for breast density segmentation and estimation. We showed that the proposed algorithm correlated well with BI-RADS density assessments by radiologists and outperformed an existing state of the art algorithm. © 2018 American Association of Physicists in Medicine.
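
    The percent-density definition used above is a simple ratio of segmented areas; this sketch assumes binary breast and dense-tissue masks.

```python
import numpy as np

def percent_density(breast_mask, dense_mask):
    """Breast percent density: the fraction of the segmented breast area
    covered by the segmented dense (fibroglandular) area, in percent."""
    breast = np.count_nonzero(breast_mask)
    if breast == 0:
        raise ValueError("empty breast mask")
    dense = np.count_nonzero(np.logical_and(breast_mask, dense_mask))
    return 100.0 * dense / breast
```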

  4. A graph theoretic approach to scene matching

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1991-01-01

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
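
    On a small association graph the best-match criterion can be checked by brute force; the node and arc weights below are hypothetical, and the paper's fuzzy relaxation exists precisely because this enumeration does not scale to realistic graphs.

```python
from itertools import combinations

def best_clique(node_weights, arc_weights):
    """Brute-force search for the maximum-weight clique in a small
    association graph. Keys of node_weights are (region, object) mapping
    ids with merit scores; arc_weights maps frozenset pairs of compatible
    mappings to compatibility scores (absent pair = incompatible)."""
    nodes = list(node_weights)
    best, best_score = [], float("-inf")
    for k in range(1, len(nodes) + 1):
        for subset in combinations(nodes, k):
            pairs = list(combinations(subset, 2))
            # a clique requires every pair of members to be compatible
            if any(frozenset(p) not in arc_weights for p in pairs):
                continue
            score = (sum(node_weights[n] for n in subset)
                     + sum(arc_weights[frozenset(p)] for p in pairs))
            if score > best_score:
                best, best_score = list(subset), score
    return best, best_score
```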

  5. A Paleozoic Japan-type subduction-accretion system in the Beishan orogenic collage, southern Central Asian Orogenic Belt

    NASA Astrophysics Data System (ADS)

    Song, Dongfang; Xiao, Wenjiao; Windley, Brian F.; Han, Chunming; Tian, Zhonghua

    2015-05-01

    Magmatic arcs ascribed to oceanic lithosphere subduction played a dominant role in the construction of the accretionary Central Asian Orogenic Belt (CAOB). The Beishan orogenic collage, situated between the Tianshan Orogen to the west and the Inner Mongolia Orogen to the east, is a key area to understanding the subduction and accretionary processes of the southern CAOB. However, the nature of magmatic arcs in the Beishan and the correlation among different tectonic units along the southern CAOB are highly ambiguous. In order to investigate the subduction-accretion history of the Beishan and to better establish the spatial and temporal relationships among the tectonic belts along the southern CAOB, we carried out detailed field-based structural geology and LA-ICP-MS zircon U-Pb geochronological as well as geochemical studies along four cross-sections across crucial litho-tectonic units in the central segment of the Beishan, mainly focusing on the metamorphic assemblages and associated plutons and volcanic rocks. The results show that both the plutonic and volcanic rocks have geochemical characteristics similar to those of subduction-related rocks, which favors a volcanic arc setting. Zircons from all the plutonic rocks yield Phanerozoic ages and the plutons have crystallization ages ranging from 464 ± 2 Ma to 398 ± 3 Ma. Two volcanic-sedimentary rocks yield zircons with a wide age range from Phanerozoic to Precambrian with the youngest age peaks at 441 Ma and 446 Ma, estimated to be the time of formation of the volcanic rocks. These new results, combined with published data on ophiolitic mélanges from the central segment of the Beishan, favor a Japan-type subduction-accretion system in the Cambrian to Carboniferous in this part of the Paleo-Asian Ocean. 
    The Xichangjing-Niujuanzi ophiolite probably represents a major suture zone separating different tectonic units across the Beishan orogenic collage, while the Xiaohuangshan-Jijitaizi ophiolitic mélange may represent a Carboniferous back-arc basin formed as a result of slab rollback ascribed to northward subduction of the Niujuanzi oceanic lithosphere. Subduction of this back-arc basin probably took place in the early Carboniferous, generating the widespread arc-related granitoids including adakitic plutons, and overlapping earlier arc assemblages. The Beishan orogenic collage is not the eastern extension of the Chinese Central Tianshan, but it was generated by the same north-dipping subduction system separated by the Xingxingxia transform fault, as revealed by available regional data. This contribution implies that in addition to fore-arc accretion, back-arc accretion ascribed to opening and closure of a back-arc basin may also have been a common process in the construction of the CAOB, resembling that of the Mesozoic-Cenozoic subduction-accretion system in the SW Pacific.

  6. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up by PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a five-step algorithm: (i) the segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and the algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (either mean or max or peak) and TLG, estimated from the segmented ROIs and DICOM header data, provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs.
Custom-made algorithms are often a better approach than the manufacturer’s general analysis software at much lower cost. Relatively simple processing techniques could lead to customized, unsupervised or partially supervised methods that can successfully perform the desirable analysis and adapt to the specific disease requirements.
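    The metabolic parameters above reduce to a few arithmetic steps once an ROI has been segmented. The sketch below is an illustrative Python version of step (v) only; the voxel activities, voxel volume, injected dose, and patient weight are hypothetical stand-ins for values that would come from the PET data and DICOM headers.

```python
# Hypothetical sketch of step (v): computing MTV, SUVs and TLG for one
# segmented ROI. All numeric inputs are illustrative stand-ins for values
# read from the PET voxels and DICOM headers.

def suv(activity_bq_per_ml, injected_dose_bq, weight_g):
    """Body-weight-normalised standardized uptake value."""
    return activity_bq_per_ml / (injected_dose_bq / weight_g)

def roi_metrics(roi_activities, voxel_volume_ml, injected_dose_bq, weight_g):
    suvs = [suv(a, injected_dose_bq, weight_g) for a in roi_activities]
    mtv_ml = len(roi_activities) * voxel_volume_ml   # metabolic tumor volume
    suv_mean = sum(suvs) / len(suvs)
    suv_max = max(suvs)
    tlg = suv_mean * mtv_ml                          # total lesion glycolysis
    return mtv_ml, suv_mean, suv_max, tlg

# toy ROI: 4 voxels of 0.5 ml each, dose 200 MBq, weight 70 kg
mtv, suv_mean, suv_max, tlg = roi_metrics(
    [12000.0, 15000.0, 9000.0, 18000.0], 0.5, 2.0e8, 70000.0)
```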

  7. Optimization of Treatment Geometry to Reduce Normal Brain Dose in Radiosurgery of Multiple Brain Metastases with Single-Isocenter Volumetric Modulated Arc Therapy.

    PubMed

    Wu, Qixue; Snyder, Karen Chin; Liu, Chang; Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Wen, Ning

    2016-09-30

    Treatment of patients with multiple brain metastases using single-isocenter volumetric modulated arc therapy (VMAT) has been shown to decrease treatment time, with the tradeoff of a larger low-dose volume in normal brain tissue. We have developed an efficient Projection Summing Optimization Algorithm to optimize the treatment geometry in order to reduce dose to normal brain tissue for radiosurgery of multiple metastases with single-isocenter VMAT. The algorithm: (a) measures coordinates of outer boundary points of each lesion to be treated using the Eclipse Scripting Application Programming Interface, (b) determines the rotations of couch, collimator, and gantry using three matrices about the cardinal axes, (c) projects the outer boundary points of the lesion onto the Beam's Eye View projection plane, (d) optimizes couch and collimator angles by selecting the least total unblocked area for each specific treatment arc, and (e) generates a treatment plan with the optimized angles. The results showed a significant reduction in the mean dose and low-dose volume to normal brain, while maintaining similar treatment plan quality for the thirteen patients treated previously. The algorithm is flexible with regard to beam arrangements and can be integrated directly into the treatment planning system for clinical application.
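    The rotation-and-projection core of such a geometry search can be illustrated compactly. The sketch below rotates lesion boundary points about one cardinal axis, projects them onto a beam's-eye-view plane, and scores each candidate collimator angle by the area of the enclosing axis-aligned rectangle; this rectangle is only a crude stand-in for the paper's unblocked-area measure, and the points and angle grid are invented.

```python
import math

# Illustrative geometry sketch: rotate boundary points with a rotation
# matrix about the z axis, project onto the x-y (BEV) plane, and score a
# candidate collimator angle by the enclosing axis-aligned rectangle.

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, p):
    return tuple(sum(m[i][j] * p[j] for j in range(3)) for i in range(3))

def bev_area(points, collimator_deg):
    """Area of the rectangle the jaws would have to open to cover all
    projected points at this collimator angle."""
    m = rot_z(math.radians(collimator_deg))
    xy = [apply(m, p)[:2] for p in points]
    xs = [q[0] for q in xy]
    ys = [q[1] for q in xy]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def best_collimator(points, candidates):
    return min(candidates, key=lambda a: bev_area(points, a))

# two point "lesions" along a diagonal: a 45-degree rotation aligns them
# with one jaw axis and shrinks the open rectangle to (almost) nothing
pts = [(1.0, 1.0, 0.0), (1.2, 1.2, 0.0), (-1.0, -1.0, 0.0), (-1.2, -1.2, 0.0)]
angle = best_collimator(pts, range(0, 180, 5))
```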

  8. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method that combines geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48%, 4.86%, 3.58%, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
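    The region-wise LS prediction step can be sketched with a two-neighbour causal context. The code below fits the weights by solving the 2x2 normal equations over a region and returns the prediction residuals an entropy coder would then compress; the two-neighbour context and the toy gradient image are assumptions, not the paper's full design.

```python
# Minimal sketch of a least-square (LS) based predictor of the kind
# designed per region: predict each pixel from its west and north causal
# neighbours, with weights fit by ordinary least squares over the region.

def ls_weights(img):
    """Fit p ~ a*west + b*north by solving the 2x2 normal equations."""
    sww = swn = snn = swp = snp_ = 0.0
    for i in range(1, len(img)):
        for j in range(1, len(img[0])):
            w, n, p = img[i][j - 1], img[i - 1][j], img[i][j]
            sww += w * w; swn += w * n; snn += n * n
            swp += w * p; snp_ += n * p
    det = sww * snn - swn * swn
    a = (swp * snn - snp_ * swn) / det
    b = (sww * snp_ - swn * swp) / det
    return a, b

def residuals(img, a, b):
    """Prediction errors that an entropy coder would then compress."""
    return [img[i][j] - (a * img[i][j - 1] + b * img[i - 1][j])
            for i in range(1, len(img)) for j in range(1, len(img[0]))]

# a smooth gradient image (p = i + 2*j) is predicted exactly by a = -1,
# b = 2, since -1*(p - 2) + 2*(p - 1) = p
img = [[i + 2 * j for j in range(6)] for i in range(6)]
a, b = ls_weights(img)
errs = residuals(img, a, b)
```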

  9. An improved approach for the segmentation of starch granules in microscopic images

    PubMed Central

    2010-01-01

    Background Starches are the main storage polysaccharides in plants and are distributed widely throughout plants including seeds, roots, tubers, leaves, stems and so on. Currently, microscopic observation is one of the most important ways to investigate and analyze the structure of starches. The position, shape, and size of the starch granules are the main measurements for quantitative analysis. In order to obtain these measurements, segmentation of starch granules from the background is very important. However, automatic segmentation of starch granules is still a challenging task because of the limitations of imaging conditions and the complex scenarios of overlapping granules. Results We propose a novel method to segment starch granules in microscopic images. In the proposed method, we first separate starch granules from the background using automatic thresholding and then roughly segment the image using the watershed algorithm. In order to reduce the oversegmentation of the watershed algorithm, we use the roundness of each segment, and analyze the gradient vector field to find the critical points so as to identify oversegments. After oversegments are found, we extract features, such as the position and intensity of the oversegments, and use fuzzy c-means clustering to merge the oversegments into objects with similar features. Experimental results demonstrate that the proposed method can successfully alleviate the oversegmentation of the watershed segmentation algorithm. Conclusions We present a new scheme for starch granule segmentation. The proposed scheme aims to alleviate the oversegmentation of the watershed algorithm. We use the shape information and critical points of the gradient vector flow (GVF) of starch granules to identify oversegments, and use fuzzy c-means clustering based on prior knowledge to merge these oversegments into objects. Experimental results on twenty microscopic starch images demonstrate the effectiveness of the proposed scheme. PMID:21047380
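    The roundness cue used to flag oversegments can be written down directly as the isoperimetric ratio 4πA/P², which equals 1 for a circle and falls toward 0 for slivers. The polygon inputs and the 0.6 threshold below are illustrative assumptions.

```python
import math

# Sketch of a roundness test for watershed fragments: the isoperimetric
# ratio 4*pi*A / P**2 is 1 for a circle and drops for elongated shapes.
# Polygon inputs and the 0.6 threshold are illustrative assumptions.

def polygon_area_perimeter(pts):
    area = 0.0
    perim = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        area += x0 * y1 - x1 * y0                 # shoelace formula
        perim += math.hypot(x1 - x0, y1 - y0)
    return abs(area) / 2.0, perim

def roundness(pts):
    a, p = polygon_area_perimeter(pts)
    return 4.0 * math.pi * a / (p * p)

def looks_oversegmented(pts, threshold=0.6):
    """Fragments far from circular are candidates for merging."""
    return roundness(pts) < threshold

square = [(0, 0), (10, 0), (10, 10), (0, 10)]    # roundness = pi/4
sliver = [(0, 0), (40, 0), (40, 1), (0, 1)]      # roundness ~ 0.075
```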

  10. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model was employed for image modelling, and iterations of the Expectation-Maximization algorithm learn its parameters. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
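    The EM iteration for the mixture model can be sketched in one dimension. The code below fits a two-component GMM to toy scalar data; real use would model pixel colours for foreground and background, so the data and initialization here are assumptions.

```python
import math

# Toy sketch of the modelling step: fit a two-component 1-D Gaussian
# Mixture Model with the EM iteration described in the text.

def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, mu, var=(1.0, 1.0), pi=(0.5, 0.5), iters=50):
    mu, var, pi = list(mu), list(var), list(pi)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            w = [pi[k] * gaussian(x, mu[k], var[k]) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append((w[0] / s, w[1] / s))
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi

# two well-separated intensity clusters, e.g. background vs hand pixels
data = [1.0, 1.2, 0.8, 1.1, 8.0, 8.3, 7.9, 8.1]
mu, var, pi = em_gmm(data, mu=(0.0, 10.0))
```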

  11. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique that describes the internal structure of an object and images various aspects of biological tissues. OCT image segmentation is mostly applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We may classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation mostly focuses on improving accuracy and precision and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  12. Auto detection and segmentation of physical activities during a Timed-Up-and-Go (TUG) task in healthy older adults using multiple inertial sensors.

    PubMed

    Nguyen, Hung P; Ayachi, Fouaz; Lavigne-Pelletier, Catherine; Blamoutier, Margaux; Rahimi, Fariborz; Boissy, Patrick; Jog, Mandar; Duval, Christian

    2015-04-11

    Recently, much attention has been given to the use of inertial sensors for remote monitoring of individuals with limited mobility. However, the focus has been mostly on the detection of symptoms, not specific activities. The objective of the present study was to develop an automated recognition and segmentation algorithm based on inertial sensor data to identify common gross motor patterns during activities of daily living. A modified Timed-Up-and-Go (TUG) task was used since it comprises four common daily living activities (standing, walking, turning, and sitting), all performed in a continuous fashion, resulting in six different segments during the task. Sixteen healthy older adults performed two trials of a 5 and 10 meter TUG task. They were outfitted with 17 inertial motion sensors covering each body segment. Data from the 10 meter TUG were used to identify pertinent sensors on the trunk, head, hip, knee, and thigh that provided suitable data for detecting and segmenting activities associated with the TUG. Raw data from sensors were detrended to remove sensor drift, normalized, and band-pass filtered with optimal frequencies to reveal kinematic peaks that corresponded to different activities. Segmentation was accomplished by identifying the time stamps of the first minimum or maximum to the right and the left of these peaks. Segmentation time stamps were compared to results from two examiners visually segmenting the activities of the TUG. We were able to detect these activities with 100% sensitivity and specificity (n = 192) during the 10 meter TUG. The rate of success was subsequently confirmed in the 5 meter TUG (n = 192) without altering the parameters of the algorithm. When applying the segmentation algorithms to the 10 meter TUG, we were able to parse 100% of the transition points (n = 224) between different segments, with results that were as reliable as and less variable than visual segmentation performed by two independent examiners.
    The present study lays the foundation for the development of a comprehensive algorithm to detect and segment naturalistic activities using inertial sensors, in the hope of automatically evaluating motor performance within the detected tasks.
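    The peak-based segmentation idea above can be sketched as follows: find kinematic peaks in a filtered signal and take the nearest local minima on each side as segment boundaries. The synthetic trace below stands in for a detrended, band-pass-filtered sensor channel.

```python
# Sketch of peak-based activity segmentation: each activity shows up as a
# kinematic peak, and the segment boundaries are the first local minima
# to the left and right of that peak. The trace is an invented stand-in
# for a filtered inertial-sensor channel.

def local_extrema(sig):
    peaks, valleys = [], []
    for i in range(1, len(sig) - 1):
        if sig[i - 1] < sig[i] > sig[i + 1]:
            peaks.append(i)
        elif sig[i - 1] > sig[i] < sig[i + 1]:
            valleys.append(i)
    return peaks, valleys

def segment_around_peaks(sig):
    """Return (start, peak, end) index triples, one per detected peak."""
    peaks, valleys = local_extrema(sig)
    segments = []
    for p in peaks:
        left = max([v for v in valleys if v < p], default=0)
        right = min([v for v in valleys if v > p], default=len(sig) - 1)
        segments.append((left, p, right))
    return segments

# two bumps separated by a quiet dip, like two activities during a TUG
sig = [0, 1, 3, 6, 3, 1, 0, 1, 4, 7, 4, 1, 0]
segs = segment_around_peaks(sig)
```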

  13. Region Segmentation in the Frequency Domain Applied to Upper Airway Real-Time Magnetic Resonance Images

    PubMed Central

    Narayanan, Shrikanth

    2009-01-01

    We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
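    The gradient-descent fitting step can be illustrated with a one-parameter model: adjust a scale factor to minimise squared error between a template and observed data. The 1-D template below is a stand-in for the anatomically informed object model, and the learning rate is an arbitrary choice.

```python
# Minimal sketch of a gradient-descent model fit: adjust a single scale
# parameter of a 1-D template (a stand-in for the anatomically informed
# object model) to minimise squared error against the observation.

def fit_scale(template, observed, lr=0.01, iters=200):
    s = 0.0
    for _ in range(iters):
        # gradient of sum((s*t - o)**2) with respect to s
        grad = sum(2.0 * (s * t - o) * t for t, o in zip(template, observed))
        s -= lr * grad
    return s

template = [1.0, 2.0, 3.0, 2.0, 1.0]
observed = [2.0, 4.0, 6.0, 4.0, 2.0]   # the template scaled by 2
scale = fit_scale(template, observed)
```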

  14. Automated segmentation of comet assay images using Gaussian filtering and fuzzy clustering.

    PubMed

    Sansone, Mario; Zeni, Olga; Esposito, Giovanni

    2012-05-01

    Comet assay is one of the most popular tests for the detection of DNA damage at the single-cell level. In this study, an algorithm for comet assay analysis has been proposed, aiming to minimize user interaction and provide reproducible measurements. The algorithm comprises two steps: (a) comet identification via Gaussian pre-filtering and morphological operators; (b) comet segmentation via fuzzy clustering. The algorithm has been evaluated using comet images from human leukocytes treated with a commonly used DNA-damaging agent. A comparison of the proposed approach with a commercial system has been performed. Results show that fuzzy segmentation can increase overall sensitivity, giving benefits in bio-monitoring studies where weak genotoxic effects are expected.
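    The fuzzy clustering step can be sketched with a 1-D fuzzy c-means on pixel intensities; the intensity values and the fuzzifier m = 2 below are illustrative assumptions rather than the published configuration.

```python
# Compact fuzzy c-means sketch run on 1-D intensities for brevity; a real
# comet segmentation would cluster pixels of the pre-filtered image.

def fcm(data, centers, m=2.0, iters=40):
    centers = list(centers)
    u = []
    for _ in range(iters):
        # membership of each point in each cluster
        u = []
        for x in data:
            d = [max(abs(x - c), 1e-9) for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # membership-weighted centroid update
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, u

# dim background pixels vs bright comet-head pixels
data = [10, 12, 11, 9, 200, 205, 198, 202]
centers, u = fcm(data, centers=(0.0, 255.0))
```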

  15. The Sunda-Banda Arc Transition: New Insights from Marine Multichannel Seismic Data

    NASA Astrophysics Data System (ADS)

    Mueller, C.; Kopp, H.; Djajadihardja, Y.; Engels, M.; Flueh, E.; Gaedicke, C.; Lueschen, E.; Lutz, R.; Planert, L.; Shulgin, A.; Soemantri, D. D.

    2007-12-01

    After the Indian Ocean Mw 9.3 earthquake and tsunami on December 26, 2004, intensive research activities focussed on the Sunda Arc subduction system offshore Sumatra. For this area a broad database is now available, interpreted in terms of plate segmentation and outer arc high evolution. In contrast, the highly active easternmost part of this subduction system, as indicated by the south of Java Mw 7.7 earthquake and tsunami on July 17, 2006, has remained almost unexplored until recently. During RV SONNE cruise SO190 from October until December 2006 almost 5000 km of marine geophysical profiles have been acquired at the eastern Sunda Arc and the transition to the Banda Arc. The SINDBAD project (Seismic and Geoacoustic Investigations along the Sunda-Banda Arc Transition) comprises 30-fold multichannel reflection seismics with a 3-km streamer, wide-angle OBH/OBS refraction seismics for deep velocity control (see poster of Planert et al. in this session), swath bathymetry, sediment echosounder, gravimetric and geomagnetic measurements. We present data and interpretations of several 250-380 km long, prestack depth-migrated seismic sections, perpendicular to the deformation front, based on velocity models from focussing analysis and inversion of OBH/OBS refraction data. We focus on the variability of the lower plate and the tectonic response of the overriding plate in terms of outer arc high formation and evolution, forearc basin development, and accretion and erosion processes at the base of the overriding plate. The subducting Indo-Australian Plate is characterized by three segments: i) the Roo Rise, with rough topography, offshore eastern Java; ii) the Argo Abyssal Plain, with smooth oceanic crust, offshore Bali, Lombok, and Sumbawa; and iii) the Scott Plateau, with continental crust, colliding with the Banda island arc.
The forearc responds to differences in the incoming oceanic plate with the absence of a pronounced forearc basin offshore eastern Java and with development of the 4000 m deep forearc Lombok Basin offshore Bali, Lombok, and Sumbawa. The eastern termination of the Lombok Basin is formed by Sumba Island, which shows evidence for recent uplift, probably associated with the collision of the island arc with the continental Scott Plateau. The Sumba area represents the transition from subduction to collision. Our seismic profiles image the bending of the oceanic crust seaward of the trench and associated normal faulting. Landward of the trench, they image the subducting slab beneath the outer arc high, where the former bending-related normal faults appear to be reactivated as reverse faults introducing vertical displacements in the subducting slab. The accretionary prism and the outer arc high are characterized by an ocean-verging system of imbricate thrust sheets with major thrust faults connecting seafloor and detachment. Compression results in shortening and steepening of the imbricated thrust sheets building up the outer arc high. Tilted piggy-back basins and downlaps of tilted sediments in the southern Lombok forearc basin indicate ongoing uplift of the entire outer arc high, abrupt displacements, and recent tectonic activity.

  16. Robust Segmentation of Overlapping Cells in Histopathology Specimens Using Parallel Seed Detection and Repulsive Level Set

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559
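    The seed-detection stage pairs voting with mean-shift clustering; the clustering half can be sketched with a flat-kernel mean shift on 1-D vote positions. The bandwidth, tolerance, and vote cloud below are invented for illustration.

```python
# Sketch of the seed-detection idea: voting produces a cloud of candidate
# centre points, and mean shift moves each point uphill to a density mode
# so that one seed per cell remains. Flat kernel and 1-D points are
# simplifying assumptions.

def mean_shift(points, bandwidth=3.0, iters=30):
    shifted = list(points)
    for _ in range(iters):
        shifted = [
            sum(q for q in points if abs(q - p) <= bandwidth) /
            max(sum(1 for q in points if abs(q - p) <= bandwidth), 1)
            for p in shifted
        ]
    return shifted

def modes(shifted, tol=0.5):
    """Collapse converged points into distinct seed locations."""
    found = []
    for p in sorted(shifted):
        if not found or p - found[-1] > tol:
            found.append(p)
    return found

# votes scattered around two true cell centres near 5 and 20
votes = [4.0, 5.0, 6.0, 5.5, 19.0, 20.0, 21.0, 20.5]
seeds = modes(mean_shift(votes))
```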

  17. An indoor localization algorithm based on CSI regional segmentation

    NASA Astrophysics Data System (ADS)

    Zeng, Xi; Lin, Wei; Lan, Jingwei

    2017-08-01

    To address the high cost and low accuracy of indoor positioning, a regional segmentation method based on Channel State Information (CSI) is proposed. Because CSI is stable and robust against multipath effects, it is used to segment the localization area: CSI measurements acquired over different links characterize each region, so that a new measurement can be matched to the region it came from. This approach improves positioning accuracy while reducing the cost of the fingerprint localization algorithm.
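    A fingerprint-style region match of the kind described can be sketched as a nearest-neighbour search over per-region CSI fingerprints; the fingerprint vectors below are synthetic.

```python
import math

# Hedged sketch of region segmentation for localization: keep one
# averaged CSI fingerprint per region, then assign a new measurement to
# the region whose fingerprint is nearest in Euclidean distance.
# Fingerprint values are synthetic assumptions.

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_region(csi, fingerprints):
    return min(fingerprints,
               key=lambda region: distance(csi, fingerprints[region]))

fingerprints = {
    "region_A": [0.9, 0.1, 0.4],
    "region_B": [0.2, 0.8, 0.5],
}
region = classify_region([0.85, 0.15, 0.45], fingerprints)
```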

  18. [The segmentation of urinary cells--a first step in the automated processing in urine cytology (author's transl)].

    PubMed

    Liedtke, C E; Aeikens, B

    1980-01-01

    By segmentation of cell images we understand the automated decomposition of microscopic cell scenes into nucleus, plasma and background. A segmentation is achieved by using information from the microscope image and prior knowledge about the content of the scene. Different algorithms have been investigated and applied to samples of urothelial cells. A particular algorithm based on a histogram approach which can be easily implemented in hardware is discussed in more detail.
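    The histogram approach amounts to picking two grey-level thresholds and labelling each pixel as nucleus, plasma, or background. The fixed thresholds below stand in for a histogram-valley search, and the toy image is invented.

```python
# Sketch of the histogram approach described above: two grey-level
# thresholds split each pixel into nucleus (dark), plasma, or background
# (bright). Threshold values and the toy image are assumptions; a real
# system would place the thresholds at histogram valleys.

def label_pixel(g, t_nucleus, t_plasma):
    if g <= t_nucleus:
        return "nucleus"
    if g <= t_plasma:
        return "plasma"
    return "background"

def segment(image, t_nucleus=60, t_plasma=150):
    return [[label_pixel(g, t_nucleus, t_plasma) for g in row]
            for row in image]

image = [[200, 120, 40],
         [210, 110, 50],
         [220, 130, 200]]
labels = segment(image)
```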

  19. Faster methods for estimating arc centre position during VAR and results from Ti-6Al-4V and INCONEL 718 alloys

    NASA Astrophysics Data System (ADS)

    Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.

    2016-07-01

    Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies the technique of “magnetic source tomography” was presented; this was shown to be effective but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods requiring less numerical optimisation to determine the centre position of a single distributed arc both numerically and experimentally. Numerical validation of the algorithms was performed on models, and experimental validation on measurements of titanium and nickel alloys (Ti-6Al-4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.
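    One simple way to estimate an arc centre from external field magnitudes, not necessarily the authors' method, is to model the arc as a vertical line current (|B| = μ0·I / 2πr) and grid-search the centre position that best matches the sensor readings; the sensor layout, current value, and grid-search strategy below are assumptions.

```python
import math

# Illustrative sketch: locate a line-current "arc" from magnetic field
# magnitudes at known sensor positions by least-squares grid search.
# Sensor layout, current, and search grid are invented for illustration.

MU0 = 4e-7 * math.pi

def field(centre, sensor, current):
    r = math.hypot(sensor[0] - centre[0], sensor[1] - centre[1])
    return MU0 * current / (2.0 * math.pi * r)

def locate(sensors, readings, current, step=0.05):
    best, best_err = None, float("inf")
    x = -0.4
    while x <= 0.4:
        y = -0.4
        while y <= 0.4:
            err = sum((field((x, y), s, current) - b) ** 2
                      for s, b in zip(sensors, readings))
            if err < best_err:
                best, best_err = (x, y), err
            y = round(y + step, 10)
        x = round(x + step, 10)
    return best

sensors = [(0.6, 0.0), (-0.6, 0.0), (0.0, 0.6), (0.0, -0.6)]
true_centre = (0.1, -0.15)
readings = [field(true_centre, s, 5000.0) for s in sensors]
estimate = locate(sensors, readings, 5000.0)
```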

  20. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    PubMed

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high-confidence boundary image and then resolved progressively based on a dynamic attraction map in order of decreasing degree of evidence regarding the target surface location. In the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (with max = 0.957, min = 0.906 and standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface with computation complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.
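    The Dice coefficient used in the quantitative evaluation is 2|A∩B| / (|A| + |B|) over the two voxel label sets; the toy masks below are illustrative.

```python
# Dice overlap between an automatic and a manual segmentation, sketched
# over voxel coordinate sets; the toy masks are invented.

def dice(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(auto, manual)   # 3 shared voxels out of 4 + 4
```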
