Sample records for minimum segment size

  1. Size-Constrained Region Merging: A New Tool to Derive Basic Landcover Units from Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Castilla, G.

    2004-09-01

    Landcover maps typically represent the territory as a mosaic of contiguous units (polygons) that are assumed to correspond to geographic entities (e.g., lakes, forests or villages). They may also be viewed as representing a particular level of a landscape hierarchy where each polygon is a holon - an object made of subobjects and part of a superobject. The focal level portrayed in the map is distinguished from other levels by the average size of objects compounding it. Moreover, the focal level is bounded by the minimum size that objects of this level are supposed to have. Based on this framework, we have developed a segmentation method that defines a partition on a multiband image such that i) the mean size of segments is close to the one specified; ii) each segment exceeds the required minimum size; and iii) the internal homogeneity of segments is maximal given the size constraints. This paper briefly describes the method, focusing on its region merging stage. The most distinctive feature of the latter is that while the merging sequence is ordered by increasing dissimilarity as in conventional methods, there is no need to define a threshold on the dissimilarity measure between adjacent segments.
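
    The merging idea above can be sketched on a toy 1-D example. This is a minimal illustration under stated assumptions, not the authors' algorithm: merges proceed in order of increasing dissimilarity (here, hypothetically, the absolute difference of size-weighted segment means), and continue until every segment meets the minimum size and the mean segment size reaches the target, so no dissimilarity threshold is ever needed.

```python
def size_constrained_merge(values, sizes, min_size, target_mean):
    """Toy size-constrained region merging on a 1-D segment list.

    values: per-segment mean attribute; sizes: per-segment pixel counts.
    Repeatedly merge the most similar adjacent pair (lowest dissimilarity
    first) until all segments exceed min_size and the mean size reaches
    target_mean. Hypothetical simplification of the method in the record.
    """
    segs = list(zip(values, sizes))

    def mean_size():
        return sum(s for _, s in segs) / len(segs)

    while (any(s < min_size for _, s in segs) or mean_size() < target_mean) \
            and len(segs) > 1:
        # adjacent pair with smallest dissimilarity (|difference of means|)
        i = min(range(len(segs) - 1),
                key=lambda k: abs(segs[k][0] - segs[k + 1][0]))
        (v1, s1), (v2, s2) = segs[i], segs[i + 1]
        # size-weighted mean of the merged segment
        segs[i:i + 2] = [((v1 * s1 + v2 * s2) / (s1 + s2), s1 + s2)]
    return segs
```

    On the toy input below, the two small bright segments merge first, then the two dark ones, leaving two segments that both satisfy the size constraints.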

  2. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined together, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  3. Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel

    2012-01-01

    Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's intercomparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.

  4. A multi-directional and multi-scale roughness filter to detect lineament segments on digital elevation models - analyzing spatial objects in R

    NASA Astrophysics Data System (ADS)

    Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke

    2016-04-01

    Automated lineament analysis on remotely sensed data requires two general process steps: the identification of neighboring pixels showing high contrast and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool programmed in R using digital elevation models as input data to generate morphological lineaments defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Relief roughness is taken to be any deviation from a flat plane that exceeds a roughness threshold. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and differently oriented straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance between the edge cells. Thus multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative values of the roughness parameter represent concave relief structures such as valleys; positive values represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter, highlighting maximum and minimum roughness peaks and representing the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences.
    We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds with the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
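
    A 1-D analogue of the roughness measure can make the definition concrete. This is a hypothetical simplification (the filter in the record is multi-directional and multi-scale on 2-D grids): the roughness at a cell is its vertical deviation from the straight line joining the two edge cells of the neighborhood, divided by the horizontal distance between those edge cells.

```python
def roughness_1d(elev, radius, cell_size=1.0):
    """1-D sketch of the roughness measure: deviation of the center
    cell from the line joining the neighborhood's two edge cells,
    divided by the edge cells' horizontal distance. Positive values
    mark convex structures (ridges), negative values concave ones
    (valleys)."""
    out = []
    for i in range(radius, len(elev) - radius):
        # height of the straight edge-to-edge line at the center cell
        line_mid = (elev[i - radius] + elev[i + radius]) / 2.0
        out.append((elev[i] - line_mid) / (2 * radius * cell_size))
    return out
```

    For a single-cell ridge the center cell gets a positive roughness and its flanks negative ones, matching the sign convention in the record.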

  5. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. 
    A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
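
    The second-derivative criterion can be sketched on a 1-D intensity profile. This is an illustrative stand-in, not the authors' implementation (which works on 2-D cell images): take discrete first and second differences of the profile across the object's edge and use the intensity at the minimum of the second derivative as the threshold.

```python
def second_derivative_threshold(profile):
    """Sketch of the second-derivative threshold selection: the
    threshold is the profile intensity at the minimum of the discrete
    second derivative (the inflection toward the intensity plateau).
    Hypothetical 1-D simplification of the method in the record."""
    d1 = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    # index into the original profile of the minimum second derivative
    k = min(range(len(d2)), key=lambda i: d2[i]) + 1
    return profile[k]
```

    On a ramp that levels off into a plateau, the minimum of the second derivative sits where the edge turns over, higher than where a first-derivative (maximum gradient) rule would place it.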

  6. Impacts of coronary artery eccentricity on macro-recirculation and pressure drops using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Poon, Eric; Thondapu, Vikas; Barlis, Peter; Ooi, Andrew

    2017-11-01

    Coronary artery disease remains a major cause of mortality in developed countries, and is most often due to a localized flow-limiting stenosis, or narrowing, of coronary arteries. Patients often undergo invasive procedures such as X-ray angiography and fractional flow reserve to diagnose flow-limiting lesions. Even though such diagnostic techniques are well-developed, the effects of diseased coronary segments on local flow are still poorly understood. Therefore, this study investigated the effect of irregular geometries of diseased coronary segments on the macro-recirculation and local pressure minimum regions. We employed an idealized coronary artery model with a diameter of stenosis of 75%. By systematically adjusting the eccentricity and the asymmetry of the coronary stenosis, we uncovered an increase in macro-recirculation size. Most importantly, the presence of this macro-recirculation signifies a local pressure minimum (identified by the λ2 vortex identification method). This local pressure minimum has a profound effect on the pressure drops in both longitudinal and planar directions, which has implications for diagnosis and treatment of coronary artery disease. Supported by Australian Research Council LP150100233 and National Computational Infrastructure m45.

  7. [Medical image segmentation based on the minimum variation snake model].

    PubMed

    Zhou, Changxiong; Yu, Shenglin

    2007-02-01

    It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force incorporating information from both the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation of the foreground and background regions. Experiments have proved that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, segmentation tests on noisy medical images filtered by a curvature flow filter, which preserves edge features, show a significant effect.

  8. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, a combination approach is presented with both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526

  9. A comparison of six software packages for evaluation of solid lung nodules using semi-automated volumetry: what is the minimum increase in size to detect growth in repeated CT examinations.

    PubMed

    de Hoop, Bartjan; Gietema, Hester; van Ginneken, Bram; Zanen, Pieter; Groenewegen, Gerard; Prokop, Mathias

    2009-04-01

    We compared interexamination variability of CT lung nodule volumetry with six currently available semi-automated software packages to determine the minimum change needed to detect the growth of solid lung nodules. We had ethics committee approval. To simulate a follow-up examination with zero growth, we performed two low-dose unenhanced CT scans in 20 patients referred for pulmonary metastases. Between examinations, patients got off and on the table. Volumes of all pulmonary nodules were determined on both examinations using six nodule evaluation software packages. Variability (upper limit of the 95% confidence interval of the Bland-Altman plot) was calculated for nodules for which segmentation was visually rated as adequate. We evaluated 214 nodules (mean diameter 10.9 mm, range 3.3 mm-30.0 mm). Software packages provided adequate segmentation in 71% to 86% of nodules (p < 0.001). In case of adequate segmentation, variability in volumetry between scans ranged from 16.4% to 22.3% for the various software packages. Variability with five to six software packages was significantly less for nodules ≥8 mm in diameter (range 12.9%-17.1%) than for nodules <8 mm (range 18.5%-25.6%). Segmented volumes of each package were compared to each of the other packages. Systematic volume differences were detected in 11/15 comparisons. This hampers comparison of nodule volumes between software packages.
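
    The variability measure described above, the upper limit of a Bland-Altman analysis of between-scan differences, can be sketched as follows. This is a hedged illustration: the helper name, the use of percentage differences relative to the mean of the two volumes, and the 1.96 SD limit are standard Bland-Altman conventions, and the paper's exact computation may differ.

```python
import math

def bland_altman_upper_limit(vol_scan1, vol_scan2):
    """Upper limit of agreement (mean + 1.96 SD) of the percentage
    volume differences between paired measurements on two scans.
    Hypothetical sketch of the variability metric in the record."""
    pct_diff = [200.0 * (b - a) / (a + b) for a, b in zip(vol_scan1, vol_scan2)]
    n = len(pct_diff)
    mean = sum(pct_diff) / n
    # sample standard deviation of the paired percentage differences
    sd = math.sqrt(sum((d - mean) ** 2 for d in pct_diff) / (n - 1))
    return mean + 1.96 * sd
```

    A nodule volume change smaller than this limit cannot be distinguished from measurement noise, which is why the limit defines the minimum detectable growth.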

  10. Image Segmentation Using Minimum Spanning Tree

    NASA Astrophysics Data System (ADS)

    Dewi, M. P.; Armiati, A.; Alvini, S.

    2018-04-01

    This research aims to segment digital images. Segmentation separates the object from the background so that the main object can be processed for other purposes. With the development of digital image processing applications, the segmentation process has become increasingly necessary. Because the subsequent processing must interpret the information in the image, the segmented result should be accurate. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. The method is able to separate an object from the background, converting the image to a binary image: the object of interest is set to white while the background is black, or vice versa.
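
    The core of MST-based segmentation can be sketched with a Kruskal-style minimum spanning forest. This is a minimal stand-in, not the article's implementation: edges of a pixel-adjacency graph (weighted by intensity difference, an assumption here) are processed in increasing weight order, and growth stops once the desired number of components remains, so the heaviest spanning-tree edges become the segment boundaries.

```python
def mst_segment(edges, n_nodes, n_segments):
    """Kruskal-style sketch of minimum-spanning-tree segmentation.

    edges: (weight, node_a, node_b) tuples of the adjacency graph.
    Grow the minimum spanning forest in increasing weight order and
    stop when n_segments components remain. Returns a component label
    (root id) per node. Hypothetical simplification of the method."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n_nodes
    for _, a, b in sorted(edges):
        if components == n_segments:
            break
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return [find(i) for i in range(n_nodes)]
```

    On a 1-D "image" [10, 11, 50, 52] with absolute-difference edge weights, asking for two segments splits the dark pixels from the bright ones across the heaviest edge.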

  11. Nucleus segmentation in histology images with hierarchical multilevel thresholding

    NASA Astrophysics Data System (ADS)

    Ahmady Phoulady, Hady; Goldgof, Dmitry B.; Hall, Lawrence O.; Mouton, Peter R.

    2016-03-01

    Automatic segmentation of histological images is an important step for increasing throughput while maintaining high accuracy, avoiding variation from subjective bias, and reducing the costs for diagnosing human illnesses such as cancer and Alzheimer's disease. In this paper, we present a novel method for unsupervised segmentation of cell nuclei in stained histology tissue. Following an initial preprocessing step involving color deconvolution and image reconstruction, the segmentation step consists of multilevel thresholding and a series of morphological operations. The only parameter required for the method is the minimum region size, which is set according to the resolution of the image. Hence, the proposed method requires no training sets or parameter learning. Because the algorithm requires no assumptions or a priori information with regard to cell morphology, the automatic approach is generalizable across a wide range of tissues. Evaluation across a dataset consisting of diverse tissues, including breast, liver, gastric mucosa and bone marrow, shows superior performance over four other recent methods on the same dataset in terms of F-measure with precision and recall of 0.929 and 0.886, respectively.
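
    The role of the method's single parameter, the minimum region size, can be sketched as a post-processing filter. This is a hypothetical simplification (the record's pipeline uses multilevel thresholding and morphological operations on 2-D images; here regions are just pre-computed labels with 0 as background):

```python
from collections import Counter

def filter_small_regions(labels, min_size):
    """Suppress segmented regions whose element count falls below the
    minimum region size; 0 marks background. Sketch of how a single
    size parameter can replace trained thresholds."""
    counts = Counter(l for l in labels if l != 0)
    return [l if l != 0 and counts[l] >= min_size else 0 for l in labels]
```

    Setting the minimum size according to the image resolution, as the record describes, is what makes the approach parameter-learning-free.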

  12. Mesoscale spatial variability of selected aquatic invertebrate community metrics from a minimally impaired stream segment

    USGS Publications Warehouse

    Gebler, J.B.

    2004-01-01

    The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values to distributions of aquatic invertebrate communities, and ramifications of natural variability to the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but were generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reach(es) may not be adequate to represent a stream segment, depending on effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
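
    The relationship between CV, sample size, and MDD can be sketched with a standard normal-approximation formula for comparing two means. This is a generic textbook formulation, not necessarily the paper's exact computation; the default z values correspond to α = 0.05 (two-sided) and 80% power (β = 0.20).

```python
import math

def minimum_detectable_difference(cv_percent, n_reaches,
                                  z_alpha=1.96, z_beta=0.84):
    """Minimum detectable difference (as a percentage of the metric
    mean) for a two-sample comparison, given the coefficient of
    variation and the number of reaches per group. Hedged sketch of
    the MDD concept discussed in the record."""
    return (z_alpha + z_beta) * cv_percent * math.sqrt(2.0 / n_reaches)
```

    With a CV of 30% and 9 reaches, only differences of roughly 40% of the mean are detectable, which illustrates why the study concludes that small numbers of reaches may not represent a segment.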

  13. Recent developments in guided wave travel time tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zon, Tim van; Volker, Arno

    The concept of predictive maintenance using permanent sensors that monitor the integrity of an installation is an interesting addition to the current method of periodic inspections. Guided wave tomography has been developed to create a map of the wall thickness using the travel times of guided waves. It can be used both for monitoring and for inspection of pipe segments that are difficult to access, for instance at the location of pipe supports. An important outcome of the tomography is the minimum remaining wall thickness, as this is critical in the scheduling of a replacement of the pipe segment. In order to improve the sizing accuracy we have improved the tomography scheme. A number of major improvements have been realized, allowing the application envelope to be extended to pipes with a larger wall thickness and to larger distances between the transducer rings. Simulation results indicate that the sizing accuracy has improved and that it is now possible to have a spacing of 8 meters between the source ring and the receiver ring. Additionally, a reduction in the number of sensors required might be possible as well.

  14. Surgical results of dynamic nonfusion stabilization with the Segmental Spinal Correction System for degenerative lumbar spinal diseases with instability: Minimum 2-year follow-up

    PubMed Central

    Ohta, Hideki; Matsumoto, Yoshiyuki; Morishita, Yuichirou; Sakai, Tsubasa; Huang, George; Kida, Hirotaka; Takemitsu, Yoshiharu

    2011-01-01

    Background When spinal fusion is applied to degenerative lumbar spinal disease with instability, adjacent segment disorder will be an issue in the future. However, decompression alone could cause recurrence of spinal canal stenosis because of increased instability of the operated segments and lead to revision surgery. To offset the disadvantages of both procedures, we applied nonfusion stabilization with the Segmental Spinal Correction System (Ulrich Medical, Ulm, Germany) together with decompression. Methods The surgical results of 52 patients (35 men and 17 women) with a minimum 2-year follow-up were analyzed: 10 patients with lumbar spinal canal stenosis, 15 with lumbar canal stenosis with disc herniation, 20 with degenerative spondylolisthesis, 6 with disc herniation, and 1 with lumbar discopathy. Results The Japanese Orthopaedic Association score improved from 14.4 ± 5.3 to 25.5 ± 2.8. The improvement rate was 76%. Range of motion of the operated segments was significantly decreased, from 9.6° ± 4.2° to 2.0° ± 1.8°. Only 1 patient had adjacent segment disease that required revision surgery. There was only 1 screw breakage, but the patient was asymptomatic. Conclusions Over a minimum 2-year follow-up, the results of nonfusion stabilization with the Segmental Spinal Correction System for unstable degenerative lumbar disease were good. It is necessary to follow up the cases with a focus on adjacent segment disorders in the future. PMID:25802671

  15. Complexity of Sizing for Space Suit Applications

    NASA Technical Reports Server (NTRS)

    Rajulu, Sudhakar; Benson, Elizabeth

    2009-01-01

    The 'fit' of a garment is often considered to be a subjective measure of garment quality. However, some experts attest that a complaint of poor garment fit is a symptom of inadequate or excessive ease, the space between the garment and the wearer. Fit has traditionally been hard to quantify, and space suits are an extreme example, where fit is difficult to measure but crucial for safety and operability. A proper space suit fit is particularly challenging because of NASA's need to fit an incredibly diverse population (males and females from the 1st to 99th percentile) while developing a minimum number of space suit sizes. Because so few sizes are available, the available space suits must be optimized so that each fits a large segment of the population without compromising the fit of any one wearer.

  16. Compact microchannel system

    DOEpatents

    Griffiths, Stewart

    2003-09-30

    The present invention provides compact geometries for the layout of microchannel columns through the use of turns and straight channel segments. These compact geometries permit the use of long separation or reaction columns on a small microchannel substrate or, equivalently, permit columns of a fixed length to occupy a smaller substrate area. The new geometries are based in part on mathematical analyses that provide the minimum turn radius for which column performance is not degraded. In particular, we find that straight channel segments of sufficient length reduce the required minimum turn radius, enabling compact channel layout when turns and straight segments are combined. The compact geometries are obtained by using turns and straight segments in overlapped or nested arrangements to form pleated or coiled columns.

  17. Satellite broadcasting system study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models, and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.

  18. Aorta and pulmonary artery segmentation using optimal surface graph cuts in non-contrast CT

    NASA Astrophysics Data System (ADS)

    Sedghi Gamechi, Zahra; Arias-Lorza, Andres M.; Pedersen, Jesper Holst; de Bruijne, Marleen

    2018-03-01

    Accurate measurements of the size and shape of the aorta and pulmonary arteries are important as risk factors for cardiovascular diseases and for Chronic Obstructive Pulmonary Disease (COPD).1 The aim of this paper is to propose an automated method for segmenting the aorta and pulmonary arteries in low-dose non-ECG-gated non-contrast CT scans. Low contrast and the high noise level make automatic segmentation in such images a challenging task. In the proposed method, first, a minimum cost path tracking algorithm traces the centerline between user-defined seed points. The cost function is based on a multi-directional medialness filter and a lumen intensity similarity metric. The vessel radius is also estimated from the medialness filter. The extracted centerlines are then smoothed and dilated non-uniformly according to the extracted local vessel radius and subsequently used as initialization for a graph-cut segmentation. The algorithm is evaluated on 225 low-dose non-ECG-gated non-contrast CT scans from a lung cancer screening trial. Quantitatively analyzing 25 scans with full manual annotations, we obtain a Dice overlap of 0.94 ± 0.01 for the aorta and 0.92 ± 0.01 for the pulmonary arteries. Qualitative validation by visual inspection of 200 scans shows successful segmentation in 93% of all cases for the aorta and 94% for the pulmonary arteries.
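
    The centerline-tracking step can be sketched as Dijkstra's algorithm on a cost map between two seed points. This is a generic minimum-cost-path sketch, not the paper's implementation: here the cost map is an arbitrary 2-D grid, whereas the actual method derives costs from a medialness filter and a lumen intensity similarity term.

```python
import heapq

def min_cost_path(cost, start, end):
    """Dijkstra minimum-cost path on a 2-D grid between two seed
    points (row, col). Illustrative stand-in for the centerline
    tracking step described in the record."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk predecessors back from the end seed
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

    With low cost along the vessel lumen (high medialness) the cheapest path hugs the vessel, which is why the tracked path approximates the centerline.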

  19. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on ameliorating precision instead of paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from the ChinaFLUX sensor network data stream. PMID:23956693
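
    The MDL principle behind such parameter-free segmentation can be shown on a toy piecewise-constant model. This is only the core idea, not PRESEE itself (which works online and recursively on streams): each candidate segmentation is scored by a two-part code length, data cost plus model cost, and the breakpoint is chosen to minimize the total, with no user-set threshold. All function names and the cost form are illustrative assumptions.

```python
import math

def mdl_cost(segment):
    """Approximate code length (in nats, up to constants) for one
    segment under a piecewise-constant Gaussian model: n/2 * log of
    the residual variance plus a cost for the segment's parameters.
    The small epsilon keeps the log finite for noise-free data."""
    n = len(segment)
    mean = sum(segment) / n
    var = sum((x - mean) ** 2 for x in segment) / n
    return 0.5 * n * math.log(var + 1e-12) + 0.5 * math.log(n)

def best_single_split(series):
    """Pick the breakpoint (or None) minimizing total description
    length. A toy, single-split illustration of MDL segmentation."""
    best_cost, best_k = mdl_cost(series), None
    for k in range(2, len(series) - 1):
        cost = mdl_cost(series[:k]) + mdl_cost(series[k:])
        if cost < best_cost:
            best_cost, best_k = cost, k
    return best_k
```

    On a series with one level shift, the split that makes both halves constant minimizes the description length, so the breakpoint falls out of the coding criterion automatically.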

  20. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on ameliorating precision instead of paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE to segmenting real-time stream datasets from the ChinaFLUX sensor network data stream.

  1. 50 CFR 648.83 - Multispecies minimum fish sizes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Multispecies minimum fish sizes. 648.83... Measures for the NE Multispecies and Monkfish Fisheries § 648.83 Multispecies minimum fish sizes. (a) Minimum fish sizes. (1) Minimum fish sizes for recreational vessels and charter/party vessels that are not...

  2. Modeling polymer-induced interactions between two grafted surfaces: comparison between interfacial statistical associating fluid theory and self-consistent field theory.

    PubMed

    Jain, Shekhar; Ginzburg, Valeriy V; Jog, Prasanna; Weinhold, Jeffrey; Srivastava, Rakesh; Chapman, Walter G

    2009-07-28

    The interaction between two polymer-grafted surfaces is important in many applications, such as nanocomposites, colloid stabilization, and polymer alloys. In our previous work [Jain et al., J. Chem. Phys. 128, 154910 (2008)], we showed that interfacial statistical associating fluid theory (iSAFT) successfully calculates the structure of grafted polymer chains in the absence/presence of a free polymer. In the current work, we have applied this density functional theory to calculate the force of interaction between two such grafted monolayers under implicit good solvent conditions. In particular, we have considered the case where the segment sizes of the free (σf) and grafted (σg) polymers are different. The interactions between the two monolayers in the absence of the free polymer are always repulsive. However, in the presence of the free polymer, the force either can be purely repulsive or can have an attractive minimum depending upon the relative chain lengths of the free (Nf) and grafted (Ng) polymers. The attractive minimum is observed only when the ratio α = Nf/Ng is greater than a critical value. We find that these critical values of α satisfy the following scaling relation: ρg Ng^(1/2) β^3 ∝ α^(−λ), where β = σf/σg and λ is the scaling exponent. For β = 1, i.e. the same segment sizes of the free and grafted polymers, this scaling relation is in agreement with those from previous theoretical studies using self-consistent field theory (SCFT). Detailed comparisons between iSAFT and SCFT are made for the structures of the monolayers and their forces of interaction. These comparisons lead to interesting implications for the modeling of nanocomposite thermodynamics.

  3. 50 CFR 648.103 - Minimum fish sizes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum fish sizes. 648.103 Section 648... Summer Flounder Fisheries § 648.103 Minimum fish sizes. (a) The minimum size for summer flounder is 14... carrying more than five crew members. (c) The minimum sizes in this section apply to whole fish or to any...

  4. Structure for identifying, locating and quantifying physical phenomena

    DOEpatents

    Richardson, John G.

    2006-10-24

    A method and system for detecting, locating and quantifying a physical phenomena such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.
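One way to see how comparing per-conductor changes can single out a segment is a binary encoding: if conductor k is built so that its transmission characteristic changes only when the deformation lies in a segment whose index has bit k set, then k conductors resolve 2^k segments of the minimum resolvable distance. This is a hypothetical illustration of the comparison step, not necessarily the patent's actual encoding:

```python
def locate_segment(changed, n_conductors):
    """Recover the index of the deformed segment from the per-conductor
    change pattern, assuming a binary encoding in which conductor k
    responds only to deformations in segments whose index has bit k set.
    (Illustrative scheme only; the patented encoding may differ.)"""
    index = 0
    for k in range(n_conductors):
        if changed[k]:
            index |= 1 << k
    return index

def simulate_changes(segment_index, n_conductors):
    """Which conductors register a change when segment_index deforms."""
    return [bool(segment_index >> k & 1) for k in range(n_conductors)]

# Four laterally adjacent conductors resolve 16 segments along the structure.
pattern = simulate_changes(11, 4)          # deformation in segment 11
assert locate_segment(pattern, 4) == 11
```

The point of the encoding is economy: localization accuracy grows exponentially with the number of conductors rather than linearly.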

  5. Method and apparatus for identifying, locating and quantifying physical phenomena and structure including same

    DOEpatents

    Richardson, John G.

    2006-01-24

    A method and system for detecting, locating and quantifying a physical phenomena such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.

  6. Planning Minimum-Energy Paths in an Off-Road Environment with Anisotropic Traversal Costs and Motion Constraints

    DTIC Science & Technology

    1989-06-01

problems, and (3) weighted-region problems. Since the minimum-energy path-planning problem addressed in this dissertation is a hybrid between the two...contains components that are strictly vehicle dependent, components that are strictly terrain dependent, and components representing a hybrid of...Single Segment Braking/Multiple Segment Hybrid. Using Eq. (3.46), the traversal cost U_{0,p-1} can be rewritten as U_{0,p-1} = mgD |tan θ_1|, (4.12a) and the

  7. 50 CFR 648.124 - Minimum fish sizes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum fish sizes. 648.124 Section 648... Scup Fishery § 648.124 Minimum fish sizes. (a) The minimum size for scup is 9 inches (22.9 cm) TL for... charter boat, or more than five crew members if a party boat. (c) The minimum size applies to whole fish...

  8. Design and fabrication of a boron reinforced intertank skirt

    NASA Technical Reports Server (NTRS)

    Henshaw, J.; Roy, P. A.; Pylypetz, P.

    1974-01-01

    Analytical and experimental studies were performed to evaluate the structural efficiency of a boron reinforced shell, where the medium of reinforcement consists of hollow aluminum extrusions infiltrated with boron epoxy. Studies were completed for the design of a one-half scale minimum weight shell using boron reinforced stringers and boron reinforced rings. Parametric and iterative studies were completed for the design of minimum weight stringers, rings, shells without rings and shells with rings. Computer studies were completed for the final evaluation of a minimum weight shell using highly buckled minimum gage skin. The detail design is described of a practical minimum weight test shell which demonstrates a weight savings of 30% as compared to an all aluminum longitudinal stiffened shell. Sub-element tests were conducted on representative segments of the compression surface at maximum stress and also on segments of the load transfer joint. A 10 foot long, 77 inch diameter shell was fabricated from the design and delivered for further testing.

  9. 50 CFR 648.93 - Monkfish minimum fish sizes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...

  10. 50 CFR 648.93 - Monkfish minimum fish sizes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...

  11. 50 CFR 648.93 - Monkfish minimum fish sizes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...

  12. 50 CFR 648.93 - Monkfish minimum fish sizes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...

  13. Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2

    PubMed Central

    Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.

    2014-01-01

Image segmentation is a very important step in the computerized analysis of digital images. The maxflow-mincut approach has been used successfully to obtain minimum-energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not lend themselves directly to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB, and hardware synchronization for each 64-bit word. It is thus well suited to the parallelization of graph-theoretic algorithms such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32,000 × 32,000 pixels in size, well beyond the largest previously reported in the literature. PMID:25598745
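The minimum-energy labeling idea can be reproduced in miniature. The sketch below uses Edmonds-Karp augmenting paths rather than the Goldberg-Tarjan preflow-push algorithm the paper parallelizes (both compute the same max-flow/min-cut); terminal capacities encode per-pixel label preferences and neighbor capacities encode smoothness, as in standard graph-cut segmentation:

```python
from collections import defaultdict, deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns (flow value, source side of min cut)."""
    flow = defaultdict(int)
    arcs = set(cap) | {(b, a) for (a, b) in cap}   # include reverse arcs
    adj = defaultdict(list)
    for (a, b) in arcs:
        adj[a].append(b)
    def residual(u, v):
        return cap.get((u, v), 0) - flow[(u, v)]
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:               # BFS for shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual(u, v) for u, v in path)
        for u, v in path:
            flow[(u, v)] += push
            flow[(v, u)] -= push
        total += push
    reach = {s}                                    # residual-reachable = source side
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in reach and residual(u, v) > 0:
                reach.add(v)
                q.append(v)
    return total, reach

# Tiny 1D "image": 4 pixels. s-links/t-links encode how strongly each pixel
# prefers foreground/background; pixel-pixel links penalize label changes.
cap = {('s', 0): 9, ('s', 1): 8, (2, 't'): 8, (3, 't'): 9,
       (0, 1): 2, (1, 0): 2, (1, 2): 2, (2, 1): 2, (2, 3): 2, (3, 2): 2}
value, source_side = max_flow_min_cut(cap, 's', 't')
foreground = sorted(p for p in source_side if p != 's')
```

Here the min cut separates pixels 0-1 (foreground) from 2-3, cutting only the weak smoothness link between pixels 1 and 2; that cheapest cut is exactly the minimum-energy segmentation.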

  14. 50 CFR 648.165 - Bluefish minimum fish sizes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Bluefish minimum fish sizes. 648.165... Measures for the Atlantic Bluefish Fishery § 648.165 Bluefish minimum fish sizes. If the MAFMC determines through its annual review or framework adjustment process that minimum fish sizes are necessary to ensure...

  15. 50 CFR 648.165 - Bluefish minimum fish sizes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Bluefish minimum fish sizes. 648.165... Measures for the Atlantic Bluefish Fishery § 648.165 Bluefish minimum fish sizes. If the MAFMC determines through its annual review or framework adjustment process that minimum fish sizes are necessary to ensure...

  16. 50 CFR 648.165 - Bluefish minimum fish sizes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Bluefish minimum fish sizes. 648.165... Measures for the Atlantic Bluefish Fishery § 648.165 Bluefish minimum fish sizes. If the MAFMC determines through its annual review or framework adjustment process that minimum fish sizes are necessary to ensure...

  17. 50 CFR 648.162 - Minimum fish sizes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum fish sizes. 648.162 Section 648... Atlantic Bluefish Fishery § 648.162 Minimum fish sizes. If the Council determines through its annual review or framework adjustment process that minimum fish sizes are necessary to assure that the fishing...

  18. Pars plana Ahmed valve and vitrectomy in patients with glaucoma associated with posterior segment disease.

    PubMed

    Wallsh, Josh O; Gallemore, Ron P; Taban, Mehran; Hu, Charles; Sharareh, Behnam

    2013-01-01

To assess the safety and efficacy of a modified technique for pars plana placement of the Ahmed valve in combination with pars plana vitrectomy in the treatment of glaucoma associated with posterior segment disease. Thirty-nine eyes with glaucoma associated with posterior segment disease underwent pars plana vitrectomy combined with Ahmed valve placement. All valves were placed in the pars plana using a modified technique, without the pars plana clip, and using a scleral patch graft. The 24 eyes diagnosed with neovascular glaucoma had an improvement in intraocular pressure from 37.6 mmHg to 13.8 mmHg and in best-corrected visual acuity from 2.13 to 1.40 logMAR (logarithm of the minimum angle of resolution). The 15 eyes diagnosed with steroid-induced glaucoma had an improvement in intraocular pressure from 27.9 mmHg to 14.1 mmHg and in best-corrected visual acuity from 1.38 to 1.13 logMAR. Complications included four cases of cystic bleb formation and one case of choroidal detachment with explantation for hypotony. Ahmed valve placement through the pars plana during vitrectomy is an effective option for managing complex cases of glaucoma without the use of the pars plana clip.

  19. 50 CFR 648.126 - Scup minimum fish sizes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Scup minimum fish sizes. 648.126 Section... Scup Fishery § 648.126 Scup minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The... whole fish or any part of a fish found in possession, e.g., fillets. These minimum sizes may be adjusted...

  20. 50 CFR 648.126 - Scup minimum fish sizes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Scup minimum fish sizes. 648.126 Section... Scup Fishery § 648.126 Scup minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The... whole fish or any part of a fish found in possession, e.g., fillets. These minimum sizes may be adjusted...

  1. 50 CFR 648.126 - Scup minimum fish sizes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Scup minimum fish sizes. 648.126 Section... Scup Fishery § 648.126 Scup minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The... whole fish or any part of a fish found in possession, e.g., fillets. These minimum sizes may be adjusted...

  2. 50 CFR 648.147 - Black sea bass minimum fish sizes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Black sea bass minimum fish sizes. 648... Measures for the Black Sea Bass Fishery § 648.147 Black sea bass minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The minimum size for black sea bass is 11 inches (27.94 cm) total length for...

  3. 50 CFR 648.147 - Black sea bass minimum fish sizes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Black sea bass minimum fish sizes. 648... Measures for the Black Sea Bass Fishery § 648.147 Black sea bass minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The minimum size for black sea bass is 11 inches (27.94 cm) total length for...

  4. 50 CFR 648.147 - Black sea bass minimum fish sizes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Black sea bass minimum fish sizes. 648... Measures for the Black Sea Bass Fishery § 648.147 Black sea bass minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The minimum size for black sea bass is 11 inches (27.94 cm) total length for...

  5. 50 CFR 648.124 - Minimum fish sizes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Minimum fish sizes. 648.124 Section 648... Scup Fishery § 648.124 Minimum fish sizes. Link to an amendment published at 76 FR 60633, Sept. 29... if a party boat. (c) The minimum size applies to whole fish or any part of a fish found in possession...

  6. Irrigation market for solar thermal parabolic dish systems

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, H.; Jones, S. C.

    1981-01-01

    The potential size of the onfarm-pumped irrigation market for solar thermal parabolic dish systems in seven high-insolation states is estimated. The study is restricted to the displacement of three specific fuels: gasoline, diesel and natural gas. The model was developed to estimate the optimal number of parabolic dish modules per farm based on the minimum cost mix of conventional and solar thermal energy required to meet irrigation needs. The study concludes that the potential market size for onfarm-pumped irrigation applications ranges from 101,000 modules when a 14 percent real discount rate is assumed to 220,000 modules when the real discount rate drops to 8 percent. Arizona, Kansas, Nebraska, New Mexico and Texas account for 98 percent of the total demand for this application, with the natural gas replacement market accounting for the largest segment (71 percent) of the total market.

  7. Modeling chain folding in protein-constrained circular DNA.

    PubMed Central

    Martino, J A; Olson, W K

    1998-01-01

    An efficient method for sampling equilibrium configurations of DNA chains binding one or more DNA-bending proteins is presented. The technique is applied to obtain the tertiary structures of minimal bending energy for a selection of dinucleosomal minichromosomes that differ in degree of protein-DNA interaction, protein spacing along the DNA chain contour, and ring size. The protein-bound portions of the DNA chains are represented by tight, left-handed supercoils of fixed geometry. The protein-free regions are modeled individually as elastic rods. For each random spatial arrangement of the two nucleosomes assumed during a stochastic search for the global minimum, the paths of the flexible connecting DNA segments are determined through a numerical solution of the equations of equilibrium for torsionally relaxed elastic rods. The minimal energy forms reveal how protein binding and spacing and plasmid size differentially affect folding and offer new insights into experimental minichromosome systems. PMID:9591675

  8. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
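For the simplest design such MDES tools cover, an individually randomized two-group trial, the standard formula is MDES = M · sqrt((1 − R²) / (P(1 − P)·n)). The sketch below uses the normal approximation to the degrees-of-freedom multiplier M (tools like PowerUp! use exact t quantiles, so small-sample results will differ slightly):

```python
from statistics import NormalDist

def mdes_individual(n, p_treat=0.5, r2=0.0, alpha=0.05, power=0.80):
    """Minimum detectable effect size for an individually randomized
    two-group trial:  MDES = M * sqrt((1 - R^2) / (P * (1 - P) * n)).
    M is approximated by z_{1-alpha/2} + z_{power} (~2.80 for .05/.80);
    n = total sample size, p_treat = proportion treated, r2 = covariate R^2.
    Normal approximation only; exact-t results differ slightly for small n.
    """
    z = NormalDist().inv_cdf
    m = z(1 - alpha / 2) + z(power)
    return m * ((1 - r2) / (p_treat * (1 - p_treat) * n)) ** 0.5

# Covariates that explain outcome variance shrink the MDES:
assert mdes_individual(400, r2=0.5) < mdes_individual(400)
```

With n = 400 and no covariates this gives an MDES of about 0.28 standard deviations, the familiar benchmark for a balanced two-arm trial.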

  9. 50 CFR 648.72 - Minimum surf clam size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Atlantic Surf Clam and Ocean Quahog Fisheries § 648.72 Minimum surf clam size. (a) Minimum length. The minimum length for surf clams is 4.75 inches (12.065 cm). (b) Determination of compliance. No more than 50... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum surf clam size. 648.72 Section 648...

  10. Assessing SPO techniques to constrain magma flow: Examples from sills of the Karoo Igneous Province, South Africa

    NASA Astrophysics Data System (ADS)

    Hoyer, Lauren; Watkeys, Michael K.

    2015-08-01

Shape ellipsoids that define the petrofabrics of plagioclase in Jurassic Karoo dolerite sills in KwaZulu-Natal, South Africa are rigorously constrained using the long-axis lengths of plagioclase crystals and ellipse incompatibility. This was undertaken to identify the most effective technique for determining petrofabrics with the SPO-2003 programme (Launeau and Robin, 2005). The practice of segmenting an image prior to analysis is scrutinised and found to be redundant. A grain size threshold is defined to accommodate the varying grain sizes observed within and between sills. Where grains exceed the 0.2 mm size threshold, images should be acquired at high magnification (i.e., 10× magnification). Petrofabrics are determined using the foliation and the lineation of the ellipsoid, as defined by the maximum and minimum principal axes (respectively) of the resultant ellipsoid. Samples with strongly prolate fabrics are isolated, allowing further constraint to be placed on the petrofabric. The most accurate petrofabrics are obtained by acquiring images at the correct magnification and running the analyses without segmenting the image. The fabrics of the upper and lower contacts of the Karoo dolerite sills are analysed in detail using these techniques, and the fabrics are used as a proxy for magma flow.

  11. 49 CFR 192.903 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline... pipeline segment means a segment of gas transmission pipeline located in a high consequence area. The terms...

  12. Complete grain boundaries from incomplete EBSD maps: the influence of segmentation on grain size determinations

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Ruediger

    2017-04-01

    Grain size analyses are carried out for a number of reasons, for example, the dynamically recrystallized grain size of quartz is used to assess the flow stresses during deformation. Typically a thin section or polished surface is used. If the expected grain size is large enough (10 µm or larger), the images can be obtained on a light microscope, if the grain size is smaller, the SEM is used. The grain boundaries are traced (the process is called segmentation and can be done manually or via image processing) and the size of the cross sectional areas (segments) is determined. From the resulting size distributions, 'the grain size' or 'average grain size', usually a mean diameter or similar, is derived. When carrying out such grain size analyses, a number of aspects are critical for the reproducibility of the result: the resolution of the imaging equipment (light microscope or SEM), the type of images that are used for segmentation (cross polarized, partial or full orientation images, CIP versus EBSD), the segmentation procedure (algorithm) itself, the quality of the segmentation and the mathematical definition and calculation of 'the average grain size'. The quality of the segmentation depends very strongly on the criteria that are used for identifying grain boundaries (for example, angles of misorientation versus shape considerations), on pre- and post-processing (filtering) and on the quality of the recorded images (most notably on the indexing ratio). In this contribution, we consider experimentally deformed Black Hills quartzite with dynamically re-crystallized grain sizes in the range of 2 - 15 µm. We compare two basic methods of segmentations of EBSD maps (orientation based versus shape based) and explore how the choice of methods influences the result of the grain size analysis. 
We also compare different measures for grain size (mean versus mode versus RMS, and 2D versus 3D) in order to determine which of the definitions of 'average grain size' yields the most stable results.
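The competing 'average grain size' definitions differ most when the size distribution is skewed. A minimal sketch of the 2D measures, using hypothetical segment areas (the equivalent-diameter convention d = 2·sqrt(A/π) is one common choice, not necessarily the one used in this study):

```python
import math

def equivalent_diameters(areas):
    """2D area-equivalent circle diameters, d = 2*sqrt(A/pi)."""
    return [2.0 * math.sqrt(a / math.pi) for a in areas]

def mean_d(d):
    return sum(d) / len(d)

def rms_d(d):
    return math.sqrt(sum(x * x for x in d) / len(d))

def mode_d(d, bin_width=1.0):
    """Modal diameter from a simple histogram (bin midpoint)."""
    bins = {}
    for x in d:
        b = int(x / bin_width)
        bins[b] = bins.get(b, 0) + 1
    return (max(bins, key=bins.get) + 0.5) * bin_width

areas = [12.6, 19.6, 28.3, 28.3, 50.3, 78.5]   # hypothetical segment areas, um^2
d = equivalent_diameters(areas)
# The RMS average weights large grains more heavily than the arithmetic mean,
# so the two diverge as the distribution becomes more skewed.
assert rms_d(d) >= mean_d(d)
```

Stability across such definitions (and across histogram bin widths for the mode) is exactly what determines how reproducible a reported 'average grain size' is.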

  13. 48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false F.o.b. Origin-Minimum Size... Clauses 52.247-61 F.o.b. Origin—Minimum Size of Shipments. As prescribed in 47.305-16(c), insert the following clause in solicitations and contracts when volume rates may apply: F.o.b. Origin—Minimum Size of...

  14. 14 CFR 97.3 - Symbols and terms used in procedures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...

  15. 14 CFR 97.3 - Symbols and terms used in procedures.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...

  16. 49 CFR 236.1011 - PTC Implementation Plan content requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... decisions, and shall at a minimum address the following risk factors by track segment: (i) Segment traffic... implemented to address areas of greater risk to the public and railroad employees before areas of lesser risk... risk, including ruling grades and extreme curvature; (6) The following information relating to rolling...

  17. 50 CFR 648.93 - Monkfish minimum fish sizes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General provisions. All monkfish caught by vessels issued a valid Federal monkfish permit must meet the minimum fish...

  18. 50 CFR 622.492 - Minimum size limit.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Queen Conch Resources of Puerto Rico and the U.S. Virgin Islands § 622.492 Minimum size limit. (a) The minimum size...

  19. 50 CFR 622.492 - Minimum size limit.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Queen Conch Resources of Puerto Rico and the U.S. Virgin Islands § 622.492 Minimum size limit. (a) The minimum size...

  20. Subband-Based Group Delay Segmentation of Spontaneous Speech into Syllable-Like Units

    NASA Astrophysics Data System (ADS)

    Nagarajan, T.; Murthy, H. A.

    2004-12-01

    In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolution-based technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25 milliseconds for 67% and 76.6% of the syllable segments, respectively.
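The core trick above, treating the STE contour as the magnitude spectrum of an arbitrary signal and reading boundaries off the minimum-phase group delay, can be sketched with a plain DFT. This is a pure-Python O(n²) illustration only; the paper's window sizes, three-band filtering, and multiresolution smoothing are omitted:

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) DFT, adequate for a short illustrative contour."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def min_phase_group_delay(ste):
    """Group delay of the minimum-phase signal whose magnitude spectrum
    is the (even-extended) short-term energy contour."""
    spec = list(ste) + list(ste[-2:0:-1])            # even extension
    n = len(spec)
    log_mag = [math.log(v + 1e-12) for v in spec]
    ceps = [v.real for v in dft(log_mag, inverse=True)]   # real cepstrum
    folded = ([ceps[0]] + [2 * ceps[k] for k in range(1, n // 2)]
              + [ceps[n // 2]] + [0.0] * (n - n // 2 - 1))  # fold to min phase
    phase = [v.imag for v in dft(folded)]            # min-phase log-spectrum
    unwrapped = [phase[0]]
    for p in phase[1:]:
        d = (p - unwrapped[-1] + math.pi) % (2 * math.pi) - math.pi
        unwrapped.append(unwrapped[-1] + d)
    return [-(b - a) for a, b in zip(unwrapped, unwrapped[1:])]

# Two smooth energy bumps (candidate syllables) separated by a dip: the
# group-delay function tends to show peaks near the bumps and a valley
# near the dip, which is taken as a boundary candidate.
ste = [math.exp(-((i / 31.0 - 0.3) / 0.1) ** 2)
       + math.exp(-((i / 31.0 - 0.7) / 0.1) ** 2) + 0.01 for i in range(32)]
gd = min_phase_group_delay(ste)[:32]
```

The folding step is what makes the derived function minimum phase, which is why its group delay tracks the STE peaks more smoothly than the raw STE function does.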

  1. The Effects of Fault Bends on Rupture Propagation: A Parameter Study

    NASA Astrophysics Data System (ADS)

    Lozos, J. C.; Oglesby, D. D.; Duan, B.; Wesnousky, S. G.

    2008-12-01

Segmented faults with stepovers are ubiquitous and occur at a variety of scales, ranging from small stepovers on the San Jacinto Fault to the large-scale stepover of the San Andreas Fault between Tejon Pass and San Gorgonio Pass. Because this type of fault geometry is so prevalent, understanding how rupture propagates through such systems is important for evaluating seismic hazard at different points along these faults. In the present study, we systematically investigate how far rupture will propagate through a fault with a linked (i.e., continuous fault) stepover, based on the length of the linking fault segment and the angle that connects the linking segment to adjacent segments. We conducted dynamic models of such systems using a two-dimensional finite element code (Duan and Oglesby 2007). The fault system in our models consists of three segments: two parallel 10-km-long faults linked at a specified angle by a linking segment of between 500 m and 5 km. This geometry was run both as an extensional system and as a compressional system. We observed several distinct rupture behaviors, with systematic differences between compressional and extensional cases. Both shear directions rupture straight through the stepover for very shallow stepover angles. In compressional systems with steeper angles, rupture may jump ahead from the stepover segment onto the far segment; whether or not rupture on this segment reaches critical patch size and slips fully is also a function of angle and stepover length. In some compressional cases, if the angle is steep enough and the stepover short enough, rupture may jump over the step entirely and propagate down the far segment without touching the linking segment. In extensional systems, rupture jumps from the nucleating segment onto the linking segment even at shallow angles, but at steeper angles, rupture propagates through without jumping. Rupture propagates through a wider range of angles in extensional cases.
In both extensional and compressional cases, for each stepover length there exists a maximum angle through which rupture can fully propagate; this maximum angle decreases asymptotically to a minimum value as the stepover length increases. We also found that a wave associated with a stopping phase coming from the far end of the fault may restart rupture and induce full propagation after a significant delay in some cases where the initial rupture terminated.

  2. Decreasing transmembrane segment length greatly decreases perfringolysin O pore size

    DOE PAGES

    Lin, Qingqing; Li, Huilin; Wang, Tong; ...

    2015-04-08

Perfringolysin O (PFO) is a transmembrane (TM) β-barrel protein that inserts into mammalian cell membranes. Once inserted into membranes, PFO assembles into pore-forming oligomers containing 30–50 PFO monomers. These form a pore of up to 300 Å, far exceeding the size of most other proteinaceous pores. In this study, we found that altering PFO TM segment length can alter the size of PFO pores. A PFO mutant with lengthened TM segments oligomerized to a similar extent as wild-type PFO, and exhibited pore-forming activity and a pore size very similar to wild-type PFO as measured by electron microscopy and a leakage assay. In contrast, PFO with shortened TM segments exhibited a large reduction in pore-forming activity and pore size. This suggests that the interaction between TM segments can greatly affect the size of pores formed by TM β-barrel proteins. PFO may be a promising candidate for engineering pore size for various applications.

  3. Line plus arc source trajectories and their R-line coverage for long-object cone-beam imaging with a C-arm system

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Wunderlich, Adam; Dennerlein, Frank; Lauritsch, Günter; Noo, Frédéric

    2011-06-01

    Cone-beam imaging with C-arm systems has become a valuable tool in interventional radiology. Currently, a simple circular trajectory is used, but future applications should use more sophisticated source trajectories, not only to avoid cone-beam artifacts but also to allow extended volume imaging. One attractive strategy to achieve these two goals is to use a source trajectory that consists of two parallel circular arcs connected by a line segment, possibly with repetition. In this work, we address the question of R-line coverage for such a trajectory. More specifically, we examine to what extent R-lines for such a trajectory cover a central cylindrical region of interest (ROI). An R-line is a line segment connecting any two points on the source trajectory. Knowledge of R-line coverage is crucial because a general theory for theoretically exact and stable image reconstruction from axially truncated data is only known for the points in the scanned object that lie on R-lines. Our analysis starts by examining the R-line coverage for the elemental trajectories consisting of (i) two parallel circular arcs and (ii) a circular arc connected orthogonally to a line segment. Next, we utilize our understanding of the R-lines for the aforementioned elemental trajectories to determine the R-line coverage for the trajectory consisting of two parallel circular arcs connected by a tightly fit line segment. For this trajectory, we find that the R-line coverage is insufficient to completely cover any central ROI. Because extension of the line segment beyond the circular arcs helps to increase the R-line coverage, we subsequently propose a trajectory composed of two parallel circular arcs connected by an extended line. We show that the R-lines for this trajectory can fully cover a central ROI if the line extension is long enough. Our presentation includes a formula for the minimum line extension needed to achieve full R-line coverage of an ROI with a specified size, and also includes a preliminary study on the required detector size, showing that the R-lines added by the line extension are not constraining.
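A small sketch of the R-line definition used in the abstract above: a point is covered if it lies on the chord between two source positions. This is illustrative only (names and tolerances are hypothetical), not the paper's coverage analysis.

```python
# Illustrative sketch (not from the paper): test whether a point lies on the
# R-line (chord) joining two source positions a and b, i.e. whether it is
# collinear with them and between them.

def on_r_line(p, a, b, tol=1e-9):
    """Return True if point p lies on the segment joining source points a, b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    # collinearity: the cross product ab x ap must vanish
    cross = [ab[1]*ap[2] - ab[2]*ap[1],
             ab[2]*ap[0] - ab[0]*ap[2],
             ab[0]*ap[1] - ab[1]*ap[0]]
    if any(abs(c) > tol for c in cross):
        return False
    # betweenness: projection parameter must lie in [0, 1]
    denom = sum(c * c for c in ab)
    if denom == 0:
        return all(abs(c) <= tol for c in ap)
    t = sum(ab[i] * ap[i] for i in range(3)) / denom
    return -tol <= t <= 1 + tol
```

A voxel of the ROI counts as covered if `on_r_line` is true for at least one pair of points on the trajectory.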

  4. 50 CFR 648.83 - Multispecies minimum fish sizes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 22 (55.9 cm) Haddock 18 (45.7 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 14 (35.6 cm) Yellowtail flounder 13 (33.0 cm) American plaice...

  5. 50 CFR 648.83 - Multispecies minimum fish sizes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 19 (48.3 cm) Haddock 16 (40.6 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 13 (33 cm) Yellowtail flounder 12 (30.5 cm) American plaice (dab...

  6. 50 CFR 648.83 - Multispecies minimum fish sizes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 19 (48.3 cm) Haddock 16 (40.6 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 13 (33 cm) Yellowtail flounder 12 (30.5 cm) American plaice (dab...

  7. 50 CFR 648.83 - Multispecies minimum fish sizes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 22 (55.9 cm) Haddock 18 (45.7 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 14 (35.6 cm) Yellowtail flounder 13 (33.0 cm) American plaice...

  8. Relationship between negative differential thermal resistance and asymmetry segment size

    NASA Astrophysics Data System (ADS)

    Kong, Peng; Hu, Tao; Hu, Ke; Jiang, Zhenhua; Tang, Yi

    2018-03-01

    Negative differential thermal resistance (NDTR) was investigated in a system consisting of two dissimilar anharmonic lattices, exemplified by coupled Frenkel-Kontorova (FK) and Fermi-Pasta-Ulam (FPU) lattices (FK-FPU). Previous theoretical and numerical studies attributed NDTR to the coupling constant, the interface, and the system size, but we find that segment size is also an important factor. Interestingly, the NDTR region depends on the FK segment size rather than the FPU segment size in this coupled FK-FPU model. Remarkably, we observe that NDTR appears even in the strong interface-coupling regime, where it was absent in previous studies. These results are conducive to further developments in designing and fabricating thermal devices.

  9. Determining size and dispersion of minimum viable populations for land management planning and species conservation

    NASA Astrophysics Data System (ADS)

    Lehmkuhl, John F.

    1984-03-01

    The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size needed to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population sizes of 50-500 individuals. This population range defines the span from short- to long-term population fitness and evolutionary potential, the length of the term being a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
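Two standard population-genetics formulas (Wright's) are consistent with the framework in the abstract above; the following is a minimal sketch of those two adjustments only, not the paper's full multi-step procedure.

```python
# Minimal sketch of two standard population-genetics formulas consistent with
# the abstract's framework; function names are illustrative.

def effective_size_unequal_sex_ratio(n_males, n_females):
    """Effective population size for unequal sex ratio: Ne = 4*Nm*Nf / (Nm + Nf)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def inbreeding_per_generation(ne):
    """Rate of inbreeding per generation: dF = 1 / (2*Ne)."""
    return 1.0 / (2.0 * ne)
```

With Ne = 50, dF = 0.01, the 1%-per-generation threshold cited above; a skewed sex ratio of 10 males to 40 females yields Ne = 32 even though 50 breeding animals are present.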

  10. 50 CFR 648.233 - Minimum Fish Sizes. [Reserved]

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum Fish Sizes. [Reserved] 648.233 Section 648.233 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Spiny Dogfish Fishery § 648.233 Minimum Fish Sizes. [Reserved] ...

  11. Segmental analysis of respiratory liver motion in patients with and without a history of abdominal surgery.

    PubMed

    Shimizu, Yasuhiro; Takamatsu, Shigeyuki; Yamamoto, Kazutaka; Maeda, Yoshikazu; Sasaki, Makoto; Tamamura, Hiroyasu; Bou, Sayuri; Kumano, Tomoyasu; Gabata, Toshifumi

    2018-06-20

    The purpose of this study was to analyze the respiratory motion of each segment of the liver in patients with or without a history of abdominal surgery using four-dimensional computed tomography. In total, 57 patients treated for abdominal tumors using proton beam therapy were enrolled. Eighteen patients had a history of abdominal surgery and 39 did not. The positions of clearly demarcated, high-density regions in the liver were measured as evaluation points with which to quantify the motion of each liver segment according to the Couinaud classification. In total, 218 evaluation points were analyzed. Comparison of differences in the motion of individual liver segments showed that among patients without a history of surgery, the maximum was 29.0 (7.2-42.1) mm in S6 and the minimum was 15.1 (10.6-19.3) mm in S4. Among patients with a history of surgery, the maximum was 28.0 (9.0-37.4) mm in S7 and the minimum was 6.3 (4.1-9.3) mm in S3. The distances and directions of respiratory motion differed for each liver segment, and a history of abdominal surgery reduced the respiratory motion of the liver. Internal margins should therefore be set selectively, according to liver segment and surgical history.

  12. Combustor liner construction

    NASA Technical Reports Server (NTRS)

    Craig, H. M.; Wagner, W. B.; Strock, W. J. (Inventor)

    1983-01-01

    A combustor liner is fabricated from a plurality of individual segments, each containing counter/parallel Finwall material, arranged circumferentially and axially to define the combustion zone. Each segment is supported by a hook and ring construction on an opened lattice frame, with sufficient tolerance between the hook and ring to permit thermal expansion with a minimum of induced stresses.

  13. Spatiotemporal multistage consensus clustering in molecular dynamics studies of large proteins.

    PubMed

    Kenn, Michael; Ribarics, Reiner; Ilieva, Nevena; Cibena, Michael; Karch, Rudolf; Schreiner, Wolfgang

    2016-04-26

    The aim of this work is to find semi-rigid domains within large proteins as reference structures for fitting molecular dynamics trajectories. We propose an algorithm, multistage consensus clustering (MCC), based on minimum variation of distances between pairs of Cα-atoms as the target function. The whole dataset (trajectory) is split into sub-segments. For a given sub-segment, spatial clustering is repeatedly started from different random seeds, and we adopt the specific spatial clustering with the minimum target function; the process described so far is stage 1 of MCC. Then, in stage 2, the results of spatial clustering are consolidated to arrive at domains stable over the whole dataset. We found that MCC is robust regarding the choice of parameters and yields relevant information on functional domains of the major histocompatibility complex (MHC) studied in this paper: the α-helices and β-floor of the protein (MHC) proved to be most flexible and did not contribute to clusters of significant size. Three alleles of the MHC, each in complex with ABCD3 peptide and LC13 T-cell receptor (TCR), yielded different patterns of motion. Those alleles causing immunological allo-reactions showed distinct correlations of motion between parts of the peptide, the binding cleft and the complementarity-determining region (CDR) loops of the TCR. Multistage consensus clustering reflected functional differences between MHC alleles and yields a methodological basis to increase the sensitivity of functional analyses of bio-molecules. Due to the generality of the approach, MCC lends itself as a potent tool also for the analysis of other kinds of big data.
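The stage-1 idea in the abstract above, restarting spatial clustering from different random seeds and keeping the run with the minimum target function, can be sketched as follows. A plain k-means with a sum-of-squared-distances target stands in for the paper's distance-variance criterion; all names are illustrative.

```python
# Sketch of MCC "stage 1": repeat clustering from random seeds, keep the
# minimum-target run. Plain Lloyd k-means on 2-D points as a stand-in.
import random

def kmeans(points, k, iters=50):
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0]-centers[c][0])**2 + (p[1]-centers[c][1])**2)
            groups[j].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g)) if g
            else centers[i]
            for i, g in enumerate(groups)
        ]
    # target function: total squared distance to nearest center
    cost = sum(min((p[0]-c[0])**2 + (p[1]-c[1])**2 for c in centers)
               for p in points)
    return centers, cost

def best_of_restarts(points, k, restarts=10):
    """Adopt the clustering with the minimum target function over restarts."""
    return min((kmeans(points, k) for _ in range(restarts)), key=lambda r: r[1])
```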

  14. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
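The minimum-error-thresholding step named in the abstract above is not spelled out there; the following is a generic Kittler-Illingworth-style sketch of that binarization idea, with all details illustrative rather than the deployed algorithm.

```python
# Hedged sketch of minimum-error thresholding (Kittler-Illingworth style):
# model each side of a candidate threshold as Gaussian and minimize the
# classification-error criterion J.
import math

def minimum_error_threshold(hist):
    """hist: pixel counts (or densities) per gray level. Returns threshold index."""
    total = float(sum(hist))
    best_t, best_j = None, float("inf")
    for t in range(1, len(hist) - 1):
        lo, hi = hist[:t], hist[t:]
        n1, n2 = sum(lo), sum(hi)
        if n1 == 0 or n2 == 0:
            continue
        p1, p2 = n1 / total, n2 / total
        m1 = sum(i * c for i, c in enumerate(lo)) / n1
        m2 = sum((t + i) * c for i, c in enumerate(hi)) / n2
        v1 = sum(c * (i - m1) ** 2 for i, c in enumerate(lo)) / n1
        v2 = sum(c * (t + i - m2) ** 2 for i, c in enumerate(hi)) / n2
        if v1 <= 0 or v2 <= 0:
            continue
        # criterion: lower J means smaller expected misclassification error
        j = 1 + 2 * (p1 * math.log(math.sqrt(v1)) + p2 * math.log(math.sqrt(v2))) \
              - 2 * (p1 * math.log(p1) + p2 * math.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

On a well-separated bimodal histogram the minimum of J falls between the two modes.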

  15. 50 CFR 622.208 - Minimum mesh size applicable to rock shrimp off Georgia and Florida.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Minimum mesh size applicable to rock... mesh size applicable to rock shrimp off Georgia and Florida. (a) The minimum mesh size for the cod end of a rock shrimp trawl net in the South Atlantic EEZ off Georgia and Florida is 1 7/8 inches (4.8 cm...

  16. 50 CFR 622.208 - Minimum mesh size applicable to rock shrimp off Georgia and Florida.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Minimum mesh size applicable to rock... mesh size applicable to rock shrimp off Georgia and Florida. (a) The minimum mesh size for the cod end of a rock shrimp trawl net in the South Atlantic EEZ off Georgia and Florida is 1 7/8 inches (4.8 cm...

  17. [Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].

    PubMed

    Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae

    Readout-segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. Using the partial Fourier method in the readout direction shortens the imaging time; however, the effect on image quality of the reduced data sampling is a concern. We varied the partial Fourier setting in the readout direction for each segment and examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio for changes in image quality due to differences in data sampling. As the number of sampled segments decreased, SNR and CNR decreased, while the distortion ratio did not change. Image quality with the minimum number of sampled segments differs greatly from that with full data sampling, and caution is required when using it.
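The metrics named in the abstract above can be written down compactly. Definitions vary by protocol, so these are common textbook forms plus an idealized sampling-penalty model, not necessarily the study's exact computations.

```python
# Common forms of the image-quality metrics named above; all illustrative.
import math

def snr(mean_signal, sd_noise):
    """Signal-to-noise ratio of a region of interest."""
    return mean_signal / sd_noise

def cnr(mean_a, mean_b, sd_noise):
    """Contrast-to-noise ratio between two regions."""
    return abs(mean_a - mean_b) / sd_noise

def relative_snr(sampled_fraction):
    """Idealized penalty for sampling only a fraction of k-space:
    SNR scales roughly with sqrt(fraction sampled)."""
    return math.sqrt(sampled_fraction)
```

Under this idealized model, sampling 64% of readout k-space costs about 20% of the SNR, consistent in spirit with the reported drop as sampled segments decrease.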

  18. Leaf seal for inner and outer casings of a turbine

    DOEpatents

    Schroder, Mark Stewart; Leach, David

    2002-01-01

    A plurality of arcuate, circumferentially extending leaf seal segments form an annular seal spanning between annular sealing surfaces of inner and outer casings of a turbine. The ends of the adjoining seal segments have circumferential gaps to enable circumferential expansion and contraction of the segments. The end of a first segment includes a tab projecting into a recess of a second end of a second segment. Edges of the tab, which seal against the sealing surfaces of the inner and outer casings, have a narrow clearance with the opposed edges of the recess. An overlying cover plate spans the joint. Leakage flow is kept to a minimum because of the reduced gap between the radially spaced edges of the tab and recess, while the seal segments retain the capacity to expand and contract circumferentially.

  19. Automated segmentation of intraretinal layers from macular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Haeker, Mona; Sonka, Milan; Kardon, Randy; Shah, Vinay A.; Wu, Xiaodong; Abràmoff, Michael D.

    2007-03-01

    Commercially-available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated series (up to six, when available) were acquired for each eye and were first registered and averaged together, resulting in a composite image for each angular location. The six surfaces defining the five layers were then found on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori-determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries were independently defined by two human experts on one raw scan of each eye. Using the average of the experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 +/- 4.0 μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two observers (p < 0.05, pixel size of 50 × 2 μm).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Qingqing; Li, Huilin; Wang, Tong

    Perfringolysin O (PFO) is a transmembrane (TM) β-barrel protein that inserts into mammalian cell membranes. Once inserted into membranes, PFO assembles into pore-forming oligomers containing 30–50 PFO monomers. These form a pore of up to 300 Å, far exceeding the size of most other proteinaceous pores. In this study, we found that altering PFO TM segment length can alter the size of PFO pores. A PFO mutant with lengthened TM segments oligomerized to a similar extent as wild-type PFO, and exhibited pore-forming activity and a pore size very similar to wild-type PFO as measured by electron microscopy and a leakage assay. In contrast, PFO with shortened TM segments exhibited a large reduction in pore-forming activity and pore size. This suggests that the interaction between TM segments can greatly affect the size of pores formed by TM β-barrel proteins. PFO may be a promising candidate for engineering pore size for various applications.

  1. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    PubMed

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

    This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical k-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm that seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  2. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method of calculating them numerically is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfying completeness and fineness. In this study, we improve the ray-tracing-based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all locus segments are then calculated with the minimum-traveltime tree algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method efficiently calculates complete and fine hypocentral loci of earthquakes in a complex model.
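The minimum-traveltime tree named in the abstract above is, at its core, a shortest-path tree over a gridded velocity model; a minimal sketch is Dijkstra's algorithm on a grid graph. The grid, slowness values, and 4-neighbor connectivity here are illustrative, not the paper's scheme.

```python
# Sketch of a minimum-traveltime tree: Dijkstra on a grid, where each edge
# costs the travel time between neighboring cells (unit spacing assumed).
import heapq

def traveltime_tree(slowness, src):
    """slowness: 2-D list of per-cell slowness; src: (row, col).
    Returns a dict of minimum traveltimes from src to every reachable node."""
    rows, cols = len(slowness), len(slowness[0])
    times = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # edge time: average slowness of the two cells
                nt = t + 0.5 * (slowness[r][c] + slowness[nr][nc])
                if nt < times.get((nr, nc), float("inf")):
                    times[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times
```

Repeatedly re-rooting such a tree at the minimum-residual untraced reference point is the iteration step (2) describes.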

  3. 50 CFR 622.454 - Minimum size limit.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Spiny Lobster Fishery of Puerto Rico and the U.S. Virgin Islands § 622.454 Minimum size limit. (a) The minimum...

  4. 50 CFR 622.454 - Minimum size limit.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Spiny Lobster Fishery of Puerto Rico and the U.S. Virgin Islands § 622.454 Minimum size limit. (a) The minimum...

  5. Dissociation of somatic growth from segmentation drives gigantism in snakes.

    PubMed

    Head, Jason J; David Polly, P

    2007-06-22

    Body size is significantly correlated with number of vertebrae (pleomerism) in multiple vertebrate lineages, indicating that change in number of body segments produced during somitogenesis is an important factor in evolutionary change in body size, but the role of segmentation in the evolution of extreme sizes, including gigantism, has not been examined. We explored the relationship between body size and vertebral count in basal snakes that exhibit gigantism. Boids, pythonids and the typhlopid genera, Typhlops and Rhinotyphlops, possess a positive relationship between body size and vertebral count, confirming the importance of pleomerism; however, giant taxa possessed fewer than expected vertebrae, indicating that a separate process underlies the evolution of gigantism in snakes. The lack of correlation between body size and vertebral number in giant taxa demonstrates dissociation of segment production in early development from somatic growth during maturation, indicating that gigantism is achieved by modifying development at a different stage from that normally selected for changes in body size.
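The "fewer than expected vertebrae" finding in the abstract above is a statement about regression residuals. A minimal sketch, with entirely made-up data: fit a least-squares line of vertebral count against body size and check the sign of a giant taxon's residual.

```python
# Illustrative residual-from-regression sketch; the data are invented and the
# functions are generic least-squares helpers, not the paper's analysis.

def least_squares(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def residual(x, y, slope, intercept):
    """Observed minus predicted; negative = fewer than expected."""
    return y - (slope * x + intercept)
```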

  6. Caudal migration and proliferation of renal progenitors regulates early nephron segment size in zebrafish.

    PubMed

    Naylor, Richard W; Dodd, Rachel C; Davidson, Alan J

    2016-10-19

    The nephron is the functional unit of the kidney and is divided into distinct proximal and distal segments. The factors determining nephron segment size are not fully understood. In zebrafish, the embryonic kidney has long been thought to differentiate in situ into two proximal tubule segments and two distal tubule segments (distal early; DE, and distal late; DL) with little involvement of cell movement. Here, we overturn this notion by performing lineage-labelling experiments that reveal extensive caudal movement of the proximal and DE segments and a concomitant compaction of the DL segment as it fuses with the cloaca. Laser-mediated severing of the tubule, such that the DE and DL are disconnected or that the DL and cloaca do not fuse, results in a reduction in tubule cell proliferation and significantly shortens the DE segment while the caudal movement of the DL is unaffected. These results suggest that the DL mechanically pulls the more proximal segments, thereby driving both their caudal extension and their proliferation. Together, these data provide new insights into early nephron morphogenesis and demonstrate the importance of cell movement and proliferation in determining initial nephron segment size.

  7. 46 CFR 111.60-4 - Minimum cable conductor size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4...-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-4 Minimum cable conductor size. Each cable conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...

  8. 46 CFR 111.60-4 - Minimum cable conductor size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4...-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-4 Minimum cable conductor size. Each cable conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...

  9. 50 CFR 648.75 - Shucking at sea and minimum surfclam size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Shucking at sea and minimum surfclam size... Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.75 Shucking at sea and minimum surfclam size. (a) Shucking at sea—(1) Observers. (i) The Regional Administrator may allow the shucking of...

  10. 50 CFR 648.75 - Shucking at sea and minimum surfclam size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Shucking at sea and minimum surfclam size... Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.75 Shucking at sea and minimum surfclam size. (a) Shucking at sea—(1) Observers. (i) The Regional Administrator may allow the shucking of...

  11. 50 CFR 648.75 - Shucking at sea and minimum surfclam size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Shucking at sea and minimum surfclam size... Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.75 Shucking at sea and minimum surfclam size. (a) Shucking at sea—(1) Observers. (i) The Regional Administrator may allow the shucking of...

  12. 40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...

  13. MRI brain tumor segmentation based on improved fuzzy c-means method

    NASA Astrophysics Data System (ADS)

    Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo

    2009-10-01

    This paper focuses on image segmentation, which is one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. Firstly, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use information about the tissues' gradients and the intensity inhomogeneities of regions to improve the quality of segmentation. The sum of the mean variance within the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; the minimum of this sum gives the optimum result. The results show that the clustering segmentation algorithm is effective.
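The base fuzzy c-means algorithm named in the abstract above can be sketched in one dimension (intensities only); the paper's gradient and inhomogeneity refinements are omitted, and the fuzzifier m and iteration count are illustrative.

```python
# Minimal 1-D fuzzy c-means sketch: alternate the standard membership and
# center updates. Not the paper's full method.

def fcm_1d(xs, centers, m=2.0, iters=50, eps=1e-9):
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - c) + eps for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # center update: mean of the data weighted by u^m
        centers = [sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(len(centers))]
    return centers
```

On well-separated intensity clusters the centers converge close to the cluster means, which is the "region of interest vs. background" split described above.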

  14. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
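The patch-windowing step described in the abstract above, collecting small patches of various sizes from an image, can be sketched directly; window size and stride here are illustrative parameters, not the study's settings.

```python
# Sketch of sliding-window patch extraction from a 2-D image (list of lists).

def extract_patches(image, window, stride):
    """Return all window x window sub-grids taken at the given stride."""
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows - window + 1, stride):
        for c in range(0, cols - window + 1, stride):
            patches.append([row[c:c + window] for row in image[r:r + window]])
    return patches
```

Larger `window` values fold in more local spatial context per sample, which is the effect the abstract reports improving segmentation accuracy.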

  15. A Minimum Path Algorithm Among 3D-Polyhedral Objects

    NASA Astrophysics Data System (ADS)

    Yeltekin, Aysin

    1989-03-01

    In this work we introduce a minimum path theorem for the 3D case and develop an algorithm based on the theorem we prove. The algorithm is implemented in the software package we developed using the C language. The theorem states that, given the initial point I, the final point F, and a set S of a finite number of static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects. We prove the theorem and develop the following algorithm, based on it, to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to the edge and to the IF line. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths, then chooses the minimum path among the possible ones. The flowchart of the program as well as an examination of its numerical properties are included.
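The geometric step in the abstract above, finding the point Q on an edge where the connection to the line IF is perpendicular to the edge, is the closest point on the edge segment to that line. A minimal sketch under that reading (the linear algebra is standard; the function name is ours):

```python
# Closest point on an edge segment to an infinite line, via the standard
# two-parameter least-squares normal equations, clamped to the segment.

def q_on_edge(e0, e1, i_pt, f_pt):
    """Point Q on edge e0-e1 nearest the line through I and F."""
    d1 = [e1[k] - e0[k] for k in range(3)]      # edge direction
    d2 = [f_pt[k] - i_pt[k] for k in range(3)]  # IF direction
    r = [e0[k] - i_pt[k] for k in range(3)]
    dot = lambda a, b: sum(a[k] * b[k] for k in range(3))
    A, B, C = dot(d1, d1), dot(d2, d2), dot(d1, d2)
    e, f = dot(r, d1), dot(r, d2)
    denom = A * B - C * C
    t = 0.0 if abs(denom) < 1e-12 else (C * f - B * e) / denom
    t = max(0.0, min(1.0, t))  # clamp to the edge segment
    return [e0[k] + t * d1[k] for k in range(3)]
```

When the minimizing t lies strictly inside [0, 1], the segment from Q to the line is perpendicular to the edge, matching the theorem's characterization.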

  16. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities. They need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images.

  17. 7 CFR 51.2952 - Size specifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... specifications. Size shall be specified in accordance with the facts in terms of one of the following classifications: (a) Mammoth size. Mammoth size means walnuts of which not over 12 percent, by count, pass through... foregoing classifications, size of walnuts may be specified in terms of minimum diameter, or minimum and...

  18. 7 CFR 51.2952 - Size specifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... specifications. Size shall be specified in accordance with the facts in terms of one of the following classifications: (a) Mammoth size. Mammoth size means walnuts of which not over 12 percent, by count, pass through... foregoing classifications, size of walnuts may be specified in terms of minimum diameter, or minimum and...

  19. 7 CFR 51.1216 - Size requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) The numerical count or a count-size based on equivalent tray pack size designations or the minimum... numerical count is not shown the minimum diameter shall be plainly stamped, stenciled, or otherwise marked...

  20. 7 CFR 51.1216 - Size requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) The numerical count or a count-size based on equivalent tray pack size designations or the minimum... numerical count is not shown the minimum diameter shall be plainly stamped, stenciled, or otherwise marked...

  1. Practical implementation of channelized hotelling observers: effect of ROI size

    NASA Astrophysics Data System (ADS)

    Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.

    2017-03-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
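The CHO pipeline the abstract describes can be sketched with NumPy: build a bank of Gabor channels, project signal-present and signal-absent ROIs onto them, and compute the Hotelling detectability index from the channelized mean difference and covariance. The Gabor parameterization below (spatial width tied to centre frequency) is an assumed, illustrative choice, not the authors' exact implementation.

```python
import numpy as np

def gabor_channels(roi, freqs, n_orient):
    """Bank of Gabor channel images, one per (frequency, orientation).
    The channel width ws = 1/f is an assumed, illustrative setting."""
    y, x = np.mgrid[:roi, :roi] - (roi - 1) / 2.0
    chans = []
    for f in freqs:
        ws = 1.0 / f
        for k in range(n_orient):
            th = k * np.pi / n_orient
            xr = x * np.cos(th) + y * np.sin(th)
            env = np.exp(-4.0 * np.log(2.0) * (x**2 + y**2) / ws**2)
            chans.append(env * np.cos(2.0 * np.pi * f * xr))
    return np.array(chans)

def cho_dprime(signal_rois, noise_rois, channels):
    """Hotelling detectability d' computed in channel space."""
    U = channels.reshape(len(channels), -1)                 # (nch, npix)
    vs = signal_rois.reshape(len(signal_rois), -1) @ U.T    # channelized data
    vn = noise_rois.reshape(len(noise_rois), -1) @ U.T
    dmean = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    return float(np.sqrt(dmean @ np.linalg.solve(S, dmean)))
```

Shrinking `roi` below the spatial support of the lowest-frequency channel truncates that channel, which is exactly the failure mode the paper characterizes.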

  2. Practical implementation of Channelized Hotelling Observers: Effect of ROI size.

    PubMed

    Ferrero, Andrea; Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H

    2017-03-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.

  3. Design and Optimization of the SPOT Primary Mirror Segment

    NASA Technical Reports Server (NTRS)

    Budinoff, Jason G.; Michaels, Gregory J.

    2005-01-01

The 3 m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86 m point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require segment-to-segment radius of curvature (ROC) variation of approximately 1 micron. Low-cost, replicated segment ROC variations are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment and matching. A mechanical architecture has been designed that allows segment ROC to be adjusted by up to 400 microns while introducing minimal figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments. The back profile of the mirror was developed with shape optimization in MSC.Nastran using optical performance response equations written with SigFit. A candidate back profile was generated which minimized ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit

  4. A minimum spanning forest based classification method for dedicated breast CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu

Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
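The marker-grown minimum spanning forest at the heart of the method can be illustrated with a small NumPy/heap sketch: starting from labelled marker pixels, unlabelled pixels are absorbed in order of increasing edge weight (absolute intensity difference between 4-neighbours). This is a generic MSF grower on a toy image, not the authors' breast-CT pipeline.

```python
import heapq
import numpy as np

def msf_segment(image, markers):
    """Grow a minimum spanning forest from labelled marker pixels.
    image: 2-D float array; markers: 2-D int array (0 = unlabelled).
    Each unlabelled pixel takes the label of whichever marker tree
    reaches it first through the cheapest edges."""
    labels = markers.copy()
    h, w = image.shape
    heap = []

    def push_neighbours(i, j):
        # Offer edges from a labelled pixel to its unlabelled 4-neighbours.
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0:
                weight = abs(float(image[i, j] - image[ni, nj]))
                heapq.heappush(heap, (weight, ni, nj, int(labels[i, j])))

    for i, j in zip(*np.nonzero(markers)):
        push_neighbours(i, j)
    while heap:
        _, i, j, lab = heapq.heappop(heap)
        if labels[i, j] == 0:            # cheapest-edge claim wins
            labels[i, j] = lab
            push_neighbours(i, j)
    return labels
```

Because low-weight edges are always consumed first, region boundaries settle on the largest intensity steps, which is why marker quality (here, the SVM/skin map) largely determines the final segmentation.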

  5. 24 CFR 984.105 - Minimum program size.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVELOPMENT SECTION 8 AND PUBLIC HOUSING FAMILY SELF-SUFFICIENCY PROGRAM General § 984.105 Minimum program... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Minimum program size. 984.105 Section 984.105 Housing and Urban Development Regulations Relating to Housing and Urban Development...

  6. Practical implementation of Channelized Hotelling Observers: Effect of ROI size

    PubMed Central

    Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.

    2017-01-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO’s performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO’s performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies. PMID:28943699

  7. An objective approach to determining the weight ranges of prey preferred by and accessible to the five large African carnivores.

    PubMed

    Clements, Hayley S; Tambling, Craig J; Hayward, Matt W; Kerley, Graham I H

    2014-01-01

Broad-scale models describing predator-prey preferences serve as useful departure points for understanding predator-prey interactions at finer scales. Previous analyses used a subjective approach to identify the prey weight preferences of the five large African carnivores, so their accuracy is questionable. This study uses a segmented model of prey weight versus prey preference to objectively quantify the prey weight preferences of the five large African carnivores. Based on simulations of known prey preference, for prey species sample sizes above 32 the segmented model approach detects up to four known changes in prey weight preference (represented by model break-points) with high rates of detection (75% to 100% of simulations, depending on the number of break-points) and accuracy (within 1.3±4.0 to 2.7±4.4 of the known break-point). When applied to the five large African carnivores, using carnivore diet information from across Africa, the model detected weight ranges of prey that are preferred, killed relative to their abundance, and avoided by each carnivore. Prey in the weight ranges preferred and killed relative to their abundance are together termed "accessible prey". Accessible prey weight ranges were found to be 14-135 kg for cheetah Acinonyx jubatus, 1-45 kg for leopard Panthera pardus, 32-632 kg for lion Panthera leo, 15-1600 kg for spotted hyaena Crocuta crocuta and 10-289 kg for wild dog Lycaon pictus. An assessment of carnivore diets throughout Africa found these accessible prey weight ranges include 88±2% (cheetah), 82±3% (leopard), 81±2% (lion), 97±2% (spotted hyaena) and 96±2% (wild dog) of kills. These descriptions of prey weight preferences therefore contribute to our understanding of the diet spectrum of the five large African carnivores. Where datasets meet the minimum sample size requirements, the segmented model approach provides a means of determining, and comparing, the prey weight range preferences of any carnivore species.
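The segmented-model idea, locating the value at which a response changes slope, can be sketched as a brute-force break-point scan: fit a line to each side of every candidate split and keep the split with the lowest total squared error. A sketch only; the paper's model and its detection statistics are more elaborate.

```python
import numpy as np

def fit_breakpoint(x, y):
    """Fit a two-segment linear model by scanning candidate break-points
    and keeping the split with minimum total sum of squared errors."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_sse, best_bp = np.inf, None
    for k in range(2, len(x) - 2):          # at least 2 points per segment
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            A = np.column_stack([xs, np.ones_like(xs)])
            coef, res, *_ = np.linalg.lstsq(A, ys, rcond=None)
            sse += res[0] if res.size else float(np.sum((A @ coef - ys) ** 2))
        if sse < best_sse:
            best_sse, best_bp = sse, x[k]
    return best_bp
```

Multiple break-points, as in the paper, are usually handled by extending the same idea recursively or with dedicated segmented-regression fitters.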

  8. Effects of counterion size and backbone rigidity on the dynamics of ionic polymer melts and glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao; Bocharova, Vera; Ma, Mengze

Backbone rigidity, counterion size, and the static dielectric constant significantly affect the glass transition temperature, the segmental relaxation time, and the decoupling between counterion and segmental dynamics.

  9. Carbon fiber reinforcements for sheet molding composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozcan, Soydan; Paulauskas, Felix L.

A method of processing a carbon fiber tow includes the steps of providing a carbon fiber tow made of a plurality of carbon filaments, depositing a sizing composition at spaced-apart sizing sites along a length of the tow, leaving unsized interstitial regions of the tow, and cross-cutting the tow into a plurality of segments. Each segment includes at least a portion of one of the sizing sites and at least a portion of at least one of the unsized regions of the tow, the unsized region including an end portion of the segment.

  10. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

Identification of impact craters is a primary requirement to study past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using object-based image analysis (OBIA) techniques to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied in another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.
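A typical shape criterion in such object-based rulesets is circularity, 4πA/P², which is high for compact, crater-like objects and low for elongated ones. The raster sketch below uses a 4-neighbour exposed-edge perimeter estimate, which biases the absolute value, so it is best used comparatively; it illustrates the kind of criterion used, not the authors' ruleset.

```python
import numpy as np

def circularity(mask):
    """4*pi*A / P^2 for a binary mask. Perimeter is the count of exposed
    4-neighbour pixel edges -- a crude raster estimate that under-scores
    circles, so compare values rather than read them as absolute roundness."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    perim = 0
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        perim += int(np.count_nonzero(padded & ~np.roll(padded, shift, axis=axis)))
    return 4.0 * np.pi * area / perim ** 2
```

In a ruleset, a threshold on such a score (together with morphometric measures like depth-to-diameter ratio from the DTM) separates crater candidates from ridges and graben.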

  11. On the Importance of Cycle Minimum in Sunspot Cycle Prediction

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.

  12. ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology

    NASA Astrophysics Data System (ADS)

    Han, Kwangsoo; Kahng, Andrew B.; Lee, Hyein; Wang, Lutong

    2015-10-01

    Self-aligned multiple patterning (SAMP), due to its low overlay error, has emerged as the leading option for 1D gridded back-end-of-line (BEOL) in sub-14nm nodes. To form actual routing patterns from a uniform "sea of wires", a cut mask is needed for line-end cutting or realization of space between routing segments. Constraints on cut shapes and minimum cut spacing result in end-of-line (EOL) extensions and non-functional (i.e. dummy fill) patterns; the resulting capacitance and timing changes must be consistent with signoff performance analyses and their impacts should be minimized. In this work, we address the co-optimization of cut mask layout, dummy fill, and design timing for sub-14nm BEOL design. Our central contribution is an optimizer based on integer linear programming (ILP) to minimize the timing impact due to EOL extensions, considering (i) minimum cut spacing arising in sub-14nm nodes; (ii) cut assignment to different cut masks (color assignment); and (iii) the eligibility to merge two unit-size cuts into a bigger cut. We also propose a heuristic approach to remove dummy fills after the ILP-based optimization by extending the usage of cut masks. Our heuristic can improve critical path performance under minimum metal density and mask density constraints. In our experiments, we study the impact of number of cut masks, minimum cut spacing and metal density under various constraints. Our studies of optimized cut mask solutions in these varying contexts give new insight into the tradeoff of performance and cost that is afforded by cut mask patterning technology options.
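The colour-assignment core of the problem, placing cuts on different masks so that same-mask cuts respect minimum spacing, can be shown with a brute-force stand-in for the paper's ILP. This toy version only checks feasibility for 1-D cut positions and is tractable for a handful of cuts; the actual formulation also models EOL extensions, merging, and timing.

```python
import itertools

def assign_cut_masks(cuts, min_space, n_masks=2):
    """Brute-force colour assignment of 1-D cut positions to masks so
    that cuts on the same mask are at least `min_space` apart.
    Returns the first feasible assignment, or None if none exists."""
    for colours in itertools.product(range(n_masks), repeat=len(cuts)):
        feasible = True
        for a in range(len(cuts)):
            for b in range(a + 1, len(cuts)):
                if colours[a] == colours[b] and abs(cuts[a] - cuts[b]) < min_space:
                    feasible = False
        if feasible:
            return list(colours)
    return None
```

Infeasibility (e.g. an odd cycle of spacing conflicts with two masks) is what forces EOL extensions or dummy fill in the real flow, which is exactly the cost the ILP minimizes.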

  13. Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Dangi, Shusil; Linte, Cristian A.

    2017-03-01

    Segmentation of right ventricle from cardiac MRI images can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of right heart such as right ventricular volume, ejection fraction, myocardial mass and thickness can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. Shape prior obtained by propagating label from an average atlas using affine registration is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation corresponding to the labeling with minimum energy configuration of the graph is obtained via graph-cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and Mean absolute distance error.
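The similarity metrics used for the quantitative comparison are straightforward to compute from binary masks. A small NumPy sketch (brute-force Hausdorff distance, adequate for modest mask sizes):

```python
import numpy as np

def overlap_metrics(a, b):
    """Dice and Jaccard coefficients for two boolean masks."""
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two boolean masks (brute force: all pairwise distances)."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice and Jaccard score overlap, while Hausdorff (and the mean absolute distance the paper also reports) score boundary agreement; the two families catch different failure modes.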

  14. Muskellunge growth potential in northern Wisconsin: implications for trophy management

    USGS Publications Warehouse

    Faust, Matthew D.; Isermann, Daniel A.; Luehring, Mark A.; Hansen, Michael J.

    2015-01-01

    The growth potential of Muskellunge Esox masquinongy was evaluated by back-calculating growth histories from cleithra removed from 305 fish collected during 1995–2011 to determine whether it was consistent with trophy management goals in northern Wisconsin. Female Muskellunge had a larger mean asymptotic length (49.8 in) than did males (43.4 in). Minimum ultimate size of female Muskellunge (45.0 in) equaled the 45.0-in minimum length limit, but was less than the 50.0-in minimum length limit used on Wisconsin's trophy waters, while the minimum ultimate size of male Muskellunge (34.0 in) was less than the statewide minimum length limit. Minimum reproductive sizes for both sexes were less than Wisconsin's trophy minimum length limits. Mean growth potential of female Muskellunge in northern Wisconsin appears to be sufficient for meeting trophy management objectives and angler expectations. Muskellunge in northern Wisconsin had similar growth potential to those in Ontario populations, but lower growth potential than Minnesota's populations, perhaps because of genetic and environmental differences.

  15. Interspecific geographic range size-body size relationship and the diversification dynamics of Neotropical furnariid birds.

    PubMed

    Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M

    2018-05-01

Among the earliest macroecological patterns documented is the relationship between geographic range size and body size, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range sizes required by the species' body size, to untangle its role in the diversification of a species-rich Neotropical bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. Species with the highest and lowest distances to the minimum range size had lower speciation rates, while species at intermediate distances had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  16. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.
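The kind of simulation the study describes, repeatedly drawing sample segments from a classified map and measuring the spread of the resulting area estimates, can be sketched as a small Monte Carlo. The synthetic field and all parameter values below are assumptions for illustration, not the Kansas data.

```python
import numpy as np

def segment_sampling_error(field, seg, n_segments, n_trials, rng):
    """Monte Carlo spread (std of estimation error) when estimating the
    foreground fraction of a binary map from `n_segments` randomly
    placed square segments of side `seg`."""
    h, w = field.shape
    true_frac = field.mean()
    errs = []
    for _ in range(n_trials):
        fracs = []
        for _ in range(n_segments):
            i = rng.integers(0, h - seg + 1)
            j = rng.integers(0, w - seg + 1)
            fracs.append(field[i:i + seg, j:j + seg].mean())
        errs.append(np.mean(fracs) - true_frac)
    return float(np.std(errs))
```

Holding the sampled area fixed while splitting it into more, smaller segments is the comparison that produced the precision result quoted in the abstract.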

  17. Finding the global minimum: a fuzzy end elimination implementation

    NASA Technical Reports Server (NTRS)

    Keller, D. A.; Shibata, M.; Marcus, E.; Ornstein, R. L.; Rein, R.

    1995-01-01

    The 'fuzzy end elimination theorem' (FEE) is a mathematically proven theorem that identifies rotameric states in proteins which are incompatible with the global minimum energy conformation. While implementing the FEE we noticed two different aspects that directly affected the final results at convergence. First, the identification of a single dead-ending rotameric state can trigger a 'domino effect' that initiates the identification of additional rotameric states which become dead-ending. A recursive check for dead-ending rotameric states is therefore necessary every time a dead-ending rotameric state is identified. It is shown that, if the recursive check is omitted, it is possible to miss the identification of some dead-ending rotameric states causing a premature termination of the elimination process. Second, we examined the effects of removing dead-ending rotameric states from further considerations at different moments of time. Two different methods of rotameric state removal were examined for an order dependence. In one case, each rotamer found to be incompatible with the global minimum energy conformation was removed immediately following its identification. In the other, dead-ending rotamers were marked for deletion but retained during the search, so that they influenced the evaluation of other rotameric states. When the search was completed, all marked rotamers were removed simultaneously. In addition, to expand further the usefulness of the FEE, a novel method is presented that allows for further reduction in the remaining set of conformations at the FEE convergence. In this method, called a tree-based search, each dead-ending pair of rotamers which does not lead to the direct removal of either rotameric state is used to reduce significantly the number of remaining conformations. In the future this method can also be expanded to triplet and quadruplet sets of rotameric states. 
We tested our implementation of the FEE by exhaustively searching ten protein segments and found that the FEE identified the global minimum every time. For each segment, the global minimum was exhaustively searched in two different environments: (i) the segments were extracted from the protein and exhaustively searched in the absence of the surrounding residues; (ii) the segments were exhaustively searched in the presence of the remaining residues fixed at crystal structure conformations. We also evaluated the performance of the method for accurately predicting side chain conformations. We examined the influence of factors such as type and accuracy of backbone template used, and the restrictions imposed by the choice of potential function, parameterization and rotamer database. Conclusions are drawn on these results and future prospects are given.
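The elimination criterion and the recursive ("domino") re-check the authors describe can be illustrated with a Desmet-style condition on a toy rotamer problem: rotamer r at position i is dead-ending if some alternative t is lower in energy even when r gets its best-case and t its worst-case pairwise interactions. The random energy model is purely illustrative; the property tested is the theorem's guarantee that the global minimum energy conformation is never eliminated.

```python
import itertools
import numpy as np

def dee(self_e, pair_e):
    """Iterative dead-end elimination with the simple criterion,
    re-checking after every elimination (the 'domino effect').
    self_e[i][r]: self energy of rotamer r at position i.
    pair_e[i][j][r][s]: pairwise energy (symmetric, zero for i == j).
    Returns alive[i] = set of surviving rotamer indices."""
    n = len(self_e)
    alive = [set(range(len(self_e[i]))) for i in range(n)]
    changed = True
    while changed:                       # recursive re-check
        changed = False
        for i in range(n):
            for r in list(alive[i]):
                for t in tuple(alive[i]):
                    if t == r:
                        continue
                    lhs = self_e[i][r] + sum(
                        min(pair_e[i][j][r][s] for s in alive[j])
                        for j in range(n) if j != i)
                    rhs = self_e[i][t] + sum(
                        max(pair_e[i][j][t][s] for s in alive[j])
                        for j in range(n) if j != i)
                    if lhs > rhs:        # r is dead-ending: t always beats it
                        alive[i].discard(r)
                        changed = True
                        break
    return alive
```

Without the outer `while` loop, an elimination late in a pass can leave earlier positions unchecked against the shrunken rotamer sets, which is precisely the premature-termination pitfall the abstract reports.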

  18. Assessing the impacts of dams and levees on the hydrologic record of the Middle and Lower Mississippi River, USA

    USGS Publications Warehouse

    Remo, Jonathan W.F.; Ickes, Brian; Ryherd, Julia K.; Guida, Ross J.; Therrell, Matthew D.

    2018-01-01

The impacts of dams and levees on the long-term (>130 years) discharge record were assessed along a ~1200 km segment of the Mississippi River between St. Louis, Missouri, and Vicksburg, Mississippi. To aid in our evaluation of dam impacts, we used data from the U.S. National Inventory of Dams to calculate the rate of reservoir expansion at five long-term hydrologic monitoring stations along the study segment. We divided the hydrologic record at each station into three periods: (1) a pre-rapid reservoir expansion period; (2) a rapid reservoir expansion period; and (3) a post-rapid reservoir expansion period. We then used three approaches to assess changes in the hydrologic record at each station. Indicators of hydrologic alteration (IHA) and flow duration hydrographs were used to quantify changes in flow conditions between the pre- and post-rapid reservoir expansion periods. Auto-regressive interrupted time series analysis (ARITS) was used to assess trends in maximum annual discharge, mean annual discharge, minimum annual discharge, and standard deviation of daily discharges within a given water year. A one-dimensional HEC-RAS hydraulic model was used to assess the impact of levees on flood flows. Our results revealed that minimum annual discharges and low-flow IHA parameters showed the most significant changes. Additionally, increasing trends in minimum annual discharge during the rapid reservoir expansion period were found at three out of the five hydrologic monitoring stations. These IHA and ARITS results support previous findings consistent with the observation that reservoirs generally have the greatest impacts on low-flow conditions. River segment scale hydraulic modeling revealed levees can modestly increase peak flood discharges, while basin-scale hydrologic modeling assessments by the U.S. Army Corps of Engineers showed that tributary reservoirs reduced peak discharges by a similar magnitude (2 to 30%).
This finding suggests that the effects of dams and levees on peak flood discharges are in part offsetting one another along the modeled river segments and likely other substantially leveed segments of the Mississippi River.
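IHA-style parameters are, at their core, per-water-year statistics of a daily discharge series, on which trends can then be assessed. A minimal sketch follows; the synthetic series and the plain OLS trend are illustrative only (the study's ARITS analysis additionally models autocorrelation and intervention periods).

```python
import numpy as np

def annual_stats(years, q):
    """Per-water-year min, mean, max, and std of daily discharge,
    in the spirit of IHA low/high-flow parameters.
    years: per-day year labels; q: daily discharge values."""
    out = {}
    for y in np.unique(years):
        d = q[years == y]
        out[int(y)] = dict(min=d.min(), mean=d.mean(), max=d.max(), std=d.std())
    return out

def linear_trend(values):
    """Slope of an ordinary least-squares line through yearly values."""
    t = np.arange(len(values), dtype=float)
    return float(np.polyfit(t, np.asarray(values, dtype=float), 1)[0])
```

Applying `linear_trend` to the series of annual minima is the simplest analogue of the rising low-flow trends the study reports during the rapid reservoir expansion period.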

  19. Assessing the impacts of dams and levees on the hydrologic record of the Middle and Lower Mississippi River, USA

    NASA Astrophysics Data System (ADS)

    Remo, Jonathan W. F.; Ickes, Brian S.; Ryherd, Julia K.; Guida, Ross J.; Therrell, Matthew D.

    2018-07-01

The impacts of dams and levees on the long-term (>130 years) discharge record were assessed along a 1200 km segment of the Mississippi River between St. Louis, Missouri, and Vicksburg, Mississippi. To aid in our evaluation of dam impacts, we used data from the U.S. National Inventory of Dams to calculate the rate of reservoir expansion at five long-term hydrologic monitoring stations along the study segment. We divided the hydrologic record at each station into three periods: (1) a pre-rapid reservoir expansion period; (2) a rapid reservoir expansion period; and (3) a post-rapid reservoir expansion period. We then used three approaches to assess changes in the hydrologic record at each station. Indicators of hydrologic alteration (IHA) and flow duration hydrographs were used to quantify changes in flow conditions between the pre- and post-rapid reservoir expansion periods. Auto-regressive interrupted time series analysis (ARITS) was used to assess trends in maximum annual discharge, mean annual discharge, minimum annual discharge, and standard deviation of daily discharges within a given water year. A one-dimensional HEC-RAS hydraulic model was used to assess the impact of levees on flood flows. Our results revealed that minimum annual discharges and low-flow IHA parameters showed the most significant changes. Additionally, increasing trends in minimum annual discharge during the rapid reservoir expansion period were found at three out of the five hydrologic monitoring stations. These IHA and ARITS results support previous findings consistent with the observation that reservoirs generally have the greatest impacts on low-flow conditions. River segment scale hydraulic modeling revealed levees can modestly increase peak flood discharges, while basin-scale hydrologic modeling assessments by the U.S. Army Corps of Engineers showed that tributary reservoirs reduced peak discharges by a similar magnitude (2 to 30%).
This finding suggests that the effects of dams and levees on peak flood discharges are in part offsetting one another along the modeled river segments and likely other substantially leveed segments of the Mississippi River.

  20. Internal Flow Simulation of Enhanced Performance Solid Rocket Booster for the Space Transportation System

    NASA Technical Reports Server (NTRS)

    Ahmad, Rashid A.; McCool, Alex (Technical Monitor)

    2001-01-01

    An enhanced performance solid rocket booster concept for the space shuttle system has been proposed. The concept booster will have strong commonality with the existing, proven, reliable four-segment Space Shuttle Reusable Solid Rocket Motors (RSRM) with individual component design (nozzle, insulator, etc.) optimized for a five-segment configuration. Increased performance is desirable to further enhance safety/reliability and/or increase payload capability. Performance increase will be achieved by adding a fifth propellant segment to the current four-segment booster and opening the throat to accommodate the increased mass flow while maintaining current pressure levels. One development concept under consideration is the static test of a "standard" RSRM with a fifth propellant segment inserted and appropriate minimum motor modifications. Feasibility studies are being conducted to assess the potential for any significant departure in component performance/loading from the well-characterized RSRM. An area of concern is the aft motor (submerged nozzle inlet, aft dome, etc.) where the altered internal flow resulting from the performance enhancing features (25% increase in mass flow rate, higher Mach numbers, modified subsonic nozzle contour) may result in increased component erosion and char. To assess this issue and to define the minimum design changes required to successfully static test a fifth segment RSRM engineering test motor, internal flow studies have been initiated. Internal aero-thermal environments were quantified in terms of conventional convective heating and discrete phase alumina particle impact/concentration and accretion calculations via Computational Fluid Dynamics (CFD) simulation. Two sets of comparative CFD simulations of the RSRM and the five-segment (FSM) concept motor were conducted with the commercial CFD code FLUENT. The first simulation involved a two-dimensional axisymmetric model of the full motor, initial grain RSRM. 
The second set of analyses included three-dimensional models of the RSRM and FSM aft motors with four-degree vectored nozzles.

  1. Model-Based Wavefront Control for CCAT

    NASA Technical Reports Server (NTRS)

    Redding, David; Lou, John Z.; Kissil, Andy; Bradford, Matt; Padin, Steve; Woody, David

    2011-01-01

    The 25-m aperture CCAT submillimeter-wave telescope will have a primary mirror that is divided into 162 individual segments, each of which is provided with 3 positioning actuators. CCAT will be equipped with innovative Imaging Displacement Sensors (IDS): inexpensive optical edge sensors capable of accurately measuring all segment relative motions. These measurements are used in a Kalman-filter-based Optical State Estimator to estimate wavefront errors, permitting use of a minimum-wavefront controller without direct wavefront measurement. This controller corrects the optical impact of errors in 6 degrees of freedom per segment, including lateral translations of the segments, using only the 3 actuated degrees of freedom per segment. The global motions of the Primary and Secondary Mirrors are not measured by the edge sensors; these are controlled using a gravity-sag look-up table. Predicted performance is illustrated by simulated response to errors such as gravity sag.
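    The core idea of estimating segment states from relative edge measurements can be illustrated with a toy least-squares example. This is not CCAT's actual Kalman-filter estimator; the three-segment geometry, sensor matrix, and numbers below are invented for illustration:

    ```python
    import numpy as np

    # Three segments in a row; two edge sensors measure piston differences
    # between neighbours. Recover segment pistons (up to an unobservable
    # global piston, pinned here by fixing p1 = 0) by least squares.
    A = np.array([[-1.0, 1.0, 0.0],   # sensor 1 reads p2 - p1
                  [0.0, -1.0, 1.0]])  # sensor 2 reads p3 - p2
    true_p = np.array([0.0, 2.0, 5.0])
    meas = A @ true_p                 # noiseless measurements: [2, 3]

    # Pin p1 = 0 by dropping its column, then solve the remaining system
    est_23, *_ = np.linalg.lstsq(A[:, 1:], meas, rcond=None)
    est = np.concatenate(([0.0], est_23))  # recovers [0, 2, 5]
    ```

    The real system adds noise, many more sensors, and temporal filtering, but the observability structure (relative measurements, unmeasured global modes handled separately) is the same.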

  2. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various methods of parametric elastic models are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme and minimum distance as the optimization criterion, which is more suitable for weak edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the self-affine mapping system achieves adequate segmentation time and accuracy, and significant independence from initialization.

  3. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  4. [Comparison of Quantification of Myocardial Infarct Size by One Breath Hold Single Shot PSIR Sequence and Segmented FLASH-PSIR Sequence at 3.0 Tesla MR].

    PubMed

    Cheng, Wei; Cai, Shu; Sun, Jia-yu; Xia, Chun-chao; Li, Zhen-lin; Chen, Yu-cheng; Zhong, Yao-zu

    2015-05-01

    To compare two sequences, single shot true-FISP-PSIR (single shot-PSIR) and segmented-turbo-FLASH-PSIR (segmented-PSIR), for quantification of myocardial infarct size at 3.0 Tesla MRI. 38 patients with clinically confirmed myocardial infarction underwent comprehensive gadolinium-enhanced cardiac MRI on a 3.0 Tesla MRI system (Trio, Siemens). Myocardial delayed enhancement (MDE) imaging was performed with the single shot-PSIR and segmented-PSIR sequences separately 12-20 min after gadopentetate dimeglumine injection (0.15 mmol/kg). The quality of MDE images was analysed by experienced physicians. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the two techniques. Myocardial infarct size was quantified automatically by dedicated software (Q-mass, Medis). All subjects were scanned on the 3.0T MR system successfully. No significant difference was found between the two sequences in SNR or CNR (P > 0.05), or in total myocardial volume (P > 0.05). Furthermore, there was no difference in infarct size [single shot-PSIR (30.87 ± 15.72) mL, segmented-PSIR (29.26 ± 14.07) mL] or infarct ratio [single shot-PSIR (22.94% ± 10.94%), segmented-PSIR (20.75% ± 8.78%)] between the two sequences (P > 0.05). However, the average acquisition time of single shot-PSIR (21.4 s) was much shorter than that of segmented-PSIR (380 s). Single shot-PSIR is equivalent to segmented-PSIR in detecting myocardial infarct size with less acquisition time, which is valuable for clinical application and further research.

  5. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
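    The flavour of such a power calculation can be sketched with the standard two-proportion z-test sample size formula. This is a generic textbook approximation, not the paper's derivation (which additionally models reference standard error and voxel correlation):

    ```python
    from math import ceil
    from statistics import NormalDist

    def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
        """Per-group sample size to detect a difference between two proportions
        (e.g. voxel accuracies of two segmentation algorithms) with a
        two-sided z-test. Generic approximation for illustration only."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
        z_b = NormalDist().inv_cdf(power)           # power term
        var = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

    # e.g. detecting a 92% vs. 95% voxel-accuracy difference (made-up values)
    n = sample_size_two_proportions(0.92, 0.95)
    ```

    Larger accuracy differences require fewer subjects, which is the trade-off the paper quantifies against reference standard quality.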

  6. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system by incorporating hierarchical segmentations from HSEG.

  7. The segment as the minimal planning unit in speech production: evidence based on absolute response latencies.

    PubMed

    Kawamoto, Alan H; Liu, Qiang; Lee, Ria J; Grebe, Patricia R

    2014-01-01

    A minimal amount of information about a word must be phonologically and phonetically encoded before a person can begin to utter that word. Most researchers assume that the minimum is the complete word or possibly the initial syllable. However, there is some evidence that the initial segment is sufficient based on longer durations when the initial segment is primed. In two experiments in which the initial segment of a monosyllabic word is primed or not primed, we present additional evidence based on very short absolute response times determined on the basis of acoustic and articulatory onset relative to presentation of the complete target. We argue that the previous failures to find very short absolute response times when the initial segment is primed are due in part to the exclusive use of acoustic onset as a measure of response latency, the exclusion of responses with very short acoustic latencies, the manner of articulation of the initial segment (i.e., plosive vs. nonplosive), and individual differences. Theoretical implications of the segment as the minimal planning unit are considered.

  8. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
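    The redundancy argument can be made concrete with a simple binomial survival model. This is an illustrative stand-in, not the paper's analysis: it assumes each segment is punctured independently with the same probability, and that the radiator functions if at least a required number of segments survive:

    ```python
    from math import comb

    def radiator_survival(n_segments, m_required, p_survive):
        """P(at least m_required of n_segments survive), with independent,
        identical per-segment survival probability p_survive."""
        return sum(comb(n_segments, k)
                   * p_survive ** k
                   * (1 - p_survive) ** (n_segments - k)
                   for k in range(m_required, n_segments + 1))

    # One monolithic radiator that must survive vs. ten segments of which
    # eight suffice (made-up numbers): segmentation raises system reliability.
    p_single = radiator_survival(1, 1, 0.9)
    p_segmented = radiator_survival(10, 8, 0.9)  # exceeds p_single
    ```

    Because the segmented system tolerates some punctures, each segment's wall can be thinned, which is the mass-saving mechanism the paper quantifies.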

  9. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some applications, the solutions of combinatorial optimization problems on graphs must satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete, and thus not polynomially solvable unless P = NP. We also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c, and two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second has an approximation ratio of O(n^δ), for some δ < 1/3.
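    To make the DamkS definition concrete, here is a brute-force solver (exponential in the graph size, so only usable on tiny instances; the paper's contribution is precisely the approximation algorithms that avoid this enumeration). Density is taken as edges divided by vertices:

    ```python
    from itertools import combinations

    def densest_at_most_k(edges, nodes, k):
        """Brute-force DamkS: the subgraph on at most k vertices maximizing
        |E(S)| / |S|. For illustration of the problem definition only."""
        best, best_set = 0.0, set()
        for r in range(1, k + 1):
            for sub in combinations(nodes, r):
                s = set(sub)
                e = sum(1 for u, v in edges if u in s and v in s)
                if e / r > best:
                    best, best_set = e / r, s
        return best, best_set

    # A triangle plus a pendant vertex: the densest <=3-vertex subgraph
    # is the triangle, with density 3 edges / 3 vertices = 1.0
    edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
    d, s = densest_at_most_k(edges, [1, 2, 3, 4], 3)
    ```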

  10. Higher Accuracy of the Lactate Minimum Test Compared to Established Threshold Concepts to Determine Maximal Lactate Steady State in Running.

    PubMed

    Wahl, Patrick; Zwingmann, Lukas; Manunzio, Christian; Wolf, Jacob; Bloch, Wilhelm

    2018-05-18

    This study evaluated the accuracy of the lactate minimum test, in comparison to a graded-exercise test and established threshold concepts (OBLA and mDmax), in determining running speed at maximal lactate steady state. Eighteen subjects performed a lactate minimum test, a graded-exercise test (2.4 m·s⁻¹ start, +0.4 m·s⁻¹ every 5 min), and 2 or more constant-speed tests of 30 min to determine running speed at maximal lactate steady state. The lactate minimum test consisted of an initial lactate priming segment, followed by a short recovery phase. Afterwards, the initial load of the subsequent incremental segment was individually determined and was increased by 0.1 m·s⁻¹ every 120 s. Lactate minimum was determined by the lowest measured value (LMabs) and by a third-order polynomial (LMpol). The mean difference to maximal lactate steady state was +0.01±0.14 m·s⁻¹ (LMabs), 0.04±0.15 m·s⁻¹ (LMpol), -0.06±0.31 m·s⁻¹ (OBLA) and -0.08±0.21 m·s⁻¹ (mDmax). The intraclass correlation coefficient (ICC) between running velocity at maximal lactate steady state and LMabs was highest (ICC=0.964), followed by LMpol (ICC=0.956), mDmax (ICC=0.916) and OBLA (ICC=0.885). Due to the higher accuracy of the lactate minimum test in determining maximal lactate steady state compared to OBLA and mDmax, we suggest the lactate minimum test as a valid and meaningful concept to estimate running velocity at maximal lactate steady state in a single session for moderately to well-trained athletes. © Georg Thieme Verlag KG Stuttgart · New York.
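    The LMpol determination fits a third-order polynomial to the lactate-versus-speed data and takes the speed at the fitted minimum. A minimal sketch (the synthetic data and variable names are illustrative, not the study's):

    ```python
    import numpy as np

    def lactate_minimum_speed(speeds, lactates):
        """Fit a third-order polynomial to lactate vs. speed and return the
        speed at the fitted minimum within the tested range (LMpol-style)."""
        coeffs = np.polyfit(speeds, lactates, 3)
        grid = np.linspace(min(speeds), max(speeds), 1000)
        fitted = np.polyval(coeffs, grid)
        return grid[np.argmin(fitted)]

    # Synthetic incremental-segment data: lactate falls, bottoms out, rises
    speeds = np.arange(2.8, 4.0, 0.1)
    lactates = 0.8 * (speeds - 3.3) ** 2 + 2.0  # minimum placed near 3.3 m/s
    v_lm = lactate_minimum_speed(speeds, lactates)
    ```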

  11. A machine-learning graph-based approach for 3D segmentation of Bruch's membrane opening from glaucomatous SD-OCT volumes.

    PubMed

    Miri, Mohammad Saleh; Abràmoff, Michael D; Kwon, Young H; Sonka, Milan; Garvin, Mona K

    2017-07-01

    Bruch's membrane opening-minimum rim width (BMO-MRW) is a recently proposed structural parameter which estimates the remaining nerve fiber bundles in the retina and is superior to other conventional structural parameters for diagnosing glaucoma. Measuring this structural parameter requires identification of BMO locations within spectral domain-optical coherence tomography (SD-OCT) volumes. While most automated approaches for segmentation of the BMO either segment the 2D projection of BMO points or identify BMO points in individual B-scans, in this work, we propose a machine-learning graph-based approach for true 3D segmentation of BMO from glaucomatous SD-OCT volumes. The problem is formulated as an optimization problem for finding a 3D path within the SD-OCT volume. In particular, the SD-OCT volumes are transferred to the radial domain where the closed loop BMO points in the original volume form a path within the radial volume. The estimated location of BMO points in 3D are identified by finding the projected location of BMO points using a graph-theoretic approach and mapping the projected locations onto the Bruch's membrane (BM) surface. Dynamic programming is employed in order to find the 3D BMO locations as the minimum-cost path within the volume. In order to compute the cost function needed for finding the minimum-cost path, a random forest classifier is utilized to learn a BMO model, obtained by extracting intensity features from the volumes in the training set, and computing the required 3D cost function. The proposed method is tested on 44 glaucoma patients and evaluated using manual delineations. Results show that the proposed method successfully identifies the 3D BMO locations and has significantly smaller errors compared to the existing 3D BMO identification approaches. Published by Elsevier B.V.
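    The minimum-cost path search at the heart of the method can be illustrated in a simplified 2D setting. The sketch below is a generic dynamic program over a cost array with row moves limited to ±1 per column, a 2D analogue of the 3D radial-domain search described above, not the paper's implementation:

    ```python
    import numpy as np

    def min_cost_path(cost):
        """Minimum-cost path through a 2D cost array by dynamic programming,
        advancing one column at a time with row moves of at most +/-1."""
        rows, cols = cost.shape
        acc = cost.astype(float).copy()
        for j in range(1, cols):               # accumulate minimal costs
            for i in range(rows):
                lo, hi = max(0, i - 1), min(rows, i + 2)
                acc[i, j] += acc[lo:hi, j - 1].min()
        path = [int(np.argmin(acc[:, -1]))]    # backtrack from cheapest end
        for j in range(cols - 1, 0, -1):
            i = path[-1]
            lo, hi = max(0, i - 1), min(rows, i + 2)
            path.append(lo + int(np.argmin(acc[lo:hi, j - 1])))
        return path[::-1]

    # A 3x4 cost array with a cheap middle row: the path stays in row 1
    cost = np.ones((3, 4))
    cost[1, :] = 0.0
    path = min_cost_path(cost)
    ```

    In the paper, the cost volume comes from a random forest classifier rather than raw intensities, but the optimization step has this shape.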

  12. Transforaminal Anterior Release for the Treatment of Fixed Sagittal Imbalance and Segmental Kyphosis, Minimum 2-Year Follow-Up Study.

    PubMed

    Sweet, Fred A; Sweet, Andrea

    2015-09-01

    Retrospective review of a prospectively accrued patient cohort. To report minimum 2-year follow-up of a single-surgeon series of 47 consecutive patients in whom fixed sagittal imbalance or segmental kyphosis was treated with a novel unilateral transforaminal annular release. Fixed sagittal imbalance has been treated most recently with pedicle subtraction osteotomy with great success but is associated with significant blood loss and neurologic risk. Forty-seven consecutive patients with fixed sagittal imbalance (n = 29) or segmental kyphosis (n = 18) were treated by a single surgeon with a single-level transforaminal anterior release (TFAR) to effect an opening wedge correction. Sagittal and coronal correction was performed with in situ rod contouring. An interbody cage was captured in the disc space with rod compression. Radiographic and clinical outcome analysis was performed with a minimum 2-year follow-up (range 2-7.8 years). The average increase in lordosis was 36° (range 24°-56°) in the fixed sagittal deformity group. Coronal corrections averaged 34° (range 18°-48°). The average improvement in plumb line was 13.6 cm. There were four pseudarthroses, one at the TFAR. Average blood loss was 578 mL (range 200-1,200 mL). One patient had transient grade 4/5 anterior tibialis weakness. There were no vascular injuries or permanent neurologic deficits. There were significant improvements in the Oswestry Disability Index (p < .001) and Scoliosis Research Society Questionnaire scores (p = .003). Eighty-four percent of patients reported improvement in pain, self-image, and satisfaction with the procedure. TFAR is a useful procedure for correcting segmental kyphosis and fixed sagittal imbalance with relatively low blood loss and was found to be neurologically safe in this single-surgeon series. Therapeutic study, Level IV (case series, no control group). Copyright © 2015 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.

  13. Using maximum entropy to predict suitable habitat for the endangered dwarf wedgemussel in the Maryland Coastal Plain

    USGS Publications Warehouse

    Campbell, Cara; Hilderbrand, Robert H.

    2017-01-01

    Species distribution modelling can be useful for the conservation of rare and endangered species. Freshwater mussel declines have thinned species ranges, producing spatially fragmented distributions across large areas. Spatial fragmentation in combination with a complex life history and heterogeneous environment makes predictive modelling difficult. A machine learning approach (maximum entropy) was used to model occurrences and suitable habitat for the federally endangered dwarf wedgemussel, Alasmidonta heterodon, in Maryland's Coastal Plain catchments. Landscape-scale predictors (e.g. land cover, land use, soil characteristics, geology, flow characteristics, and climate) were used to predict the suitability of individual stream segments for A. heterodon. The best model contained variables at three scales: minimum elevation (segment scale), percentage Tertiary deposits, low intensity development, and woody wetlands (sub-catchment), and percentage low intensity development, pasture/hay agriculture, and average depth to the water table (catchment). Despite a very small sample size owing to the rarity of A. heterodon, cross-validated prediction accuracy was 91%. Most predicted suitable segments occur in catchments not known to contain A. heterodon, which provides opportunities for new discoveries or population restoration. These model predictions can guide surveys toward the streams with the best chance of containing the species or, alternatively, away from those streams with little chance of containing A. heterodon. Developed reaches had low predicted suitability for A. heterodon in the Coastal Plain. Urban and exurban sprawl continues to modify stream ecosystems in the region, underscoring the need to preserve existing populations and to discover and protect new populations.

  14. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computational efficiency, and that it segments pole-like objects particularly well.

  15. 3D ultrasound system to investigate intraventricular hemorrhage in preterm neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; de Ribaupierre, S.; Lee, D. S. C.; Mehta, R.; St. Lawrence, K.; Fenster, A.

    2013-11-01

    Intraventricular hemorrhage (IVH) is a common disorder among preterm neonates that is routinely diagnosed and monitored by 2D cranial ultrasound (US). The cerebral ventricles of patients with IVH often have a period of ventricular dilation (ventriculomegaly). This initial increase in ventricle size can either spontaneously resolve, which often shows clinically as a period of stabilization in ventricle size and eventual decline back towards a more normal size, or progressive ventricular dilation that does not stabilize and which may require interventional therapy to reduce symptoms relating to increased intracranial pressure. To improve the characterization of ventricle dilation, we developed a 3D US imaging system that can be used with a conventional clinical US scanner to image the ventricular system of preterm neonates at risk of ventriculomegaly. A motorized transducer housing was designed specifically for hand-held use inside an incubator using a transducer commonly used for cranial 2D US scans. This system was validated using geometric phantoms, US/MRI compatible ventricle volume phantoms, and patient images to determine 3D reconstruction accuracy and inter- and intra-observer volume estimation variability. 3D US geometric reconstruction was found to be accurate with an error of <0.2%. Measured volumes of a US/MRI compatible ventricle-like phantom were within 5% of gold standard water displacement measurements. Intra-class correlation for the three observers was 0.97, showing very high agreement between observers. The coefficient of variation was between 1.8-6.3% for repeated segmentations of the same patient. The minimum detectable difference was calculated to be 0.63 cm3 for a single observer. Results from ANOVA for three observers segmenting three patients of IVH grade II did not show any significant differences (p > 0.05) for the measured ventricle volumes between observers. 
This 3D US system can reliably produce 3D US images of the neonatal ventricular system. There is the potential to use this system to monitor the progression of ventriculomegaly over time in patients with IVH.
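    The repeatability statistics reported above can be computed in a few lines. As a sketch with made-up volumes: the coefficient of variation is the SD of repeated measurements over their mean, and one common minimum-detectable-difference formulation is 1.96·√2·SD (the study's exact computation may differ):

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def cov_and_mdd(repeated_volumes):
        """Coefficient of variation and a common minimum-detectable-difference
        estimate from repeated segmentations of the same subject."""
        sd = stdev(repeated_volumes)
        cov = sd / mean(repeated_volumes)
        mdd = 1.96 * sqrt(2) * sd   # 95% confidence on a difference of two measurements
        return cov, mdd

    # Three repeated ventricle-volume segmentations in cm^3 (invented values)
    cov, mdd = cov_and_mdd([10.2, 10.5, 10.1])
    ```

    A change in ventricle volume smaller than the MDD cannot be distinguished from measurement variability, which is why this figure matters for monitoring ventriculomegaly over time.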

  16. Word Family Size and French-Speaking Children's Segmentation of Existing Compounds

    ERIC Educational Resources Information Center

    Nicoladis, Elena; Krott, Andrea

    2007-01-01

    The family size of the constituents of compound words, or the number of compounds sharing the constituents, affects English-speaking children's compound segmentation. This finding is consistent with a usage-based theory of language acquisition, whereby children learn abstract underlying linguistic structure through their experience with particular…

  17. Segmentation and clustering as complementary sources of information

    NASA Astrophysics Data System (ADS)

    Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.

    2007-03-01

    This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It enforces coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although segmentation is less efficient than clustering, it provides additional information because it incorporates textural similarity as well as homogeneity. In addition, it can be useful in determining the various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
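    Constrained change-point segmentation of this kind can be sketched with a simple dynamic program. The penalized squared-error criterion below is an illustrative stand-in for the minimum message length criterion, not the paper's model:

    ```python
    import numpy as np

    def segment_sequence(x, penalty=2.0):
        """Optimal change-point segmentation of a 1D sequence: minimize the
        sum of within-segment squared errors plus a fixed per-segment penalty.
        Returns the end indices of the segments found."""
        x = np.asarray(x, dtype=float)
        n = len(x)

        def sse(i, j):  # squared error of x[i:j] about its own mean
            seg = x[i:j]
            return float(((seg - seg.mean()) ** 2).sum())

        best = [0.0] * (n + 1)   # best[j]: optimal cost of segmenting x[:j]
        cut = [0] * (n + 1)      # cut[j]: start of the last segment in x[:j]
        for j in range(1, n + 1):
            best[j], cut[j] = min(
                (best[i] + sse(i, j) + penalty, i) for i in range(j))
        bounds, j = [], n        # recover segment boundaries by backtracking
        while j > 0:
            bounds.append(j)
            j = cut[j]
        return sorted(bounds)

    # Two clearly different regimes -> a single change-point at index 4
    bounds = segment_sequence([0, 0, 0, 0, 10, 10, 10, 10])
    ```

    Each segment is described by one model (here, a mean); the penalty plays the role of the description cost that the message length criterion balances against fit.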

  18. Measuring nanometre-scale electric fields in scanning transmission electron microscopy using segmented detectors.

    PubMed

    Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D

    2017-11-01

    Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of polygons with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various different values of the segment radius. Here by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.
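    The size measure used above, the mean-square radius of gyration, is simply the mean squared distance of the polygon's vertices from their centre of mass. A minimal sketch (the square example is illustrative, not from the paper):

    ```python
    import numpy as np

    def mean_square_radius_of_gyration(vertices):
        """Rg^2 = <|r_i - r_cm|^2> over the N vertices of a closed polygon."""
        r = np.asarray(vertices, dtype=float)
        r_cm = r.mean(axis=0)
        return float(((r - r_cm) ** 2).sum(axis=1).mean())

    # Unit square: every vertex sits at squared distance 0.5 from the centre
    # (0.5, 0.5), so Rg^2 = 0.5
    rg2 = mean_square_radius_of_gyration([(0, 0), (1, 0), (1, 1), (0, 1)])
    ```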

  1. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum-fuel, minimum-time, or minimum direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to decouple the equations of motion by time scale into three sets of dynamics, two of which are analyzed in the main body of this thesis and one in the Appendix. The two-point boundary-value problems obtained by applying the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time scale. Solutions for the first formulation are carried out only to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum-fuel trajectory consists of three segments: a minimum-fuel energy-climb, a cruise-climb, and a minimum-drag glide. The minimum-time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum-fuel trajectory. Moreover, the HSCT has three locally optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best.
Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  2. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
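The precision behavior reported here (more, smaller sampling units give tighter area estimates) can be illustrated with a Monte Carlo sketch. The per-segment wheat fractions below are invented, not the Kansas data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-segment wheat fractions for a region (invented data).
population = rng.beta(2.0, 5.0, size=10_000)

def estimate_sd(n_segments, n_trials=2000):
    """Standard deviation of the regional wheat-area estimate over
    repeated random samples of n_segments sampling units."""
    estimates = [rng.choice(population, size=n_segments, replace=False).mean()
                 for _ in range(n_trials)]
    return float(np.std(estimates))

# Precision improves (the sd of the estimate shrinks) as more segments are sampled.
sd_10, sd_100 = estimate_sd(10), estimate_sd(100)
```

The spread of the estimate falls roughly as the square root of the number of sampled segments, which is the effect the study measured empirically.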

  3. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method in handling training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
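The Dice coefficient used to score spatial overlap in these records has a one-line definition; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1 = structure, 0 = background):
    2|A∩B| / (|A| + |B|), by convention 1.0 when both masks are empty."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 1.0 means perfect overlap with the gold standard, 0.0 means no overlap at all.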

  4. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method in handling training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683

  5. Carotid artery phantom design and simulation using Field II

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yang, Xin; Ding, Mingyue

    2013-10-01

    Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of mortality and disability. The morphology and structure of carotid plaques are key to identifying plaques and monitoring the disease. Manual segmentation of ultrasound images to obtain the best-fitted actual size of carotid plaques, based on a physician's personal experience, is the "gold standard" and an important step in the study of plaque size. However, it is difficult to quantitatively measure the segmentation error caused by the operator's subjective factors. To reduce these subjective factors and the uncertainty of quantification, the experiments in this paper were carried out. In this study, we first designed a carotid artery phantom and then used three different beam-forming algorithms of medical ultrasound to simulate the phantom. Finally, the obtained plaque areas were analyzed through manual segmentation of the simulated images. We could (1) directly evaluate the effect of the different beam-forming algorithms on the ultrasound imaging simulation of the carotid artery; (2) analyze the sensitivity of detection for different plaque sizes; and (3) indirectly assess the accuracy of the manual segmentation based on the evaluation of the segmentation results.

  6. Technical Note: PLASTIMATCH MABS, an open source tool for automatic image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaffino, Paolo; Spadea, Maria Francesca

    Purpose: Multiatlas based segmentation is widely used in many clinical and research applications. Due to its good performance, it has recently been included in some commercial platforms for radiotherapy planning and surgery guidance. However, to date, a tool with no restrictions on anatomical district or image modality has been missing. In this paper we introduce PLASTIMATCH MABS, an open source software that can be used with any image modality for automatic segmentation. Methods: The PLASTIMATCH MABS workflow consists of two main parts: (1) an offline phase, where optimal registration and voting parameters are tuned, and (2) an online phase, where a new patient is labeled from scratch by using the same parameters as identified in the former phase. Several registration strategies, as well as different voting criteria, can be selected. A flexible atlas selection scheme is also available. To prove the effectiveness of the proposed software across anatomical districts and image modalities, it was tested on two very different scenarios: head and neck (H&N) CT segmentation for a radiotherapy application, and magnetic resonance image brain labeling for neuroscience investigation. Results: For the neurological study, the minimum Dice was 0.76 (investigated structures: left and right caudate, putamen, thalamus, and hippocampus). For the head and neck case, the minimum Dice was 0.42 for the most challenging structures (optic nerves and submandibular glands) and 0.62 for the others (mandible, brainstem, and parotid glands). The time required to obtain the labels was compatible with a real clinical workflow (35 and 120 min). Conclusions: The proposed software fills a gap in the multiatlas based segmentation field, since all currently available tools (both commercial and research) are restricted to a well specified application. Furthermore, it can be adopted as a platform for exploring MABS parameters and as a reference implementation for comparing against other segmentation algorithms.
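One of the simplest voting criteria a multiatlas pipeline can offer is per-voxel majority voting over the registered atlas label maps. A hedged sketch of that step only (not PLASTIMATCH MABS's actual implementation, whose registration and voting options are richer):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse per-atlas integer label maps (all the same shape) by
    per-voxel majority vote; ties go to the smallest label."""
    stack = np.stack(label_maps)                 # (n_atlases, *image_shape)
    flat = stack.reshape(stack.shape[0], -1)
    # For each voxel (column), pick the most frequent label across atlases.
    fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, flat)
    return fused.reshape(stack.shape[1:])
```

In a real pipeline each atlas would first be deformably registered to the new patient; weighted or locally adaptive voting generally outperforms this plain majority rule.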

  7. Unsupervised sputum color image segmentation for lung cancer diagnosis based on a Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sammouda, Rachid; Niki, Noboru; Nishitani, Hiroshi; Nakamura, S.; Mori, Shinichiro

    1997-04-01

    The paper presents a method for automatic segmentation of sputum cells in color images, toward an efficient algorithm for lung cancer diagnosis based on a Hopfield neural network. We formulate segmentation as minimization of an energy function with two terms: a cost term, the sum of squared errors, and a second term, temporary noise added to the network as an excitation to escape certain local minima and end closer to the global minimum. To increase the accuracy in segmenting the regions of interest, a preclassification technique is used to extract the sputum cell regions within the color image and remove those of the debris cells. The former are then given, with the raw image, to the input of the Hopfield neural network to make a crisp segmentation by assigning each pixel a label such as background, cytoplasm, or nucleus. The proposed technique yielded correct segmentation of complex scenes of sputum prepared by the ordinary manual staining method in most of the tested images selected from our database of thousands of sputum color images.

  8. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  9. Boundary-to-Marker Evidence-Controlled Segmentation and MDL-Based Contour Inference for Overlapping Nuclei.

    PubMed

    Song, Jie; Xiao, Liang; Lian, Zhichao

    2017-03-01

    This paper presents a novel method for automated morphology delineation and analysis of cell nuclei in histopathology images. Combining the initial segmentation information and a concavity measurement, the proposed method first segments clusters of nuclei into individual pieces, avoiding segmentation errors introduced by scale-constrained Laplacian-of-Gaussian filtering. After that, a nuclear boundary-to-marker evidence computation is introduced to delineate individual objects after the refined segmentation process. The obtained evidence set is then modeled by periodic B-splines under the minimum description length principle, which achieves a practical compromise between the complexity of the nuclear structure and its coverage of the fluorescence signal, avoiding underfitting and overfitting. The algorithm is computationally efficient and has been tested on a synthetic database as well as 45 real histopathology images. Comparing the proposed method with several state-of-the-art methods, experimental results show the superior recognition performance of our method and indicate its potential for analyzing the intrinsic features of nuclei morphology.

  10. Grading vascularity from histopathological images based on traveling salesman distance and vessel size

    NASA Astrophysics Data System (ADS)

    Niazi, M. Khalid Khan; Hemminger, Jessica; Kurt, Habibe; Lozanski, Gerard; Gurcan, Metin

    2014-03-01

    Vascularity represents an important element of the tissue/tumor microenvironment and is implicated in tumor growth, metastatic potential, and resistance to therapy. Small blood vessels can be visualized using immunohistochemical stains specific to vascular cells. However, currently used manual methods to assess vascular density are poorly reproducible and at best semi-quantitative. Computer-based quantitative and objective methods to measure microvessel density are urgently needed to better understand and clinically utilize microvascular density information. We propose a new method to quantify vascularity from images of bone marrow biopsies stained for CD34, a vascular lining cell protein, as a model. The method starts by automatically segmenting the blood vessels by maxlink thresholding and minimum graph cuts. The segmentation is followed by morphological post-processing to remove blasts and small spurious objects from the bone marrow images. To classify the images into one of four grades, we extracted 20 features from the segmented blood vessel images. These features include the first four moments of the distribution of the area of blood vessels, and the first four moments of the distributions of (1) the edge weights in the minimum spanning tree of the blood vessels, (2) the shortest distance between blood vessels, (3) the homogeneity of the shortest distance (absolute difference in distance between consecutive blood vessels along the shortest path) between blood vessels, and (4) blood vessel orientation. The method was tested on 26 bone marrow biopsy images stained with the CD34 IHC stain, which were evaluated by three pathologists. The pathologists took part in this study by quantifying blood vessel density using gestalt assessment in the hematopoietic portions of bone marrow core biopsy images. To determine the intra-reader variability, each image was graded twice by each pathologist with a two-week interval between readings. For each image, the ground truth (grade) was acquired through consensus among the three pathologists at the end of the study. A ranking of the features reveals that the fourth moment of the distribution of the area of blood vessels, along with the first moment of the distribution of the shortest distance between blood vessels, can correctly grade 68.2% of the bone marrow biopsies, while the intra- and inter-reader variability among the pathologists are 66.9% and 40.0%, respectively.
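The per-feature summaries in this record are the first four moments of a distribution. A minimal numpy sketch (population moments, with standardized third and fourth moments):

```python
import numpy as np

def first_four_moments(x):
    """Mean, variance, skewness, and kurtosis of a sample -- the summary
    computed above for each blood-vessel feature distribution."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    var = x.var()                    # population variance (ddof=0)
    sd = np.sqrt(var)
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4)
    return mu, var, skew, kurt
```

Applied to each of the five feature groups (vessel area plus the four listed distributions), this yields the 20-dimensional feature vector.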

  11. Radiographic Response to Yttrium-90 Radioembolization in Anterior Versus Posterior Liver Segments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Saad M.; Lewandowski, Robert J.; Ryu, Robert K.

    2008-11-15

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm³, respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.

  12. Radiographic response to yttrium-90 radioembolization in anterior versus posterior liver segments.

    PubMed

    Ibrahim, Saad M; Lewandowski, Robert J; Ryu, Robert K; Sato, Kent T; Gates, Vanessa L; Mulcahy, Mary F; Kulik, Laura; Larson, Andrew C; Omary, Reed A; Salem, Riad

    2008-01-01

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm(3), respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.
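The response categories reported in these two records (CR, PR, SD, PD) come from fixed percentage cut-offs on lesion size change. A simplified sketch of the RECIST-style rule for target lesions only (the new-lesion and absolute-growth clauses of the full criteria are omitted):

```python
def recist_response(baseline_sum, followup_sum):
    """Classify response from sums of longest lesion diameters using the
    standard RECIST cut-offs: >=30% shrinkage = PR, >=20% growth = PD.
    Simplified: ignores new lesions and minimum absolute increase."""
    if followup_sum == 0:
        return "CR"                              # complete disappearance
    change = (followup_sum - baseline_sum) / baseline_sum
    if change <= -0.30:
        return "PR"                              # partial response
    if change >= 0.20:
        return "PD"                              # progressive disease
    return "SD"                                  # stable disease
```

For example, the mean -41% size change reported for anterior segments would classify as a partial response under this rule.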

  13. Minding the Gaps: Literacy Enhances Lexical Segmentation in Children Learning to Read

    ERIC Educational Resources Information Center

    Havron, Naomi; Arnon, Inbal

    2017-01-01

    Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for…

  14. Discovery of Influenza A Virus Sequence Pairs and Their Combinations for Simultaneous Heterosubtypic Targeting that Hedge against Antiviral Resistance

    PubMed Central

    Lin, Jing; Pramono, Zacharias Aloysius Dwi; Maurer-Stroh, Sebastian

    2016-01-01

    The multiple circulating human influenza A virus subtypes, coupled with perpetual genomic mutations and segment reassortment events, challenge the development of effective therapeutics. The capacity to drug most RNAs motivates the investigation of viral RNA targets. 123,060 segment sequences from 35,938 strains of the most prevalent subtypes that also infect humans (H1N1, 2009 pandemic H1N1, H3N2, H5N1 and H7N9) were used to identify 1,183 conserved RNA target sequences (≥15-mer) in the internal segments. 100% theoretical coverage in simultaneous heterosubtypic targeting is achieved by pairing specific sequences from the same segment ("Duals") or from two segments ("Doubles"); 1,662 Duals and 28,463 Doubles were identified. By combining specific Duals and/or Doubles to form a target graph, wherein an edge connecting two vertices (target sequences) represents a Dual or Double, it is possible to hedge against antiviral resistance while maintaining 100% heterosubtypic coverage. To evaluate the hedging potential, we define the hedge-factor as the minimum number of resistant target sequences that will render the graph resistant, i.e. eliminate all the edges therein; a target sequence or a graph is considered resistant when it cannot achieve 100% heterosubtypic coverage. In an n-vertex graph (n ≥ 3), the hedge-factor is maximal (= n − 1) when it is a complete graph, i.e. every distinct pair in the graph is either a Dual or a Double. Computational analyses uncover an extensive number of complete graphs of different sizes. Monte Carlo simulations show that the mutation counts and time elapsed for a target graph to become resistant increase with the hedge-factor. Incidentally, target sequences which were reported to reduce virus titre in experiments are included in our target graphs.
The identity of target sequence pairs for heterosubtypic targeting and their combinations for hedging antiviral resistance are useful toolkits to construct target graphs for different therapeutic objectives. PMID:26771381
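Under the definition in this record, the hedge-factor of a target graph is the size of a minimum vertex cover: the fewest vertices (resistant target sequences) whose removal eliminates every edge. A brute-force sketch, practical only for small graphs (the function name is ours):

```python
from itertools import combinations

def hedge_factor(n_vertices, edges):
    """Minimum number of vertices whose removal deletes every edge,
    i.e. the size of a minimum vertex cover (exhaustive search)."""
    for k in range(n_vertices + 1):
        for cover in combinations(range(n_vertices), k):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return k
    return n_vertices
```

For a complete graph on n vertices this returns n − 1, matching the maximal hedge-factor stated above; minimum vertex cover is NP-hard in general, so large target graphs would need an approximate or ILP-based solver.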

  15. Fault strength in Marmara region inferred from the geometry of the principle stress axes and fault orientations: A case study for the Prince's Islands fault segment

    NASA Astrophysics Data System (ADS)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data points out that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment translates the motion between the two E-W trending branches of the North Anatolian fault zone; one extending from the Gulf of Izmit towards the Çınarcık basin and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter, where a fault-normal or fault-parallel maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In this presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above mentioned weak fault model.
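The relation between principal-stress orientation and the stresses on a fault plane is the standard two-dimensional Mohr resolution. A sketch (theta is the angle between the fault normal and the sigma-1 axis), showing that the shear stress vanishes when sigma-1 parallels the fault strike:

```python
import math

def resolved_stresses(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal makes an angle
    theta with the maximum compressive stress axis (2-D Mohr circle):
    sigma_n = (s1+s3)/2 + (s1-s3)/2 * cos(2*theta)
    tau     = (s1-s3)/2 * sin(2*theta)"""
    mean = 0.5 * (sigma1 + sigma3)
    dev = 0.5 * (sigma1 - sigma3)
    two_theta = math.radians(2.0 * theta_deg)
    return mean + dev * math.cos(two_theta), dev * math.sin(two_theta)
```

At theta = 90° (sigma-1 along the fault strike) tau is zero, and the ratio tau/sigma_n that a fault can sustain bounds its frictional coefficient, which is the argument used above for a weak Prince's Islands segment.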

  16. An Objective Approach to Determining the Weight Ranges of Prey Preferred by and Accessible to the Five Large African Carnivores

    PubMed Central

    Clements, Hayley S.; Tambling, Craig J.; Hayward, Matt W.; Kerley, Graham I. H.

    2014-01-01

    Broad-scale models describing predator-prey preferences serve as useful departure points for understanding predator-prey interactions at finer scales. Previous analyses used a subjective approach to identify prey weight preferences of the five large African carnivores, hence their accuracy is questionable. This study uses a segmented model of prey weight versus prey preference to objectively quantify the prey weight preferences of the five large African carnivores. Based on simulations of known predator-prey preference, for prey species sample sizes above 32 the segmented model approach detects up to four known changes in prey weight preference (represented by model break-points) with high rates of detection (75% to 100% of simulations, depending on the number of break-points) and accuracy (within 1.3±4.0 to 2.7±4.4 of the known break-point). When applied to the five large African carnivores, using carnivore diet information from across Africa, the model detected weight ranges of prey that are preferred, killed relative to their abundance, and avoided by each carnivore. Prey in the weight ranges preferred and killed relative to their abundance are together termed “accessible prey”. Accessible prey weight ranges were found to be 14–135 kg for cheetah Acinonyx jubatus, 1–45 kg for leopard Panthera pardus, 32–632 kg for lion Panthera leo, 15–1600 kg for spotted hyaena Crocuta crocuta and 10–289 kg for wild dog Lycaon pictus. An assessment of carnivore diets throughout Africa found these accessible prey weight ranges include 88±2% (cheetah), 82±3% (leopard), 81±2% (lion), 97±2% (spotted hyaena) and 96±2% (wild dog) of kills. These descriptions of prey weight preferences therefore contribute to our understanding of the diet spectrum of the five large African carnivores. Where datasets meet the minimum sample size requirements, the segmented model approach provides a means of determining, and comparing, the prey weight range preferences of any carnivore species. PMID:24988433
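A single-break segmented fit of preference versus prey weight can be sketched by scanning candidate break-points and minimizing the total squared error of two straight-line segments. This is a simplified stand-in for the authors' segmented model, which handles multiple break-points:

```python
import numpy as np

def one_breakpoint(x, y):
    """Fit two line segments with one break-point by scanning candidate
    split positions and keeping the split with the lowest total SSE."""
    best_x, best_sse = None, np.inf
    for i in range(2, len(x) - 2):               # leave >=2 points per segment
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)         # least-squares line
            sse += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if sse < best_sse:
            best_x, best_sse = x[i], sse
    return best_x
```

Production segmented-regression packages additionally fit the break-point as a continuous parameter and enforce continuity at the join; this sketch only locates the split on the observed grid.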

  17. Guidance of a Solar Sail Spacecraft to the Sun-Earth L(2) Point.

    NASA Astrophysics Data System (ADS)

    Hur, Sun Hae

    The guidance of a solar sail spacecraft along a minimum-time path from an Earth orbit to a region near the Sun-Earth L_2 libration point is investigated. Possible missions to this point include a spacecraft "listening" for possible extra-terrestrial electromagnetic signals and a science payload to study the geomagnetic tail. A key advantage of the solar sail is that it requires no fuel. The control variables are the sail angles relative to the Sun-Earth line. The thrust is very small, on the order of 1 mm/s^2, and its magnitude and direction are highly coupled. Despite this limited controllability, the "free" thrust can be used for a wide variety of terminal conditions including halo orbits. If the Moon's mass is lumped with the Earth, there are quasi-equilibrium points near L_2. However, they are unstable so that some form of station keeping is required, and the sail can provide this without any fuel usage. In the two-dimensional case, regulating about a nominal orbit is shown to require less control and result in smaller amplitude error response than regulating about a quasi-equilibrium point. In the three-dimensional halo orbit case, station keeping using periodically varying gains is demonstrated. To compute the minimum-time path, the trajectory is divided into two segments: the spiral segment and the transition segment. The spiral segment is computed using a control law that maximizes the rate of energy increase at each time. The transition segment is computed as the solution of the time-optimal control problem from the endpoint of the spiral to the terminal point. It is shown that the path resulting from this approximate strategy is very close to the exact optimal path. For the guidance problem, the approximate strategy in the spiral segment already gives a nonlinear full-state feedback law. However, for large perturbations, follower guidance using an auxiliary propulsion is used for part of the spiral. 
In the transition segment, neighboring extremal feedback guidance using the solar sail, with feedforward control only near the terminal point, is used to correct perturbations in the initial conditions.

  18. Methodology and results of calculating central California surface temperature trends: Evidence of human-induced climate change?

    USGS Publications Warehouse

    Christy, J.R.; Norris, W.B.; Redmond, K.; Gallo, K.P.

    2006-01-01

A procedure is described to construct time series of regional surface temperatures and is then applied to interior central California stations to test the hypothesis that century-scale trend differences between irrigated and nonirrigated regions may be identified. The procedure requires documentation of every point in time at which a discontinuity in a station record may have occurred through (a) the examination of metadata forms (e.g., station moves) and (b) simple statistical tests. From this, "homogeneous segments" of temperature records for each station are defined. Biases are determined for each segment relative to all others through a method employing mathematical graph theory. The debiased segments are then merged, forming a complete regional time series. Time series of daily maximum and minimum temperatures for stations in the irrigated San Joaquin Valley (Valley) and nearby nonirrigated Sierra Nevada (Sierra) were generated for 1910-2003. Results show that twentieth-century Valley minimum temperatures are warming at a highly significant rate in all seasons, being greatest in summer and fall (> +0.25 °C decade⁻¹). The Valley trend of annual mean temperatures is +0.07 ± 0.07 °C decade⁻¹. Sierra summer and fall minimum temperatures appear to be cooling, but at a less significant rate, while the trend of annual mean Sierra temperatures is an unremarkable −0.02 ± 0.10 °C decade⁻¹. A working hypothesis is that the relative positive trends in Valley minus Sierra minima (> 0.4 °C decade⁻¹ for summer and fall) are related to the altered surface environment brought about by the growth of irrigated agriculture, essentially changing a high-albedo desert into a darker, moister, vegetated plain. © 2006 American Meteorological Society.
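The graph-based debiasing step described above can be illustrated as a small least-squares problem: given pairwise bias estimates between homogeneous segments, solve for one offset per segment and subtract it before merging. This is a minimal sketch of the general idea, not the authors' exact formulation; the segment indices and bias values below are hypothetical.

```python
import numpy as np

# Hypothetical pairwise bias estimates between homogeneous segments:
# (i, j, d_ij) means segment i reads d_ij degrees warmer than segment j.
pairs = [(0, 1, 0.30), (1, 2, -0.10), (0, 2, 0.25)]
n_seg = 3

# Build the over-determined system b_i - b_j = d_ij, plus one row
# pinning b_0 = 0 so the offsets have a unique reference level.
A = np.zeros((len(pairs) + 1, n_seg))
d = np.zeros(len(pairs) + 1)
for row, (i, j, dij) in enumerate(pairs):
    A[row, i], A[row, j] = 1.0, -1.0
    d[row] = dij
A[-1, 0] = 1.0  # reference constraint b_0 = 0

# Least-squares solution reconciles the (possibly inconsistent) pairs.
biases, *_ = np.linalg.lstsq(A, d, rcond=None)

# Debias a segment's temperatures by subtracting its fitted offset.
def debias(seg_id, temps):
    return np.asarray(temps) - biases[seg_id]

print(np.round(biases, 3))
```

The merged regional series would then concatenate the debiased segments; in the real method the graph structure also decides which pairwise comparisons are reliable enough to include.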

  19. Segmentation of touching handwritten Japanese characters using the graph theory method

    NASA Astrophysics Data System (ADS)

    Suwa, Misako

    2000-12-01

Projection analysis methods have been widely used to segment Japanese character strings. However, if adjacent characters have overhanging strokes, or if a touching point does not correspond to a minimum of the projection histogram, these methods are prone to errors. In contrast, non-projection methods proposed for numerals or alphabetic characters cannot simply be applied to Japanese characters because of differences in character structure. Based on an oversegmenting strategy, a new pre-segmentation method is presented in this paper: touching patterns are represented as graphs, and touching strokes are regarded as elements of proper edge cutsets. Using graph-theoretical techniques, the cutset matrix is calculated. Then, by applying pruning rules, potential touching strokes are determined and the patterns are oversegmented. Simulations confirmed that the algorithm is also valid for touching patterns with overhanging strokes and for doubly connected patterns.
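The cutset idea can be illustrated with a toy graph. This is only a sketch, not the paper's algorithm (which computes a full cutset matrix and applies pruning rules): here just size-1 cutsets (bridge strokes) are found by brute force, and the node and edge names are invented.

```python
# Toy illustration of the cutset idea: strokes are edges between junction
# nodes, and a stroke is a candidate touching stroke if removing it splits
# the pattern into separate components.

def n_components(nodes, edges):
    """Count connected components with a simple union-find."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(n) for n in nodes})

def candidate_touching_strokes(nodes, edges):
    """Edges whose removal disconnects the graph (size-1 cutsets)."""
    return [e for e in edges
            if n_components(nodes, [f for f in edges if f != e]) > 1]

# Hypothetical pattern: two closed character sub-graphs (triangles)
# joined by a single touching stroke c-d.
nodes = list("abcdef")
edges = [("a", "b"), ("b", "c"), ("c", "a"),
         ("d", "e"), ("e", "f"), ("f", "d"),
         ("c", "d")]
print(candidate_touching_strokes(nodes, edges))  # [('c', 'd')]
```

Cutting the candidate stroke oversegments the pattern into the two character sub-graphs, which a downstream recognizer would then validate.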

  20. Flight evaluation of two segment approaches for jet transport noise abatement

    NASA Technical Reports Server (NTRS)

    Rogers, R. A.; Wohl, B.; Gale, C. M.

    1973-01-01

    A 75 flight-hour operational evaluation was conducted with a representative four-engine fan-jet transport in a representative airport environment. The flight instrument systems were modified to automatically provide pilots with smooth and continuous pitch steering command information during two-segment approaches. Considering adverse weather, minimum ceiling and flight crew experience criteria, a transition initiation altitude of approximately 800 feet AFL would have broadest acceptance for initiating two-segment approach procedures in scheduled service. The profile defined by the system gave an upper glidepath of approximately 6 1/2 degrees. This was 1/2 degree greater than inserted into the area navigation system. The glidepath error is apparently due to an erroneous along-track, distance-to-altitude profile.

  1. The minimum or natural rate of flow and droplet size ejected by Taylor cone-jets: physical symmetries and scaling laws

    NASA Astrophysics Data System (ADS)

    Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.

    2013-03-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.

  2. Scale effects and morphological diversification in hindlimb segment mass proportions in neognath birds.

    PubMed

    Kilbourne, Brandon M

    2014-01-01

In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages, and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses - thigh, shank, pes, tarsometatarsal segment, and digits - from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel's λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch length scaling to detect phylogenetic signal (i.e., Pagel's λ) and increasing or decreasing rates of trait change over time (i.e., Pagel's δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass than to bone length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than of proximo-distal differences in the scaling of limb segment mass. 
Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may underlie scale effects in stride frequency and length between smaller and larger neognaths. While variation in the linear proportions of limbs appears to be governed by developmental mechanisms, variation in mass proportions does not appear to be so constrained.
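The test for departures from isometry amounts to a log-log regression whose slope is compared with the isometric expectation (1.0 for a mass scaled against body mass). A minimal sketch with simulated data; the species values and the 1.15 exponent are invented for illustration, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 38 species: log10 body mass and log10 thigh mass
# generated with a built-in allometric exponent of 1.15.
log_body = rng.uniform(1.0, 4.0, 38)
log_thigh = 1.15 * log_body - 2.0 + rng.normal(0.0, 0.05, 38)

# OLS fit on log-log axes; isometry predicts slope == 1.0 (mass vs. mass).
slope, intercept = np.polyfit(log_body, log_thigh, 1)

# Standard error of the slope, for a rough 95% CI against isometry.
resid = log_thigh - (slope * log_body + intercept)
se = np.sqrt(resid @ resid / (len(log_body) - 2)
             / np.sum((log_body - log_body.mean()) ** 2))
positively_allometric = slope - 1.96 * se > 1.0
print(round(slope, 3), positively_allometric)
```

A phylogenetically informed analysis (as in the study) would replace plain OLS with a method that accounts for shared ancestry, but the slope-versus-isometry logic is the same.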

  3. Scale effects and morphological diversification in hindlimb segment mass proportions in neognath birds

    PubMed Central

    2014-01-01

Introduction In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages, and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses – thigh, shank, pes, tarsometatarsal segment, and digits – from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel’s λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. Results All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch length scaling to detect phylogenetic signal (i.e., Pagel’s λ) and increasing or decreasing rates of trait change over time (i.e., Pagel’s δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. Conclusions The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass than to bone length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than of proximo-distal differences in the scaling of limb segment mass. 
Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may underlie scale effects in stride frequency and length between smaller and larger neognaths. While variation in the linear proportions of limbs appears to be governed by developmental mechanisms, variation in mass proportions does not appear to be so constrained. PMID:24876886

  4. 49 CFR 192.921 - How is the baseline assessment to be conducted?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas... the covered pipeline segments for the baseline assessment according to a risk analysis that considers...

  5. 49 CFR 192.179 - Transmission line valves.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Design of Pipeline Components § 192.179 Transmission line valves. (a) Each transmission line, other than offshore segments, must have sectionalizing block... 49 Transportation 3 2010-10-01 2010-10-01 false Transmission line valves. 192.179 Section 192.179...

  6. 49 CFR 192.179 - Transmission line valves.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Design of Pipeline Components § 192.179 Transmission line valves. (a) Each transmission line, other than offshore segments, must have sectionalizing block... 49 Transportation 3 2013-10-01 2013-10-01 false Transmission line valves. 192.179 Section 192.179...

  7. 49 CFR 192.179 - Transmission line valves.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Design of Pipeline Components § 192.179 Transmission line valves. (a) Each transmission line, other than offshore segments, must have sectionalizing block... 49 Transportation 3 2011-10-01 2011-10-01 false Transmission line valves. 192.179 Section 192.179...

  8. 49 CFR 192.179 - Transmission line valves.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Design of Pipeline Components § 192.179 Transmission line valves. (a) Each transmission line, other than offshore segments, must have sectionalizing block... 49 Transportation 3 2014-10-01 2014-10-01 false Transmission line valves. 192.179 Section 192.179...

  9. 49 CFR 192.179 - Transmission line valves.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Design of Pipeline Components § 192.179 Transmission line valves. (a) Each transmission line, other than offshore segments, must have sectionalizing block... 49 Transportation 3 2012-10-01 2012-10-01 false Transmission line valves. 192.179 Section 192.179...

  10. Minimum Description Length Block Finder, a Method to Identify Haplotype Blocks and to Compare the Strength of Block Boundaries

    PubMed Central

    Mannila, H.; Koivisto, M.; Perola, M.; Varilo, T.; Hennah, W.; Ekelund, J.; Lukk, M.; Peltonen, L.; Ukkonen, E.

    2003-01-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates. PMID:12761696

  11. Minimum description length block finder, a method to identify haplotype blocks and to compare the strength of block boundaries.

    PubMed

    Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E

    2003-07-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.
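The dynamic program for an optimal segmentation can be sketched as follows. This is a generic stand-in, not the authors' MDL code length: segment quality here is squared error around the segment mean plus a fixed per-segment penalty, but the O(n²) recursion over candidate block boundaries has the same shape.

```python
# Minimal dynamic program for optimal sequence segmentation, in the
# spirit of the MDL approach: minimise (sum of per-segment costs) where
# each segment pays its squared error around the segment mean plus a
# fixed penalty standing in for the description length of the block.

def optimal_segmentation(xs, penalty):
    """Return the list of (start, end) segments minimising total cost."""
    n = len(xs)
    s = [0.0] * (n + 1)   # prefix sums
    s2 = [0.0] * (n + 1)  # prefix sums of squares
    for i, x in enumerate(xs):
        s[i + 1] = s[i] + x
        s2[i + 1] = s2[i] + x * x

    def cost(i, j):  # squared error of xs[i:j] around its mean, + penalty
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / (j - i) + penalty

    best = [0.0] + [float("inf")] * n   # best[j]: optimal cost of xs[:j]
    back = [0] * (n + 1)                # back[j]: start of the last block
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i

    # Reconstruct the block boundaries from the back pointers.
    segments, j = [], n
    while j > 0:
        segments.append((back[j], j))
        j = back[j]
    return segments[::-1]

print(optimal_segmentation([0, 0, 0, 5, 5, 5], penalty=1.0))
# [(0, 3), (3, 6)]
```

In the haplotype setting the per-segment cost would instead be the probabilistic code length of the markers in the block, and boundary probabilities follow from summing over segmentations rather than taking only the optimum.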

  12. Genetics of hybrid male sterility between drosophila sibling species: a complex web of epistasis is revealed in interspecific studies.

    PubMed

    Palopoli, M F; Wu, C I

    1994-10-01

    To study the genetic differences responsible for the sterility of their male hybrids, we introgressed small segments of an X chromosome from Drosophila simulans into a pure Drosophila mauritiana genetic background, then assessed the fertility of males carrying heterospecific introgressions of varying size. Although this analysis examined less than 20% of the X chromosome (roughly 5% of the euchromatic portion of the D. simulans genome), and the segments were introgressed in only one direction, a minimum of four factors that contribute to hybrid male sterility were revealed. At least two of the factors exhibited strong epistasis: males carrying either factor alone were consistently fertile, whereas males carrying both factors together were always sterile. Distinct spermatogenic phenotypes were observed for sterile introgressions of different lengths, and it appeared that an interaction between introgressed segments also influenced the stage of spermatogenic defect. Males with one category of introgression often produced large quantities of motile sperm and were observed copulating, but never inseminated females. Evidently these two species have diverged at a large number of loci which have varied effects on hybrid male fertility. By extrapolation, we estimate that there are at least 40 such loci on the X chromosome alone. Because these species exhibit little DNA-sequence divergence at arbitrarily chosen loci, it seems unlikely that the extensive functional divergence observed could be due mainly to random genetic drift. Significant epistasis between conspecific genes appears to be a common component of hybrid sterility between recently diverged species of Drosophila. The linkage relationships of interacting factors could shed light on the role played by epistatic selection in the dynamics of the allele substitutions responsible for reproductive barriers between species.

  13. Genetics of Hybrid Male Sterility between Drosophila Sibling Species: A Complex Web of Epistasis Is Revealed in Interspecific Studies

    PubMed Central

    Palopoli, M. F.; Wu, C. I.

    1994-01-01

    To study the genetic differences responsible for the sterility of their male hybrids, we introgressed small segments of an X chromosome from Drosophila simulans into a pure Drosophila mauritiana genetic background, then assessed the fertility of males carrying heterospecific introgressions of varying size. Although this analysis examined less than 20% of the X chromosome (roughly 5% of the euchromatic portion of the D. simulans genome), and the segments were introgressed in only one direction, a minimum of four factors that contribute to hybrid male sterility were revealed. At least two of the factors exhibited strong epistasis: males carrying either factor alone were consistently fertile, whereas males carrying both factors together were always sterile. Distinct spermatogenic phenotypes were observed for sterile introgressions of different lengths, and it appeared that an interaction between introgressed segments also influenced the stage of spermatogenic defect. Males with one category of introgression often produced large quantities of motile sperm and were observed copulating, but never inseminated females. Evidently these two species have diverged at a large number of loci which have varied effects on hybrid male fertility. By extrapolation, we estimate that there are at least 40 such loci on the X chromosome alone. Because these species exhibit little DNA-sequence divergence at arbitrarily chosen loci, it seems unlikely that the extensive functional divergence observed could be due mainly to random genetic drift. Significant epistasis between conspecific genes appears to be a common component of hybrid sterility between recently diverged species of Drosophila. The linkage relationships of interacting factors could shed light on the role played by epistatic selection in the dynamics of the allele substitutions responsible for reproductive barriers between species. PMID:7828817

  14. Fuzzy pulmonary vessel segmentation in contrast enhanced CT data

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til

    2008-03-01

    Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
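The a posteriori sensitivity/specificity adjustment described above reduces to thresholding the stored probability map, so no re-segmentation is needed. A minimal sketch; the probability values below are invented for illustration.

```python
import numpy as np

# Hypothetical fuzzy output: per-voxel probability of belonging to the
# vascular tree (a tiny 2D map standing in for a CT volume).
vessel_prob = np.array([[0.95, 0.40, 0.05],
                        [0.80, 0.65, 0.10],
                        [0.20, 0.70, 0.02]])

def binarize(prob, min_vessel_prob):
    """Threshold the fuzzy map: lower thresholds trade specificity for
    sensitivity without re-running the segmentation."""
    return prob >= min_vessel_prob

high_specificity = binarize(vessel_prob, 0.9)   # fewer, surer voxels
high_sensitivity = binarize(vessel_prob, 0.5)   # more voxels, more false alarms
print(int(high_specificity.sum()), int(high_sensitivity.sum()))
```

Sweeping the threshold over [0, 1] and comparing each binarization against a reference segmentation is exactly what generates the ROC curves reported in the evaluation.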

  15. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Treesearch

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.

  16. 50 CFR 622.50 - Caribbean spiny lobster import prohibitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Caribbean spiny lobster import... ATLANTIC Management Measures § 622.50 Caribbean spiny lobster import prohibitions. (a) Minimum size limits for imported spiny lobster. There are two minimum size limits that apply to importation of spiny...

  17. 50 CFR 622.50 - Caribbean spiny lobster import prohibitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Caribbean spiny lobster import... ATLANTIC Management Measures § 622.50 Caribbean spiny lobster import prohibitions. (a) Minimum size limits for imported spiny lobster. There are two minimum size limits that apply to importation of spiny...

  18. 50 CFR 648.72 - Minimum surf clam size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Atlantic Surf Clam and Ocean Quahog Fisheries § 648.72 Minimum surf clam size. Link to an amendment... quahog specifications. (a) Establishing catch quotas. The amount of surfclams or ocean quahogs that may... paragraph (b) of this section. The amount of surfclams available for harvest annually must be specified...

  19. Relation between inflammables and ignition sources in aircraft environments

    NASA Technical Reports Server (NTRS)

    Scull, Wilfred E

    1951-01-01

    A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings through which flame will not propagate are presented and discussed. Ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressures and minimum size of opening for flame propagation in gasoline-air mixtures are included; inerting of gasoline-air mixtures is discussed.

  20. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.
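The clustering idea can be sketched with a plain k-means on grey values. This stands in for the paper's medical image analysis-clustering method (which is more flexible and handles non-bone inclusions); the voxel intensities below are invented.

```python
import numpy as np

# Minimal intensity-clustering sketch: separate air-, matrix- and
# bone-like grey values without a hand-picked global threshold.
def kmeans_1d(values, k, iters=20):
    # Deterministic quantile initialisation keeps the example reproducible.
    centers = np.quantile(values, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        # Assign each voxel to the nearest cluster center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned voxels.
        centers = np.array([values[labels == c].mean() if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels, centers

# Hypothetical grey values: three air-like, three matrix-like, three bone-like.
voxels = np.array([5, 8, 6, 120, 115, 130, 820, 790, 810], dtype=float)
labels, centers = kmeans_1d(voxels, k=3)
print(labels)   # [0 0 0 1 1 1 2 2 2]
```

A real pipeline would cluster the full μCT volume (possibly with spatial features) and keep the cluster identified as bone; the point here is only that cluster assignment replaces a single subjective threshold.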

  1. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
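The core of the model (2D statistics driving a 3D mask) can be sketched as a single update step. This is a toy illustration of the data term only, without the level set machinery or the curvature smoothing that bridges broken paths; the volume is synthetic.

```python
import numpy as np

# Toy version of the 2D-in-3D idea: each voxel is classified against the
# inside/outside means of its own 2D slice, while the mask lives in 3D.
def update_mask(volume, mask):
    new_mask = np.zeros_like(mask)
    for z in range(volume.shape[0]):
        sl, m = volume[z], mask[z]
        if not (m.any() and (~m).any()):
            new_mask[z] = m          # degenerate slice: leave unchanged
            continue
        mu_in, mu_out = sl[m].mean(), sl[~m].mean()
        # Chan-Vese-style data term: each pixel joins the closer region mean.
        new_mask[z] = (sl - mu_in) ** 2 < (sl - mu_out) ** 2
    return new_mask

# One synthetic slice: a bright 2x2 "cell" seeded by a single pixel.
volume = np.array([[[0., 0., 0.],
                    [0., 1., 1.],
                    [0., 1., 1.]]])
seed = np.zeros(volume.shape, dtype=bool)
seed[0, 1, 1] = True
grown = update_mask(volume, seed)
print(grown[0].astype(int))
```

In the full model this per-slice classification is coupled across slices through the minimum-mean-curvature constraint on the 3D level set surface, which is what extends contours across frames where a cell disappears.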

  2. 49 CFR 192.485 - Remedial measures: Transmission lines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.485 Remedial measures: Transmission lines. (a) General corrosion. Each segment of transmission line with general corrosion and with a remaining wall thickness less than that required for the...

  3. 49 CFR 192.485 - Remedial measures: Transmission lines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.485 Remedial measures: Transmission lines. (a) General corrosion. Each segment of transmission line with general corrosion and with a remaining wall thickness less than that required for the...

  4. 49 CFR 192.485 - Remedial measures: Transmission lines.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.485 Remedial measures: Transmission lines. (a) General corrosion. Each segment of transmission line with general corrosion and with a remaining wall thickness less than that required for the...

  5. 49 CFR 192.485 - Remedial measures: Transmission lines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Requirements for Corrosion Control § 192.485 Remedial measures: Transmission lines. (a) General corrosion. Each segment of transmission line with general corrosion and with a remaining wall thickness less than that required for the...

  6. 48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...

  7. 48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...

  8. 48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...

  9. 48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...

  10. Dynamics of Trees of Fragmenting Granules in the Quiet Sun: Hinode/SOT Observations Compared to Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Malherbe, J.-M.; Roudier, T.; Stein, R.; Frank, Z.

    2018-01-01

    We compare horizontal velocities, vertical magnetic fields, and the evolution of trees of fragmenting granules (TFG, also named families of granules) derived in the quiet Sun at disk center from observations at solar minimum and maximum of the Solar Optical Telescope (SOT on board Hinode) and results of a recent 3D numerical simulation of the magneto-convection. We used 24-hour sequences of a 2D field of view (FOV) with high spatial and temporal resolution recorded by the SOT Broadband Filter Imager (BFI) and Narrowband Filter Imager (NFI). TFG were evidenced by segmentation and labeling of continuum intensities. Horizontal velocities were obtained from local correlation tracking (LCT) of proper motions of granules. Stokes V provided a proxy of the line-of-sight magnetic field (BLOS). The MHD simulation (performed independently) produced granulation intensities, velocity, and magnetic field vectors. We discovered that TFG also form in the simulation and show that it is able to reproduce the main properties of solar TFG: lifetime and size, associated horizontal motions, corks, and diffusive index are close to observations. The largest (but not numerous) families are related in both cases to the strongest flows and could play a major role in supergranule and magnetic network formation. We found that observations do not reveal any significant variation in TFG between solar minimum and maximum.

  11. Size of the Dynamic Bead in Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agapov, Alexander L; Sokolov, Alexei P

    2010-01-01

    Presented analysis of neutron, mechanical, and MD simulation data available in the literature demonstrates that the dynamic bead size (the smallest subchain that still exhibits Rouse-like dynamics) in most of the polymers is significantly larger than the traditionally defined Kuhn segment. Moreover, our analysis emphasizes that even the static bead size (e.g., chain statistics) disagrees with the Kuhn segment length. We demonstrate that the deficiency of the Kuhn segment definition is based on the assumption of a chain being completely extended inside a single bead. The analysis suggests that representation of a real polymer chain by the bead-and-spring model with a single parameter C cannot be correct. One needs more parameters to correctly reflect the details of the chain structure in the bead-and-spring model.

  12. The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images.

    PubMed

    Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M

    2017-05-01

    To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (O_a) between the margins, and area under the ROC curves (A_z). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R² of 0.91). O_a was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall O_a between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. A_z for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of A_z between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
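The margin-overlap comparison above can be reproduced on binary masks. A minimal sketch, assuming O_a is an intersection-over-union (Jaccard) measure; the abstract does not give the paper's exact normalization:

```python
import numpy as np

def overlap_area(mask_a, mask_b):
    """Overlap between two binary segmentation masks, computed as
    intersection over union (Jaccard). One plausible reading of the
    paper's O_a; the abstract does not give the exact formula."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

# Two toy 'lesion' delineations that disagree by one pixel column.
auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True
manual = np.zeros((8, 8), dtype=bool); manual[2:6, 3:7] = True
print(overlap_area(auto, manual))  # 12 shared pixels / 20 in the union
```

An O_a near 1 indicates that the automated and manual margins nearly coincide, which is how the paper ranks agreement between observers and methods.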

  13. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  14. 46 CFR 111.60-4 - Minimum cable conductor size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...

  15. 46 CFR 111.60-4 - Minimum cable conductor size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...

  16. 46 CFR 111.60-4 - Minimum cable conductor size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...

  17. 7 CFR 51.1995 - U.S. No. 1.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Well formed; and, (2) Clean and bright. (3) Free from: (i) Blanks; and, (ii) Broken or split shells. (4... minimum diameter, minimum and maximum diameters, or in accordance with one of the size classifications in Table I. Table I Size classifications Maximum size—Will pass through a round opening of the following...

  18. 78 FR 52079 - Oranges, Grapefruit, Tangerines, and Tangelos Grown in Florida; Relaxing Size and Grade...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-22

    ... Requirements on Valencia and Other Late Type Oranges AGENCY: Agricultural Marketing Service, USDA. ACTION...). The interim rule reduced the minimum size for Valencia and other late type oranges shipped to... interim rule also lowered the minimum grade for Valencia and other late type oranges shipped to interstate...

  19. 78 FR 28115 - Oranges, Grapefruit, Tangerines, and Tangelos Grown in Florida; Relaxing Size and Grade...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... Requirements on Valencia and Other Late Type Oranges AGENCY: Agricultural Marketing Service, USDA. ACTION...). This rule reduces the minimum size requirement for Valencia and other late type oranges shipped to... also reduces the minimum grade requirement for Valencia and other late type oranges shipped to...

  20. 76 FR 37867 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Order Approving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ..., as Modified by Amendment No. 1, to Reduce the Minimum Size of the Nominating and Governance Committee... proposed rule change to reduce the minimum size of the Nominating and Governance Committee ("NGC") from... the original proposed rule change, it had not yet obtained formal approval from its Board of Directors...

  1. 7 CFR 51.2113 - Size requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of range in count of whole almond kernels per ounce or in terms of minimum, or minimum and maximum diameter. When a range in count is specified, the whole kernels shall be fairly uniform in size, and the average count per ounce shall be within the range specified. Doubles and broken kernels shall not be used...

  2. Vessel network detection using contour evolution and color components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Medeiros, Fatima; Cuadros, Jorge

    2011-06-22

    Automated retinal screening relies on vasculature segmentation before the identification of other anatomical structures of the retina. Vasculature extraction can also be input to image quality ranking, neovascularization detection and image registration, among other applications. There is an extensive literature related to this problem, often excluding the inherent heterogeneity of ophthalmic clinical images. The contribution of this paper relies on an algorithm using front propagation to segment the vessel network. The algorithm includes a penalty in the wait queue on the fast marching heap to minimize leakage of the evolving interface. The method requires no manual labeling, a minimum number of parameters, and it is capable of segmenting color ocular fundus images in real scenarios, where multi-ethnicity and brightness variations are part of the problem.
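The penalized front propagation described in this abstract can be illustrated with a Dijkstra-style sketch on a speed image: arrival time accumulates slowly along bright vessel pixels, while a surcharge on steps into dark pixels damps leakage. The penalty form and all parameters here are invented stand-ins for the paper's fast-marching variant:

```python
import heapq
import numpy as np

def propagate(speed, seed, penalty=0.0):
    """Dijkstra-style front propagation: arrival time grows slowly where
    `speed` is high (vessel-like pixels). `penalty` is an invented
    surcharge on steps into low-speed pixels, loosely mimicking the
    paper's wait-queue penalty against leakage."""
    h, w = speed.shape
    time = np.full((h, w), np.inf)
    time[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (y, x) = heapq.heappop(heap)
        if t > time[y, x]:
            continue                      # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 / speed[ny, nx]
                if speed[ny, nx] < 0.5:   # off-vessel: penalized wait
                    step += penalty
                if t + step < time[ny, nx]:
                    time[ny, nx] = t + step
                    heapq.heappush(heap, (t + step, (ny, nx)))
    return time

# A bright horizontal 'vessel' on a dark background.
img = np.full((5, 7), 0.1)
img[2, :] = 1.0
t = propagate(img, (2, 0), penalty=5.0)
print(t[2, 6], t[0, 0])  # far end of the vessel is reached long before the background
```

Thresholding the arrival-time map then yields the vessel mask; a true fast-marching solver replaces the 4-neighbor update with an eikonal upwind scheme.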

  3. Robust tissue-air volume segmentation of MR images based on the statistics of phase and magnitude: Its applications in the display of susceptibility-weighted imaging of the brain.

    PubMed

    Du, Yiping P; Jin, Zhaoyang

    2009-10-01

    To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
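The kernel statistics at the heart of this method are simple to sketch: compute a local standard deviation in a 3 x 3 x 3 neighborhood and gate on both magnitude and phase smoothness. The thresholds and toy volume below are illustrative, and the sketch omits the paper's first-order phase differencing, phase-uniformity term, and background-phase correction:

```python
import numpy as np

def local_std(vol, k=3):
    """Standard deviation over a k x k x k neighborhood at every voxel,
    via shifted copies of an edge-padded volume (brute force, for clarity)."""
    r = k // 2
    pad = np.pad(vol, r, mode='edge')
    shifts = [pad[i:i + vol.shape[0], j:j + vol.shape[1], l:l + vol.shape[2]]
              for i in range(k) for j in range(k) for l in range(k)]
    return np.std(np.stack(shifts), axis=0)

# Toy volume: left half 'air' (weak signal, noisy phase), right half 'tissue'.
rng = np.random.default_rng(0)
mag = np.ones((6, 6, 6)); mag[:, :, :3] = 0.05
phase = rng.uniform(-np.pi, np.pi, (6, 6, 6)); phase[:, :, 3:] = 0.1

# Tissue: strong magnitude AND locally smooth phase. Thresholds are
# illustrative, not the paper's.
mask = (mag > 0.5) & (local_std(phase) < 0.5)
print(mask[:, :, 4:].all(), mask[:, :, :3].any())
```

The multivariate gate is what makes the method robust: air voxels can have moderate magnitude by chance, but their phase spread in a small kernel stays high.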

  4. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast-iron pipelines. When an operator has knowledge that the support for a segment of a buried cast-iron...

  5. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  6. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  7. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  8. 49 CFR 192.755 - Protecting cast-iron pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Protecting cast-iron pipelines. 192.755 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Maintenance § 192.755 Protecting cast... pipeline is disturbed: (a) That segment of the pipeline must be protected, as necessary, against damage...

  9. Engine Hydraulic Stability. [injector model for analyzing combustion instability

    NASA Technical Reports Server (NTRS)

    Kesselring, R. C.; Sprouse, K. M.

    1977-01-01

    An analytical injector model was developed specifically to analyze combustion instability coupling between the injector hydraulics and the combustion process. This digital computer dynamic injector model will, for any imposed chamber or inlet pressure profile with a frequency ranging from 100 to 3000 Hz (minimum), accurately predict/calculate the instantaneous injector flowrates. The injector system is described in terms of which flow segments enter and leave each pressure node. For each flow segment, a resistance, line lengths, and areas are required as inputs (the line lengths and areas are used in determining inertance). For each pressure node, volume and acoustic velocity are required as inputs (volume and acoustic velocity determine capacitance). The geometric criteria for determining inertances of flow segments and capacitances of pressure nodes were set. Also, a technique was developed for analytically determining time-averaged steady-state pressure drops and flowrates for every flow segment in an injector when such data are not known. These pressure drops and flowrates are then used in determining the linearized flow resistance for each line segment of flow.
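The lumped-parameter quantities named in the abstract have standard textbook forms. A sketch assuming mass-flow/pressure variables (the report's exact conventions are not stated): inertance of a line segment is I = L/A and capacitance of a pressure node is C = V/a², with a the acoustic velocity:

```python
def inertance(length, area):
    """Lumped fluid inertance of a line segment, I = L / A, in
    mass-flow/pressure variables: delta_P = I * d(mdot)/dt. Textbook
    lumped-element form; the report's convention may differ."""
    return length / area

def capacitance(volume, acoustic_velocity):
    """Lumped fluid capacitance of a pressure node, C = V / a**2:
    dP/dt = (mdot_in - mdot_out) / C."""
    return volume / acoustic_velocity ** 2

# Illustrative SI numbers, not taken from the report:
I = inertance(length=0.30, area=5.0e-5)            # 30 cm line, 0.5 cm^2 bore
C = capacitance(volume=2.0e-4, acoustic_velocity=1200.0)
print(I, C)
```

Chaining such R-I-C elements node by node gives exactly the kind of network whose frequency response (100-3000 Hz) the report's model evaluates.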

  10. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms may segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as an optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and proved better than the existing algorithms.
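Fuzzy threshold selection of this kind scans candidate gray levels and keeps the one whose two-class partition is least fuzzy. The abstract does not give the formula for the Fuzzy Gaussian Index, so the sketch below substitutes the classic Huang-style fuzziness measure (hyperbolic membership to the class mean, scored by Shannon's function) as a generic stand-in:

```python
import numpy as np

def fuzziness(pixels, t):
    """Huang-style index of fuzziness for threshold t: each pixel's
    membership to its class mean, mu = 1 / (1 + |x - m| / C), scored by
    Shannon's function. A generic stand-in for the paper's Fuzzy
    Gaussian Index, whose exact formula the abstract does not give."""
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if lo.size == 0 or hi.size == 0:
        return np.inf
    C = pixels.max() - pixels.min()
    d = np.where(pixels <= t,
                 np.abs(pixels - lo.mean()),
                 np.abs(pixels - hi.mean()))
    mu = np.clip(1.0 / (1.0 + d / C), 1e-12, 1 - 1e-12)
    return float(np.mean(-mu * np.log(mu) - (1 - mu) * np.log(1 - mu)))

def optimal_threshold(pixels):
    """Scan integer gray levels and keep the least-fuzzy partition."""
    lo, hi = int(pixels.min()), int(pixels.max())
    return min(range(lo + 1, hi), key=lambda t: fuzziness(pixels, t))

# Bimodal toy 'image': dark blobs near 40, bright background near 200.
rng = np.random.default_rng(1)
pix = np.concatenate([rng.normal(40, 10, 500), rng.normal(200, 10, 500)])
t = optimal_threshold(pix)
print(40 < t < 200)  # threshold lands between the two modes
```

Fuzziness is minimal when every pixel sits close to its own class mean, which is why the chosen threshold falls in the valley between blob and background modes.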

  11. A flight investigation with a STOL airplane flying curved, descending instrument approach paths

    NASA Technical Reports Server (NTRS)

    Benner, M. S.; Mclaughlin, M. D.; Sawyer, R. H.; Vangunst, R.; Ryan, J. L.

    1974-01-01

    A flight investigation using a De Havilland Twin Otter airplane was conducted to determine the configurations of curved, 6 deg descending approach paths which would provide minimum airspace usage within the requirements for acceptable commercial STOL airplane operations. Path configurations with turns of 90 deg, 135 deg, and 180 deg were studied; the approach airspeed was 75 knots. The length of the segment prior to turn, the turn radius, and the length of the final approach segment were varied. The relationship of the acceptable path configurations to the proposed microwave landing system azimuth coverage requirements was examined.

  12. Recommended data sets, corn segments and spring wheat segments, for use in program development

    NASA Technical Reports Server (NTRS)

    Austin, W. W. (Principal Investigator)

    1981-01-01

    The sets of Large Area Crop Inventory Experiment sites, crop year 1978, which are recommended for use in the development and evaluation of classification techniques based on LANDSAT spectral data are presented. For each site, the following exists: (1) accuracy assessment digitized ground truth; (2) a minimum of 5 percent of the scene ground truth identified as corn or spring wheat; and (3) at least four acquisitions of acceptable data quality during the growing season of the crop of interest. The recommended data sets consist of 41 corn/soybean sites and 17 spring wheat sites.

  13. ARV Re-Entry Module Aerodynamics And Aerothermodynamics

    NASA Astrophysics Data System (ADS)

    Scheer, Heloise; Tran, Philippe; Berthe, Philippe

    2011-05-01

    Astrium-ST is the prime contractor of ARV phase A and is especially in charge of designing the Reentry Module (RM). The RM aeroshape has been defined following a trade-off. High level system requirements were derived with particular attention paid to minimum lift-over-drag ratio, trim incidence, centre-of-gravity lateral off-set and box size, volumetric efficiency, attitude at parachute deployment, flight heritage and aeroheating. Since moderate cross-range and thus L/D ratio were required, the aeroshape trade-off has been performed among blunt capsule candidates. Two front-shield families were considered: spherical (Apollo/ARD/Soyuz type) and sphero-conical (CTV type) segment front-shield. The rear-cone angle was set to 20° for internal pressurized volume and accommodation purposes. Figures of merit were assessed and a spherical front-shield of ARD type with a 20° rear-cone section was selected and proposed for further investigations. Maximum benefits will be taken from ARD flight heritage. CFD and WTT campaign plans will be presented including preliminary results.

  14. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.

  15. A comparison of Frequency Domain Multiple Access (FDMA) and Time Domain Multiple Access (TDMA) approaches to satellite service for low data rate Earth stations

    NASA Technical Reports Server (NTRS)

    Stevens, G.

    1983-01-01

    A technological and economic assessment is made of providing low data rate service to small earth stations by satellite at Ka-band. Various Frequency Domain Multiple Access (FDMA) and Time Domain Multiple Access (TDMA) scenarios are examined and compared on the basis of cost to the end user. Very small stations (1 to 2 meters in diameter) are found not to be viable alternatives to available terrestrial services. However, medium size (3 to 5 meters) earth stations appear to be very competitive if a minimum throughput of about 1.5 Mb/s is maintained. This constrains the use of such terminals to large users and shared use by smaller users. No advantage was found to the use of FDMA. TDMA had a slight advantage from a total system viewpoint and a very significant advantage in the space segment (about 1/3 the required payload weight for an equivalent capacity).

  16. Trophy Hunting and Sustainability: Temporal Dynamics in Trophy Quality and Harvesting Patterns of Wild Herbivores in a Tropical Semi-Arid Savanna Ecosystem.

    PubMed

    Muposhi, Victor K; Gandiwa, Edson; Bartels, Paul; Makuza, Stanley M; Madiri, Tinaapi H

    2016-01-01

    The selective nature of trophy hunting may cause changes in desirable phenotypic traits in harvested species. A decline in trophy size of preferred species may reduce hunting destination competitiveness thus compromising the sustainability of trophy hunting as a conservation tool. We explored the trophy quality and trends in harvesting patterns (i.e., 2004-2015) of Cape buffalo (Syncerus caffer), African elephant (Loxodonta africana), greater kudu (Tragelaphus strepsiceros) and sable (Hippotragus niger) in Matetsi Safari Area, northwest Zimbabwe. We used long-term data on horn and tusk size, age, quota size allocation and offtake levels of selected species. To analyse the effect of year, area and age on the trophy size, quota size and offtake levels, we used linear mixed models. One sample t-test was used to compare observed trophy size with Safari Club International (SCI) minimum score. Trophy sizes for Cape buffalo and African elephant were below the SCI minimum score. Greater kudu trophy sizes were within the minimum score threshold whereas sable trophy sizes were above the SCI minimum score between 2004 and 2015. Age at harvest for Cape buffalo, kudu and sable increased whilst that of elephant remained constant between 2004 and 2015. Quota size allocated for buffalo and the corresponding offtake levels declined over time. Offtake levels of African elephant and Greater kudu declined whilst the quota size did not change between 2004 and 2015. The quota size for sable increased whilst the offtake levels fluctuated without changing for the period 2004-2015. The trophy size and harvesting patterns in these species pose a conservation and management dilemma on the sustainability of trophy hunting in this area. We recommend: (1) temporal and spatial rotational resting of hunting areas to create refuge to improve trophy quality and maintenance of genetic diversity, and (2) introduction of variable trophy fee pricing system based on trophy size.
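The comparison of observed trophy sizes against the SCI minimum score is a one-sample t-test, which is straightforward to sketch. The horn scores and the minimum of 100 below are invented for illustration; real SCI minimums vary by species:

```python
import numpy as np

def one_sample_t(scores, sci_minimum):
    """t statistic for testing whether the mean trophy score differs
    from the SCI minimum (the study's one-sample t-test). Scores are
    hypothetical; real SCI minimums vary by species."""
    x = np.asarray(scores, dtype=float)
    return float((x.mean() - sci_minimum) /
                 (x.std(ddof=1) / np.sqrt(x.size)))

# Toy horn scores sitting consistently below a hypothetical minimum of 100:
t_stat = one_sample_t([92, 95, 90, 97, 93, 91, 94, 96], 100)
print(t_stat < 0)  # negative t: mean trophy size below the SCI minimum
```

A strongly negative t (compared against the t distribution with n - 1 degrees of freedom) is what supports statements like "trophy sizes were below the SCI minimum score."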

  17. [Object-oriented segmentation and classification of forest gap based on QuickBird remote sensing image].

    PubMed

    Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu

    2018-01-01

    Traditional field investigation and artificial interpretation could not satisfy the need of forest gaps extraction at regional scale. High spatial resolution remote sensing image provides the possibility for regional forest gaps extraction. In this study, we used object-oriented classification method to segment and classify forest gaps based on QuickBird high resolution optical remote sensing image in Jiangle National Forestry Farm of Fujian Province. In the process of object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment QuickBird remote sensing image; and the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were adopted to evaluate the segmentation result at each scale. For segmentation result at each scale, 16 spectral characteristics and support vector machine classifier (SVM) were further used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40 when RA_or was equal to RA_os. The accuracy difference between the maximum and minimum at different segmentation scales was 22%. At optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on SVM classifier. Combining high resolution remote sensing image data with object-oriented classification method could replace the traditional field investigation and artificial interpretation method to identify and classify forest gaps at regional scale.
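The scale-selection rule in this abstract — pick the scale where the overlap relative to the reference object balances the overlap relative to the segmented object — can be sketched on binary masks. The RA_or/RA_os definitions below are a reading of the abstract, and the toy segmentations stand in for real multi-scale outputs:

```python
import numpy as np

def ra_measures(ref, seg):
    """Intersection area relative to the reference object (RA_or) and
    relative to the segmented object (RA_os); the exact definitions
    are assumed from the abstract's notation."""
    inter = np.logical_and(ref, seg).sum()
    return float(inter / ref.sum()), float(inter / seg.sum())

def best_scale(ref, segs_by_scale):
    """Choose the segmentation scale where RA_or and RA_os are closest,
    i.e. neither over- nor under-segmentation dominates."""
    def gap(item):
        ra_or, ra_os = ra_measures(ref, item[1])
        return abs(ra_or - ra_os)
    return min(segs_by_scale.items(), key=gap)[0]

# Toy reference 'forest gap' and three candidate segmentations.
ref = np.zeros((10, 10), dtype=bool); ref[2:8, 2:8] = True
segs = {20: np.pad(np.ones((4, 4), dtype=bool), 3),   # too small: over-segmented
        40: ref.copy(),                               # matches the gap
        60: np.ones((10, 10), dtype=bool)}            # too big: under-segmented
print(best_scale(ref, segs))
```

At the balanced scale RA_or = RA_os, consistent with the study's finding that scale 40 was optimal when the two measures were equal.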

  18. Relation Between Inflammables and Ignition Sources in Aircraft Environments

    NASA Technical Reports Server (NTRS)

    Scull, Wilfred E

    1950-01-01

    A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed herein. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, and minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings incapable of flame propagation are presented and discussed. The ignition temperatures and the limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressure and the minimum size of openings for flame propagation of gasoline - air mixtures are included. Inerting of gasoline - air mixtures is discussed.

  19. Biodiversity and body size are linked across metazoans

    PubMed Central

    McClain, Craig R.; Boyer, Alison G.

    2009-01-01

    Body size variation across the Metazoa is immense, encompassing 17 orders of magnitude in biovolume. Factors driving this extreme diversification in size and the consequences of size variation for biological processes remain poorly resolved. Species diversity is invoked as both a predictor and a result of size variation, and theory predicts a strong correlation between the two. However, evidence has been presented both supporting and contradicting such a relationship. Here, we use a new comprehensive dataset for maximum and minimum body sizes across all metazoan phyla to show that species diversity is strongly correlated with minimum size, maximum size and consequently intra-phylum variation. Similar patterns are also observed within birds and mammals. The observations point to several fundamental linkages between species diversification and body size variation through the evolution of animal life. PMID:19324730

  20. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
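The "copy the protograph N times, then permute the edges" construction can be sketched by lifting a base matrix with N x N circulant permutation blocks. The base matrix below is a toy rate-1/2 example, not one of the article's protographs, and the shifts are random rather than chosen to avoid short cycles:

```python
import numpy as np

def lift(base, N, seed=0):
    """Expand a protograph base matrix into a parity-check matrix H by
    replacing each edge with an N x N circulant permutation block:
    'copy the protograph N times, then permute the edges'. Shifts are
    random here; real designs pick them to avoid short cycles."""
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=int)
    for i in range(m):
        for j in range(n):
            for _ in range(int(base[i, j])):   # multiplicity = parallel edges
                s = int(rng.integers(N))
                H[i*N:(i+1)*N, j*N:(j+1)*N] += np.roll(
                    np.eye(N, dtype=int), s, axis=1)
    return H

# A toy rate-1/2 base matrix (NOT one of the article's protographs).
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
H = lift(base, N=4)
print(H.shape)  # (2*4, 4*4): each row keeps its protograph row degree
```

Because the lift only permutes edges, every variable and check node in H inherits its degree from the protograph, which is what makes the ensemble's distance and threshold properties analyzable from the base matrix alone.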

  1. Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis

    PubMed Central

    Stephen, Renu M.; Jha, Abhinav K.; Roe, Denise J.; Trouard, Theodore P.; Galons, Jean-Philippe; Kupinski, Matthew A.; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D.; Rodriguez, Jeffrey J.; Gillies, Robert J.; Stopeck, Alison T.

    2015-01-01

    Purpose To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Methods Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm² at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. Results A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2–5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm²/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2–5 cm liver lesions. Conclusion Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. PMID:26284600

  2. Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis.

    PubMed

    Stephen, Renu M; Jha, Abhinav K; Roe, Denise J; Trouard, Theodore P; Galons, Jean-Philippe; Kupinski, Matthew A; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D; Rodriguez, Jeffrey J; Gillies, Robert J; Stopeck, Alison T

    2015-12-01

To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm2 at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2-5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm2/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2-5 cm liver lesions. Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. Copyright © 2015 Elsevier Inc. All rights reserved.
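The abstract's decision rule (a baseline ADC threshold of 1.33 μm2/ms with 75% sensitivity and 83% specificity) can be illustrated with a short sketch. The lesion values below are synthetic, and the rule's direction (higher baseline ADC flags a likely non-responder) is an assumption made for illustration only:

```python
import numpy as np

def sens_spec_at_threshold(adc, non_responder, threshold):
    """Sensitivity/specificity of the rule 'ADC >= threshold -> non-responder'.

    adc: baseline ADC values (um^2/ms); non_responder: boolean ground truth.
    Illustrative only -- the study's 1.33 um^2/ms cut-off came from ROC
    analysis of real patient data, not from this toy example.
    """
    adc = np.asarray(adc, dtype=float)
    non_responder = np.asarray(non_responder, dtype=bool)
    predicted = adc >= threshold
    tp = np.sum(predicted & non_responder)    # correctly flagged non-responders
    fn = np.sum(~predicted & non_responder)   # missed non-responders
    tn = np.sum(~predicted & ~non_responder)  # correctly passed responders
    fp = np.sum(predicted & ~non_responder)   # falsely flagged responders
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic lesions: in this toy data higher baseline ADC tracks non-response.
adc = [0.9, 1.1, 1.2, 1.4, 1.5, 1.7]
truth = [False, False, False, True, True, True]
sens, spec = sens_spec_at_threshold(adc, truth, 1.33)
```

Sweeping the threshold over all observed ADC values and plotting sensitivity against 1 - specificity would trace the ROC curve from which such a cut-off is chosen.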

  3. Segmental Isotopic Labeling of Proteins for Nuclear Magnetic Resonance

    PubMed Central

    Dongsheng, Liu; Xu, Rong; Cowburn, David

    2009-01-01

    Nuclear Magnetic Resonance (NMR) spectroscopy has emerged as one of the principle techniques of structural biology. It is not only a powerful method for elucidating the 3D structures under near physiological conditions, but also a convenient method for studying protein-ligand interactions and protein dynamics. A major drawback of macromolecular NMR is its size limitation caused by slower tumbling rates and greater complexity of the spectra as size increases. Segmental isotopic labeling allows specific segment(s) within a protein to be selectively examined by NMR thus significantly reducing the spectral complexity for large proteins and allowing a variety of solution-based NMR strategies to be applied. Two related approaches are generally used in the segmental isotopic labeling of proteins: expressed protein ligation and protein trans-splicing. Here we describe the methodology and recent application of expressed protein ligation and protein trans-splicing for NMR structural studies of proteins and protein complexes. We also describe the protocol used in our lab for the segmental isotopic labeling of a 50 kDa protein Csk (C-terminal Src Kinase) using expressed protein ligation methods. PMID:19632474

  4. Automated segmentation of the lungs from high resolution CT images for quantitative study of chronic obstructive pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.

    2005-04-01

Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
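A minimal 2D sketch of the fixed-threshold-plus-morphological-closing step using `scipy.ndimage`; the -500 HU cut-off and the 3x3 structuring element are illustrative stand-ins (the study used a fixed air threshold and a 23x23x5 spherical element in 3D):

```python
import numpy as np
from scipy import ndimage

# Toy "CT slice": air ~ -1000 HU, soft tissue ~ 40 HU. A fixed air threshold
# (here -500 HU, an assumed value) segments the air-filled region; a
# morphological close then smooths the mask and fills small gaps such as
# vessel cross-sections inside the lung field.
slice_hu = np.full((20, 20), 40.0)
slice_hu[5:15, 5:15] = -1000.0        # air-filled region
slice_hu[9, 9] = 40.0                 # small vessel-like inclusion

air_mask = slice_hu < -500
structure = ndimage.generate_binary_structure(2, 2)  # full 3x3 neighbourhood
closed = ndimage.binary_closing(air_mask, structure=structure)

# The closing fills the single-pixel hole left by the vessel.
assert not air_mask[9, 9] and closed[9, 9]
```

In 3D the same call works on a volume with an ellipsoidal structuring element, which is why the element size (23x23x5 in the study) trades off smoothing against retention of fine airway detail.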

  5. The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule

    NASA Astrophysics Data System (ADS)

    Eskandari, M. R.; Faghihi, F.; Mahdavi, M.

The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at a typical muonic distance and the second at the atomic distance. It is shown that at the muonic distance, the effective charge zeff is 2.9. We assumed a symmetric planar vibrational model between the two minima, and the oscillation potential energy is approximated in this region.

  6. Minding the gaps: literacy enhances lexical segmentation in children learning to read.

    PubMed

    Havron, Naomi; Arnon, Inbal

    2017-11-01

    Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for age, cognitive, and linguistic variables), and that improvement in literacy skills between the two sessions predicted improvement in segmentation abilities. Together, these findings suggest that literacy acquisition, rather than age, enhanced segmentation. We discuss implications for models of language learning.

  7. Antimicrobial flavonoids from Tridax procumbens.

    PubMed

    Jindal, Alka; Kumar, Padma

    2012-01-01

Callus culture of Tridax procumbens was established from nodal segments on Murashige and Skoog's medium supplemented with NAA and BAP. Free and bound flavonoids were extracted from 2-, 4-, 6- and 8-week-old calli by a well-established method. These free flavonoids were then screened against Staphylococcus aureus (bacterium) and Candida albicans (yeast) for their antimicrobial potential. Minimum inhibitory concentration, minimum bactericidal/fungicidal concentrations and total activity were also evaluated. Apigenin, quercetin and kaempferol were identified from the free flavonoids of the 4-week-old callus (the most active) through thin layer chromatography (TLC), preparative TLC, MP and IR spectral studies.

  8. A Microfabricated Segmented-Involute-Foil Regenerator for Enhancing Reliability and Performance of Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Danila, Daniel; Simon, Terrence; Mantell, Susan; Sun, Liyong; Gadeon, David; Qiu, Songgang; Wood, Gary; Kelly, Kevin; McLean, Jeffrey

    2007-01-01

    An actual-size microfabricated regenerator comprised of a stack of 42 disks, 19 mm diameter and 0.25 mm thick, with layers of microscopic, segmented, involute-shaped flow channels was fabricated and tested. The geometry resembles layers of uniformly-spaced segmented-parallel-plates, except the plates are curved. Each disk was made from electro-plated nickel using the LiGA process. This regenerator had feature sizes close to those required for an actual Stirling engine but the overall regenerator dimensions were sized for the NASA/Sunpower oscillating-flow regenerator test rig. Testing in the oscillating-flow test rig showed the regenerator performed extremely well, significantly better than currently used random-fiber material, producing the highest figures of merit ever recorded for any regenerator tested in that rig over its approximately 20 years of use.

  9. 77 FR 23770 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-20

    ... to Proposed Rule Change To Amend FINRA Rule 6433 (Minimum Quotation Size Requirements for OTC Equity... proposed rule change to amend FINRA Rule 6433 (Minimum Quotation Size Requirements for OTC Equity... investors, three from an inter-dealer quotation system and two from a member firm.\\4\\ FINRA responded to...

  10. 50 CFR 648.104 - Summer flounder minimum fish sizes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....99 cm) TL for all vessels that do not qualify for a moratorium permit under § 648.4(a)(3), and... (commercial) permitted vessels. The minimum size for summer flounder is 14 inches (35.6 cm) TL for all vessels issued a moratorium permit under § 648.4(a)(3), except on board party and charter boats carrying...

  11. 50 CFR 648.104 - Summer flounder minimum fish sizes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... cm) TL for all vessels that do not qualify for a moratorium permit under § 648.4(a)(3), and charter... (commercial) permitted vessels. The minimum size for summer flounder is 14 inches (35.6 cm) TL for all vessels issued a moratorium permit under § 648.4(a)(3), except on board party and charter boats carrying...

  12. 50 CFR 648.104 - Summer flounder minimum fish sizes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... cm) TL for all vessels that do not qualify for a moratorium permit under § 648.4(a)(3), and charter... (commercial) permitted vessels. The minimum size for summer flounder is 14 inches (35.6 cm) TL for all vessels issued a moratorium permit under § 648.4(a)(3), except on board party and charter boats carrying...

  13. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    NASA Astrophysics Data System (ADS)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

The process of “image segmentation and extraction of remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are rare. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics such as the mean value, standard deviation, maximum value and minimum value of pixel location, brightness and color elements, with the statistics updated as regions grow. We also introduced a new concept of conspicuity degree and applied it to 21 different images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.
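The statistics-driven expansion described above can be sketched as a region-growing loop whose acceptance test compares each candidate pixel with the running region mean. The paper also tracks standard deviation, min/max, location and colour statistics; the tolerance value here is invented:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20.0):
    """Grow a region from `seed`, accepting a 4-connected neighbour when its
    value lies within `tol` of the current region mean; the mean is updated
    as the region expands. A simplified sketch of statistics-driven growing.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # Acceptance test against the running mean of the region.
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

img = np.zeros((8, 8))
img[:, 4:] = 100.0                 # two flat regions with a sharp boundary
mask = region_grow(img, (0, 0))    # grows over the left region only
```

Because the mean is updated incrementally, gradual shading within a region is tolerated while sharp boundaries still stop the growth, which is the behaviour that limits both over-segmentation and over-unification.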

  14. SRM attrition rate study of the aft motor case segments due to water impact cavity collapse loading

    NASA Technical Reports Server (NTRS)

    Crockett, C. D.

    1976-01-01

    The attrition assessment of the aft segments of Solid Rocket Motor due to water impact requires the establishment of a correlation between loading occurrences and structural capability. Each discrete load case, as identified by the water impact velocities and angle, varies longitudinally and radially in magnitude and distribution of the external pressure. The distributions are further required to be shifted forward or aft one-fourth the vehicle diameter to assure minimization of the effect of test instrumentation location for the load determinations. The asymmetrical load distributions result in large geometric nonlinearities in structural response. The critical structural response is progressive buckling of the case. Discrete stiffeners have been added to these aft segments to aid in gaining maximum structural capability for minimum weight addition for resisting these loads. This report presents the development of the attrition assessment of the aft segments and includes the rationale for eliminating all assessable conservatisms from this assessment.

  15. Axially adjustable magnetic properties in arrays of multilayered Ni/Cu nanowires with variable segment sizes

    NASA Astrophysics Data System (ADS)

    Shirazi Tehrani, A.; Almasi Kashi, M.; Ramazani, A.; Montazer, A. H.

    2016-07-01

    Arrays of multilayered Ni/Cu nanowires (NWs) with variable segment sizes were fabricated into anodic aluminum oxide templates using a pulsed electrodeposition method in a single bath for designated potential pulse times. Increasing the pulse time between 0.125 and 2 s in the electrodeposition of Ni enabled the formation of segments with thicknesses ranging from 25 to 280 nm and 10-110 nm in 42 and 65 nm diameter NWs, respectively, leading to disk-shaped, rod-shaped and/or near wire-shaped geometries. Using hysteresis loop measurements at room temperature, the axial and perpendicular magnetic properties were investigated. Regardless of the segment geometry, the axial coercivity and squareness significantly increased with increasing Ni segment thickness, in agreement with a decrease in calculated demagnetizing factors along the NW length. On the contrary, the perpendicular magnetic properties were found to be independent of the pulse times, indicating a competition between the intrawire interactions and the shape demagnetizing field.

  16. Conformation and Dynamics of a Flexible Sheet in Solvent Media by Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Pandey, Ras; Anderson, Kelly; Heinz, Hendrik; Farmer, Barry

    2005-03-01

Flexibility of the clay sheet is limited even in the exfoliated state in some solvent media. A coarse-grained model is used to investigate the dynamics and conformation of a flexible sheet, modeling such a clay platelet in an effective solvent medium on a cubic lattice of size L^3 with lattice constant a. The undeformed sheet is described by a square lattice of size Ls^2, where each node of the sheet is represented by a unit cube of the cubic lattice and 2a is the minimum distance between nearest-neighbor nodes to incorporate the excluded volume constraints. Additionally, each node interacts with neighboring nodes and solvent (empty) sites within a range ri. Each node executes its stochastic motion with the Metropolis algorithm subject to bond length fluctuation and excluded volume constraints. Mean square displacements of the center node and of the center of mass are investigated as a function of time step for a set of these parameters. The radius of gyration (Rg) is also examined concurrently to understand its relaxation. Multi-scale segmental dynamics of the sheet is studied by identifying power-law dependence in various time regimes. Relaxation of Rg and its dependence on temperature are planned to be discussed.
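The node moves described above rest on the standard Metropolis acceptance rule, sketched below; the constraint checks (bond-length fluctuation, excluded volume) are assumed to be applied before this step:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(delta_e, temperature, rng):
    """Metropolis rule: always accept a move that lowers the energy; accept
    an uphill move with probability exp(-dE/T). Energies and temperature are
    in the same reduced units.
    """
    if delta_e <= 0.0:
        return True
    return rng.random() < np.exp(-delta_e / temperature)

# Downhill moves are always taken.
assert metropolis_accept(-1.0, 1.0, rng)

# Strongly uphill moves (dE = 50 at T = 1) are essentially never taken.
accepts = sum(metropolis_accept(50.0, 1.0, rng) for _ in range(1000))
```

Combining this rule with the lattice constraints and per-node trial moves, and averaging displacements and Rg over many sweeps, reproduces the measurement scheme described in the abstract.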

  17. Accumulative Difference Image Protocol for Particle Tracking in Fluorescence Microscopy Tested in Mouse Lymphonodes

    PubMed Central

    Villa, Carlo E.; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe

    2010-01-01

Basic research in cell biology and in the medical sciences makes extensive use of imaging tools mainly based on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal/noise images with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymphonodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions on the type, extent and variability of particle motions can be made. PMID:20808918

  18. Accumulative difference image protocol for particle tracking in fluorescence microscopy tested in mouse lymphonodes.

    PubMed

    Villa, Carlo E; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe

    2010-08-17

Basic research in cell biology and in the medical sciences makes extensive use of imaging tools mainly based on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal/noise images with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymphonodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions on the type, extent and variability of particle motions can be made.
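The core idea of collapsing a stack of difference images into one time-coded 2D map can be sketched as follows (a simplified grey-level version; the papers code motion as a colour level and use size-sensitive filters for segmentation):

```python
import numpy as np

def accumulative_difference_map(frames, diff_thresh=10.0):
    """Collapse a frame stack into one 2D map: wherever consecutive frames
    differ above `diff_thresh`, store the frame index, so later motion
    overwrites earlier motion and time is coded as a grey 'level'.
    """
    frames = np.asarray(frames, dtype=float)
    adi = np.zeros(frames.shape[1:], dtype=int)
    for t in range(1, frames.shape[0]):
        moved = np.abs(frames[t] - frames[t - 1]) > diff_thresh
        adi[moved] = t
    return adi

# A bright particle stepping one pixel to the right per frame.
frames = np.zeros((3, 5, 5))
for t in range(3):
    frames[t, 2, t] = 255.0
adi = accumulative_difference_map(frames)
```

Reading the resulting map along increasing levels recovers the particle's trajectory without any per-frame linking step, which is why the method needs no prior model of the motion.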

  19. Automatic derivation of natural and artificial lineaments from ALS point clouds in floodplains

    NASA Astrophysics Data System (ADS)

    Mandlburger, G.; Briese, C.

    2009-04-01

Water flow is one of the most important driving forces in geomorphology, and river systems have long shaped our landscapes. With increasing urbanisation, fertile flood plains were increasingly cultivated, and the defence of valuable settlement areas by dikes and dams became an important issue. Today, we are dealing with landscapes built up by natural as well as man-made forces. In either case the general shape of the terrain can be portrayed by lineaments representing discontinuities of the terrain slope. Our contribution, therefore, presents an automatic method for delineating natural and artificial structure lines based on randomly distributed point data with a high density of more than one point/m2. Preferably, the last echoes of airborne laser scanning (ALS) point clouds are used, since the laser signal is able to penetrate vegetation through small gaps in the foliage. Alternatively, point clouds from (multi) image matching can be employed, but poor ground point coverage in vegetated areas is often the limiting factor. Our approach is divided into three main steps: first, potential 2D start segments are detected by analyzing the surface curvature in the vicinity of each data point; second, the detailed 3D progression of each structure line is modelled patch-wise by intersecting surface pairs (e.g. planar patch pairs) based on the detected start segments and by performing line growing; and finally, post-processing such as line cleaning, smoothing and networking is carried out. For the initial detection of start segments a best-fitting two-dimensional polynomial surface (quadric) is computed at each data point based on a set of neighbouring points, from which the minimum and maximum curvature are derived. Patches showing high maximum and low minimum curvatures indicate linear discontinuities in the surface slope and serve as start segments for the subsequent 3D modelling. 
Based on the 2D location and orientation of the start segments, surface patches can be identified as lying to the left or right of the structure line. For each patch pair the intersection line is determined by least squares adjustment. The stochastic model considers the planimetric accuracy of the start segments and the vertical measurement errors in the data points. A robust estimation approach is embedded in the patch adjustment for elimination of off-terrain ALS last echo points. Starting from an initial patch pair, structure line modelling is continued in the forward and backward directions as long as certain thresholds (e.g. minimum surface intersection angles) are fulfilled. In the final post-processing step the resulting line set is cleaned by connecting corresponding line parts, by removing short line strings of minor relevance, and by thinning the resulting line set with respect to a certain approximation tolerance in order to reduce the amount of line data. Thus, interactive human verification and editing are kept to a minimum. In a real-world example structure lines were computed for a section of the river Main (ALS, last echoes, 4 points/m2), demonstrating the high potential of the proposed method with respect to accuracy and completeness. Terrestrial control measurements have confirmed the high accuracy expectations both in planimetry (<0.4 m) and height (<0.2 m).
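The start-segment detector described above (a least-squares quadric fit per point, then minimum and maximum curvature) can be sketched as follows; approximating the principal curvatures by the eigenvalues of the fitted Hessian is a small-slope assumption of this sketch:

```python
import numpy as np

def principal_curvatures(pts):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a point's neighbours
    by least squares and return (k_min, k_max), approximated as the
    eigenvalues of the Hessian [[2a, b], [b, 2c]] at the centre point
    (valid for small slopes).
    """
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeff[0], coeff[1], coeff[2]
    k = np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
    return k[0], k[1]

# Points on a ridge z = x^2: strongly curved across, flat along it, i.e. the
# "high maximum, low minimum curvature" signature of a structure line.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(), (xs**2).ravel()])
k_min, k_max = principal_curvatures(pts)
```

Thresholding |k_max| high and |k_min| low per point, then clustering the surviving points by position and orientation, yields candidate 2D start segments for the subsequent patch-pair intersection.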

  20. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    PubMed

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
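The smoothing effect of a random phase mask on the Fourier spectrum can be demonstrated with a minimal sketch; a per-pixel binary (0/π) mask is used here rather than the paper's randomly displaced, variable-size phase segments:

```python
import numpy as np

rng = np.random.default_rng(1)

# A uniform-amplitude "data page" concentrates all spectral energy in a
# single DC spike; multiplying by a random binary phase mask (exp(i*0) or
# exp(i*pi) per pixel, i.e. +1/-1) spreads that energy across frequencies.
page = np.ones((64, 64))
phase = rng.choice([1.0, -1.0], size=page.shape)

spec_plain = np.abs(np.fft.fftshift(np.fft.fft2(page)))
spec_masked = np.abs(np.fft.fftshift(np.fft.fft2(page * phase)))

# Fraction of total spectral magnitude carried by the strongest bin.
peak_plain = spec_plain.max() / spec_plain.sum()
peak_masked = spec_masked.max() / spec_masked.sum()
```

With a fixed segment size, residual spikes appear at the segment's spatial frequency; the paper's randomly varied segment sizes and displacements are aimed at suppressing exactly those spikes.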

  1. A novel pipeline for adrenal tumour segmentation.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Erdogan, Hasan; Sivri, Mesut

    2018-06-01

Adrenal tumours occur on adrenal glands surrounded by organs and osteoid. These tumours can be categorized as functional, non-functional, malignant, or benign. Depending on their appearance in the abdomen, adrenal tumours can arise from one adrenal gland (unilateral) or from both adrenal glands (bilateral) and can connect with other organs, including the liver, spleen, pancreas, etc. This connection phenomenon constitutes the most important handicap for adrenal tumour segmentation. Size change, variety of shape, diverse location, and low contrast (similar grey values between the various tissues) are other disadvantages compounding segmentation difficulty. Few studies have considered adrenal tumour segmentation, and no significant improvement has been achieved for unilateral, bilateral, adherent, or noncohesive tumour segmentation. There is also no recognised segmentation pipeline or method for adrenal tumours including different shape, size, or location information. This study proposes an adrenal tumour segmentation (ATUS) pipeline designed to eliminate the above disadvantages. ATUS incorporates a number of image methods, including contrast limited adaptive histogram equalization, split and merge based on quadtree decomposition, mean shift segmentation, a large grey level eliminator, and region growing. Performance assessment of ATUS was realised on 32 arterial and portal phase computed tomography images using six metrics: Dice, Jaccard, sensitivity, specificity, accuracy, and structural similarity index. ATUS achieved remarkable segmentation performance and was not affected by the discussed handicaps, particularly adherence to other organs, with success rates of 83.06%, 71.44%, 86.44%, 99.66%, 99.43%, and 98.51%, respectively, for images with sufficient contrast uptake. 
The proposed ATUS system realises detailed adrenal tumour segmentation, and avoids known disadvantages preventing accurate segmentation. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. One Size (Never) Fits All: Segment Differences Observed Following a School-Based Alcohol Social Marketing Program

    ERIC Educational Resources Information Center

    Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason

    2015-01-01

    Background: According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments to examine responses to a school-based alcohol social marketing program. Methods: A sample of 371 year 10 students…

  3. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M., E-mail: goodsitt@umich.edu; Shenoy, Apeksha; Howard, David

    2014-05-15

Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. 
Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.
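The "three-equation three-unknown" formulation can be sketched as a linear system: the measured CT numbers at the two energies are modelled as volume-fraction-weighted sums of pure-material calibration values, plus the constraint that the fractions sum to one. The calibration numbers below are invented for illustration, not values from the study:

```python
import numpy as np

# Pure-material CT numbers (HU) at the two tube voltages, as would be taken
# from calibration inserts. These values are hypothetical.
hu_80 = {"bone": 1200.0, "marrow": 60.0, "fat": -120.0}
hu_140 = {"bone": 800.0, "marrow": 50.0, "fat": -100.0}

# Rows: 80 kVp mixture equation, 140 kVp mixture equation, sum-to-one.
A = np.array([
    [hu_80["bone"],  hu_80["marrow"],  hu_80["fat"]],
    [hu_140["bone"], hu_140["marrow"], hu_140["fat"]],
    [1.0,            1.0,              1.0],
])

# Simulate a spongiosa voxel that is 20% bone, 40% marrow, 40% fat,
# then recover the fractions from its (noise-free) measurements.
true_f = np.array([0.2, 0.4, 0.4])
measured = np.array([A[0] @ true_f, A[1] @ true_f, 1.0])
fractions = np.linalg.solve(A, measured)
```

With real measurements, noise propagates through this inversion, which is why the study's RMS errors grow with finer ROI segmentation (fewer averaged pixels per subregion) and shrink as dose increases.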

  4. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    PubMed Central

    Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.

    2014-01-01

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. 
Results for external calibration exhibited much larger RMS errors than size-matched internal calibration. Use of an average-body-size external-to-internal calibration correction factor reduced the errors to values closer to those for internal calibration. RMS errors of less than 30%, or about 0.01 for the bone and 0.1 for the red marrow volume fractions, would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times the dose for all body phantom sizes, (b) internal calibration with 2 times the dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times the dose for all body phantom sizes. Conclusions: Phantom studies are promising and demonstrate the potential of dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380
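    The three-equation/three-unknown inversion at the heart of the DEQCT technique can be sketched as a linear solve: two measured CT numbers (80 and 140 kVp) plus the constraint that the volume fractions sum to one. The calibration CT numbers below are hypothetical placeholders, not values from the study:

```python
import numpy as np

# Hypothetical calibration CT numbers (HU) of pure bone, red marrow, and fat
# at 80 kVp and 140 kVp -- placeholder values, not from the study.
hu80  = {"bone": 1200.0, "marrow": 60.0, "fat": -120.0}
hu140 = {"bone":  800.0, "marrow": 55.0, "fat": -100.0}

def volume_fractions(m80, m140):
    """Solve the three-equation/three-unknown system
         sum_i f_i * hu80[i]  = m80
         sum_i f_i * hu140[i] = m140
         sum_i f_i            = 1
    for the volume fractions f = (bone, marrow, fat)."""
    A = np.array([
        [hu80["bone"],  hu80["marrow"],  hu80["fat"]],
        [hu140["bone"], hu140["marrow"], hu140["fat"]],
        [1.0,           1.0,             1.0],
    ])
    b = np.array([m80, m140, 1.0])
    return np.linalg.solve(A, b)

# A voxel mixing 10% bone, 40% red marrow, 50% fat:
f_true = np.array([0.10, 0.40, 0.50])
m80  = f_true @ np.array([hu80["bone"], hu80["marrow"], hu80["fat"]])
m140 = f_true @ np.array([hu140["bone"], hu140["marrow"], hu140["fat"]])
print(volume_fractions(m80, m140))  # recovers the true fractions (0.1, 0.4, 0.5)
```

    In practice the measured CT numbers are noisy, which is why the abstract's errors grow with finer subregion segmentation (fewer averaged voxels per ROI).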

  5. Analysis and sizing of Mars aerobrake structure

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Craft, W. J.

    1993-01-01

A cone-sphere aeroshell structure for aerobraking into the Martian atmosphere is studied. Using this structural configuration, a space frame load-bearing structure is proposed. To generate this structure efficiently and to perform a variety of studies of several configurations, a mesh generator that requires only a few configurational parameters was developed, along with a finite element analysis program for space frame structures. A sizing algorithm that arrives at a minimum-mass configuration was developed and integrated into the finite element analysis program. A typical 135-ft-diam aerobrake configuration was analyzed and sized. The minimum mass obtained in this study using high-modulus graphite/epoxy composite members is compared with the masses obtained from two other aerobrake structures using lightweight erectable tetrahedral-truss and part-spherical-truss configurations. Excellent agreement for the minimum mass was obtained among the three different aerobrake structures. Also, the minimum mass using the present structure was obtained when the supports were placed not at the base but at about 75 percent of the base diameter.

  6. 49 CFR 222.35 - What are the minimum requirements for quiet zones?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... within a single political jurisdiction shall be separated by at least one public highway-rail grade... include grade crossings on a segment of rail line crossing more than one political jurisdiction. (b... be credited in calculating the Quiet Zone Risk Index. (c) Advance warning signs. (1) Each highway...

  7. 49 CFR 222.35 - What are the minimum requirements for quiet zones?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... within a single political jurisdiction shall be separated by at least one public highway-rail grade... include grade crossings on a segment of rail line crossing more than one political jurisdiction. (b... be credited in calculating the Quiet Zone Risk Index. (c) Advance warning signs. (1) Each highway...

  8. Minimum Competencies: A National Survey.

    ERIC Educational Resources Information Center

    Bossone, Richard M.

    During the academic year 1977-78 a national survey was conducted to identify those competencies which various segments of the population consider important for functioning adults, and to ascertain which competencies should be taught in the schools. Data presented in this study are based on 2,908 questionnaire returns from 2,284 students (mostly in…

  9. MASSIVE LEAKAGE IRRADIATOR

    DOEpatents

    Wigner, E.P.; Szilard, L.; Christy, R.F.; Friedman, F.L.

    1961-05-30

    An irradiator designed to utilize the neutrons that leak out of a reactor around its periphery is described. It avoids wasting neutron energy and reduces interference with the core flux to a minimum. This is done by surrounding all or most of the core with removable segments of the material to be irradiated within a matrix of reflecting material.

  10. Quantitative test for concave aspheric surfaces using a Babinet compensator.

    PubMed

    Saxena, A K

    1979-08-15

A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10⁻³ rad over a segment length of 1.0 cm.

  11. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to keep the particle swarm searching globally and to prevent it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
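    The adaptive-inertia-weight idea can be sketched as follows. The premature-convergence indicator used here (normalized spread of fitness values), the test objective, and all coefficients are illustrative stand-ins, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple test objective (stands in for the image-entropy cost)."""
    return float(np.sum(x ** 2))

def pso_adaptive(f, dim=4, n=20, iters=200, w_min=0.4, w_max=0.9):
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_f = np.array([f(p) for p in pos])
    for _ in range(iters):
        fit = np.array([f(p) for p in pos])
        better = fit < pbest_f
        pbest[better], pbest_f[better] = pos[better], fit[better]
        g = pbest[np.argmin(pbest_f)]
        # Premature-convergence indicator: normalized spread of the fitness
        # values. A clustered swarm (small spread) raises the inertia weight
        # to restore global exploration; a dispersed swarm lowers it.
        spread = np.std(fit) / (abs(np.mean(fit)) + 1e-12)
        w = w_min + (w_max - w_min) * np.exp(-spread)
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = np.clip(w * vel + 2.0 * r1 * (pbest - pos)
                      + 2.0 * r2 * (g - pos), -2, 2)
        pos = pos + vel
    return g, float(np.min(pbest_f))

best, best_f = pso_adaptive(sphere)
print(best_f)  # small value near the global minimum at the origin
```

    In the paper this update would drive the Legendre-polynomial coefficients of the bias field rather than a toy vector.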

  12. 76 FR 56120 - Atlantic Highly Migratory Species; North and South Atlantic Swordfish Quotas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... Contracting Parties. Contracting Parties may restrict fishermen to a minimum size of 25 kg live weight OR 125... restrict fishermen to a minimum size of 15 kg live weight OR 119 cm LJFL with no tolerance. In 2009, NMFS... quota, among other things. Per the ATCA, the United States is obligated to implement ICCAT-approved...

  13. [Analysis of genomic copy number variations in two unrelated neonates with 8p deletion and duplication associated with congenital heart disease].

    PubMed

    Mei, Mei; Yang, Lin; Zhan, Guodong; Wang, Huijun; Ma, Duan; Zhou, Wenhao; Huang, Guoying

    2014-06-01

To screen for genomic copy number variations (CNVs) in two unrelated neonates with multiple congenital abnormalities using an Affymetrix SNP chip, and to try to find the critical region associated with congenital heart disease. Two neonates were tested for genomic copy number variations using a cytogenetic SNP chip. Rare CNVs with potential clinical significance were selected, with deletion segments larger than 50 kb and duplication segments larger than 150 kb, based on analysis with ChAS software, after excluding false-positive CNVs and segments found in the normal population. The identified CNVs were compared with those of cases in the DECIPHER and ISCA databases. Eleven rare CNVs with sizes from 546.6 to 27 892 kb were identified in the 2 neonates. The deletion region and size of case 1 were 8p23.3-p23.1 (387 912-11 506 771 bp) and 11.1 Mb, respectively; the duplication region and size of case 1 were 8p23.1-p11.1 (11 508 387-43 321 279 bp) and 31.8 Mb, respectively. The deletion region and size of case 2 were 8p23.3-p23.1 (46 385-7 809 878 bp) and 7.8 Mb, respectively; the duplication region and size of case 2 were 8p23.1-p11.21 (12 260 914-40 917 092 bp) and 28.7 Mb, respectively. The comparison with the DECIPHER and ISCA databases supported the previous viewpoint that 8p23.1 is associated with congenital heart disease and that the region between 7 809 878 and 11 506 771 bp may play a role in the severe cardiac defects associated with 8p23.1 deletions. Case 1 had serious cardiac abnormalities; its GATA4 was located in the duplication segment with an increased copy number, while SOX7 was located in the deletion segment with a decreased copy number. The region between 7 809 878 and 11 506 771 bp in 8p23.1 is associated with heart defects, and copy number variants of SOX7 and GATA4 may result in congenital heart disease.

  14. Incidence and Significance of Spontaneous ST Segment Re-elevation After Reperfused Anterior Acute Myocardial Infarction - Relationship With Infarct Size, Adverse Remodeling, and Events at 1 Year.

    PubMed

    Cuenin, Léo; Lamoureux, Sophie; Schaaf, Mathieu; Bochaton, Thomas; Monassier, Jean-Pierre; Claeys, Marc J; Rioufol, Gilles; Finet, Gérard; Garcia-Dorado, David; Angoulvant, Denis; Elbaz, Meyer; Delarche, Nicolas; Coste, Pierre; Metge, Marc; Perret, Thibault; Motreff, Pascal; Bonnefoy-Cudraz, Eric; Vanzetto, Gérald; Morel, Olivier; Boussaha, Inesse; Ovize, Michel; Mewton, Nathan

    2018-04-25

Up to 25% of patients with ST elevation myocardial infarction (STEMI) have ST segment re-elevation after initial regression post-reperfusion, and there are few data regarding its prognostic significance. Methods and Results: A standard 12-lead electrocardiogram (ECG) was recorded in 662 patients with anterior STEMI referred for primary percutaneous coronary intervention (PPCI). ECGs were recorded 60-90 min after PPCI and at discharge. ST segment re-elevation was defined as a ≥0.1-mV increase in STMax between the post-PPCI and discharge ECGs. Infarct size (assessed as creatine kinase [CK] peak), echocardiography at baseline and follow-up, and all-cause death and heart failure events at 1 year were assessed. In all, 128 patients (19%) had ST segment re-elevation. There was no difference between patients with and without re-elevation in infarct size (CK peak [mean±SD] 4,231±2,656 vs. 3,993±2,819 IU/L; P=0.402), left ventricular (LV) ejection fraction (50.7±11.6% vs. 52.2±10.8%; P=0.186), LV adverse remodeling (20.1±38.9% vs. 18.3±30.9%; P=0.631), or all-cause mortality and heart failure events (22 [19.8%] vs. 106 [19.2%]; P=0.887) at 1 year. Among anterior STEMI patients treated by PPCI, ST segment re-elevation was present in 19% and was not associated with increased infarct size or major adverse events at 1 year.

  15. Cryopreservation of in vitro grown nodal segments of Rauvolfia serpentina by PVS2 vitrification.

    PubMed

    Ray, Avik; Bhattacharya, Sabita

    2008-01-01

This paper describes the cryopreservation by PVS2 vitrification of Rauvolfia serpentina (L.) Benth. ex Kurz, an important tropical medicinal plant. The effects of explant type and size, sucrose preculture (duration and concentration), and vitrification treatment were tested. Preliminary experiments with PVS1, PVS2, and PVS3 produced shoot growth only for PVS2. When optimizing the PVS2 vitrification of nodal segments, segments of 0.31-0.39 cm were better than other nodal sizes or apices. Sucrose preculture had a positive role in survival and subsequent regrowth of the cryopreserved explants. Seven days on 0.5 M sucrose solution significantly improved the viability of nodal segments. PVS2 incubation for 45 minutes combined with a 7-day preculture gave the optimum survival of 66 percent. Plantlets derived after cryopreservation resumed growth and regenerated normally.

  16. Semiautomatic Segmentation of Glioma on Mobile Devices.

    PubMed

    Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun

    2017-01-01

Brain tumor segmentation is the first and the most critical step in clinical applications of radiomics. However, segmenting brain images by radiologists is labor intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. This paper uses hard edge multiplicative intrinsic component optimization to preprocess glioma medical images on the server side; the doctors can then supervise the segmentation process on mobile devices at their convenience. Since the preprocessed images have the same brightness for the same tissue voxels, they have a small data size (typically 1/10 of the original image size) and a simple structure of 4 types of intensity value. This observation thus allows follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reached 90% similarity, over 60% reached 85% similarity, and more than 80% reached 75% similarity. The comparisons with other segmentation methods also demonstrate both the efficiency and the stability of the proposed approach.

  17. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

Comprehensive quantitative evaluation of tumor segmentation technique on large scale clinical data sets is crucial for routine clinical use of CT based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale evaluation of segmentation techniques for other clinical applications.
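    Two quantities of the kind used in this evaluation can be sketched directly: a volume-overlap accuracy metric (Dice overlap, one commonly used choice; the abstract does not name its seven metrics) and the coefficient of variation for reproducibility of repeated volume measurements:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def coefficient_of_variation(volumes):
    """Reproducibility of repeated volume measurements: std / mean."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()

# Toy 10x10 masks: ground truth vs. a slightly shifted segmentation.
truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True
seg   = np.zeros((10, 10), dtype=bool); seg[3:8, 2:8] = True
print(dice(truth, seg))                                  # ≈ 0.909
print(coefficient_of_variation([10.2, 9.8, 10.5, 10.1]))  # a few percent
```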

  18. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

Planning radiotherapy and surgical procedures usually require onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method to dramatically reduce the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal user input that is reusable makes this a powerful technique for the segmentation of medical images.
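    The reduction of segmentation to a minimum s-t graph cut can be illustrated on a toy four-pixel "image". The Edmonds-Karp max-flow below is a generic sketch with invented capacities (unary terms as source/sink links, pairwise smoothness terms as neighbor links), not the authors' CRF formulation:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; by max-flow/min-cut duality the residual
    reachability from s gives the minimum s-t cut used for segmentation."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        v, aug = t, float("inf")          # find bottleneck on the path
        while v != s:
            u = parent[v]
            aug = min(aug, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                     # push flow along the path
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug
    reach = [False] * n                   # source side of the minimum cut
    reach[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if not reach[v] and capacity[u][v] - flow[u][v] > 0:
                reach[v] = True
                q.append(v)
    return total, reach

# Toy 1D "image": node 0 = source (object), node 5 = sink (background).
# Unary terms: pixels 1,2 look like object; pixels 3,4 like background.
cap = [[0] * 6 for _ in range(6)]
cap[0][1], cap[0][2], cap[0][3], cap[0][4] = 9, 8, 1, 1   # source links
cap[1][5], cap[2][5], cap[3][5], cap[4][5] = 1, 1, 8, 9   # sink links
for a, b in [(1, 2), (2, 3), (3, 4)]:                     # smoothness links
    cap[a][b] = cap[b][a] = 2
total, label = max_flow(cap, 0, 5)
print(total, label[1:5])  # → 6 [True, True, False, False]
```

    The cut cost of 6 is the sum of the two violated sink links, the two violated source links, and the one smoothness link crossing the object/background boundary.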

  19. 50 CFR 622.275 - Size limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...

  20. 50 CFR 622.275 - Size limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...

  1. 7 CFR 51.344 - Size.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range... the apples determined by the smallest opening through which it will pass. Application of Standards ...

  2. 7 CFR 51.344 - Size.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range... the apples determined by the smallest opening through which it will pass. Application of Standards ...

  3. 7 CFR 51.344 - Size.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range... the apples determined by the smallest opening through which it will pass. Application of Standards ...

  4. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information (gradient, intensity distributions, and regional-property terms) is used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  5. Three Essays In and Tests of Theoretical Urban Economics

    NASA Astrophysics Data System (ADS)

    Zhao, Weihua

This dissertation consists of three essays on urban economics. The three essays are related to urban spatial structure change, energy consumption, greenhouse gas emissions, and housing redevelopment. Chapter 1 answers the question: Does the classic Standard Urban Model still describe the growth of cities? Chapter 2 derives the implications of telework on urban spatial structure, energy consumption, and greenhouse gas emissions. Chapter 3 investigates the long run effects of minimum lot size zoning on neighborhood redevelopment. Chapter 1 identifies a new implication of the classic Standard Urban Model, the "unitary elasticity property" (UEP): the sum of the elasticity of central density and the elasticity of land area with respect to population change is approximately equal to unity. When this implication of the SUM is tested, it fits US cities fairly well. Further analysis demonstrates that topographic barriers and age of housing stock are the key factors explaining deviation from the UEP. Chapter 2 develops a numerical urban simulation model with households that are able to telework to investigate the urban form, congestion, energy consumption and greenhouse gas emission implications of telework. Simulation results suggest that by reducing transportation costs, telework causes sprawl, with associated longer commutes and consumption of larger homes, both of which increase energy consumption. Overall effects depend on who captures the gains from telework (workers versus firms), urban land use regulation such as height limits or greenbelts, and the fraction of workers participating in telework. The net effects of telework on energy use and GHG emissions are generally negligible. Chapter 3 applies dynamic programming to investigate the long run effects of minimum lot size zoning on neighborhood redevelopment.
With numerical simulation, comparative dynamic results show that minimum lot size zoning can delay initial land conversion and slow down demolition and housing redevelopment. Initially, minimum lot size zoning is not binding. However, as the city grows, it becomes binding and can effectively distort housing supply. It can lower both the floor area ratio and residential density, and reduce aggregate housing supply. Overall, minimum lot size zoning can stabilize the paths of structure/land ratios, housing service levels, structure density, and housing prices. In addition, minimum lot size zoning provides more incentive for developers to maintain the building, slowing structure deterioration and raising the minimum level of housing services provided over the life cycle of development.
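    The unitary elasticity property can be checked numerically under the stylized assumption that population is proportional to central density times land area, P = c · D0 · A, so that log-differentiation forces the two elasticities to sum to one. The 0.4 elasticity and the shape constant below are hypothetical illustrations, not estimates from the dissertation:

```python
import numpy as np

c = 0.35  # hypothetical shape constant in P = c * D0 * A

def central_density(P):
    """Hypothetical power-law response of central density to population."""
    return 2000.0 * P ** 0.4

def land_area(P):
    """Land area implied by the identity P = c * D0 * A."""
    return P / (c * central_density(P))

def elasticity(f, P, h=1e-6):
    """Numerical log-log derivative: d log f / d log P."""
    return (np.log(f(P * (1 + h))) - np.log(f(P))) / np.log(1 + h)

P = 1e6
eD = elasticity(central_density, P)
eA = elasticity(land_area, P)
print(eD, eA, eD + eA)  # ≈ 0.4, 0.6, 1.0
```

    Whatever power law central density follows, the land-area elasticity absorbs the remainder, which is the content of the UEP.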

  6. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimal obtrusiveness and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.

  7. Understanding the market for geographic information: A market segmentation and characteristics analysis

    NASA Technical Reports Server (NTRS)

    Piper, William S.; Mick, Mark W.

    1994-01-01

    Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.

  8. SU-F-T-592: A Delivery QA-Free Approach for Adaptive Therapy of Prostate Cancer with Static Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, T; Dooley, J; Zhu, T

    2016-06-15

Purpose: Clinical implementations of adaptive radiotherapy (ART) are limited mainly by the requirement of delivery QA (DQA) prior to treatment. Small segment size and small segment MU are two dominant factors causing failures of DQA. The aim of this project is to explore the feasibility of ART treatment without DQA by using a partial optimization approach. Methods: A retrospective simulation study was performed on two prostate cancer patients treated with SMLC-IMRT. The prescription was 180 cGy × 25 fractions with daily CT-on-rail imaging for target alignment. For each patient, seven daily CTs were selected randomly across the treatment course. The contours were deformably transferred from the simulation CT onto the daily CTs and modified appropriately. For each selected treatment, dose distributions from the original beams were calculated on the daily treatment CTs (DCT plan). An ART plan was also created by optimizing the segment MUs only, while the segment shapes were preserved and the minimum-MU constraint was respected. The overlaps between the PTV and the rectum and between the PTV and the bladder were normalized by the PTV volume; this ratio was used to characterize the difficulty of organ-at-risk (OAR) sparing. Results: Compared with the original plan, PTV coverage was compromised significantly in DCT plans (82% ± 7%), while all ART plans preserved PTV coverage. ART plans showed similar OAR sparing to the original plan, such as V40Gy = 11.2 cc (ART) vs. 11.4 cc (original) for the rectum and D10cc = 4580 cGy vs. 4605 cGy for the bladder. The sparing of the rectum/bladder depends on the overlap ratios; sparing in ART was either similar or improved when the overlap ratios in the treatment CTs were smaller than those in the original plan. Conclusion: A partial optimization method is developed that may make real-time ART feasible for selected patients. Future research is warranted to quantify the applicability of the proposed method.
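    The MU-only re-optimization with fixed segment shapes and a minimum-MU constraint can be sketched as a projected-gradient least-squares fit. The influence matrix, dose points, and MU values below are invented for illustration and are not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical influence matrix: dose per MU delivered by each of 5 fixed
# segments to 8 sampled dose points (segment shapes stay fixed; only the
# MU weights are re-optimized on the daily anatomy).
D = rng.uniform(0.0, 1.0, (8, 5))
target = D @ np.array([10.0, 20.0, 15.0, 5.0, 12.0])  # desired point doses

min_mu = 4.0  # minimum segment-MU constraint (deliverability)

def optimize_mu(D, target, min_mu, iters=20000):
    """Projected gradient for min ||D mu - target||^2 subject to mu >= min_mu."""
    lr = 0.9 / np.linalg.norm(D, ord=2) ** 2   # step below the stability limit
    mu = np.full(D.shape[1], min_mu)
    for _ in range(iters):
        grad = D.T @ (D @ mu - target)
        mu = np.maximum(mu - lr * grad, min_mu)  # project onto mu >= min_mu
    return mu

mu = optimize_mu(D, target, min_mu)
print(np.round(mu, 2))  # all entries respect the minimum-MU constraint
```

    A clinical system would optimize against dose-volume objectives rather than point doses, but the structure (fixed shapes, bounded MU weights) is the same.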

  9. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    PubMed

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
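    The two baseline segmentation schemes, FNSW and FOSW, can be sketched in a few lines (the window size and overlap fraction here are arbitrary illustrations):

```python
import numpy as np

def fnsw(signal, size):
    """Fixed-size non-overlapping sliding windows."""
    n = len(signal) // size
    return [signal[i * size:(i + 1) * size] for i in range(n)]

def fosw(signal, size, overlap=0.5):
    """Fixed-size overlapping sliding windows with fractional overlap."""
    step = max(1, int(size * (1 - overlap)))
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

acc = np.arange(10)       # stand-in for an accelerometer magnitude stream
print(len(fnsw(acc, 4)))  # 2 windows: samples [0..3] and [4..7]
print(len(fosw(acc, 4)))  # 4 windows starting at 0, 2, 4, 6
```

    Neither scheme is aligned with event boundaries, which is exactly the limitation EvenT-ML addresses by triggering segmentation on detected fall stages.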

  10. Lateralization of music processing with noises in the auditory cortex: an fNIRS study.

    PubMed

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. But, if the music was mixed with noises, the right-hemispheric lateralization increased. Particularly, if the noise was a little lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed the right-hemispheric lateralization: this is due to the subjects' effort to hear the music in the presence of noises. However, too much noise reduced the subjects' discerning efforts.
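    The mean-minus-minimum feature and the resulting lateralization call can be sketched with simulated responses (the waveforms below are invented placeholders, not fNIRS data):

```python
import numpy as np

def activation_feature(hrf):
    """Feature used to distinguish stimulus-evoked hemodynamics:
    mean minus minimum of the response for a given stimulus."""
    hrf = np.asarray(hrf, dtype=float)
    return hrf.mean() - hrf.min()

def lateralization(left_feat, right_feat):
    """Label the dominant hemisphere by the larger feature value."""
    return "right" if right_feat > left_feat else "left"

t = np.linspace(0, 20, 200)
left  = 0.3 * np.sin(t / 3)   # weaker simulated left-hemisphere response
right = 0.8 * np.sin(t / 3)   # stronger simulated right-hemisphere response
print(lateralization(activation_feature(left), activation_feature(right)))
```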

  11. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    PubMed Central

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study is to determine the effects of background noise on the hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. But, if the music was mixed with noises, the right-hemispheric lateralization has increased. Particularly, if the noise was a little bit lower than the music (i.e., music level 10~15%, noise level 10%), the entire subjects showed the right-hemispheric lateralization: This is due to the subjects' effort to hear the music in the presence of noises. However, too much noise has reduced the subjects' discerning efforts. PMID:25538583

  12. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
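
    The multi-scale LoG search described above can be sketched with standard tools. The following is a minimal illustration (not the authors' implementation), using SciPy's `gaussian_laplace` with scale-normalized responses so that blob-like regions produce comparable maxima across scales:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_response(image, sigmas):
    """Scale-normalized LoG responses. Bright blobs yield strongly negative
    LoG values, so responses are negated to turn blob centers into maxima;
    the sigma**2 factor makes responses comparable across scales."""
    img = image.astype(float)
    return np.stack([-(s ** 2) * gaussian_laplace(img, s) for s in sigmas])

# Synthetic "nodule": a Gaussian bright spot of width sigma = 3 at (32, 32).
x, y = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 3.0 ** 2))
sigmas = [1, 2, 3, 4, 6]
resp = log_blob_response(img, sigmas)
# The strongest response jointly localizes the blob center and its scale.
scale_idx, r, c = np.unravel_index(np.argmax(resp), resp.shape)
```

    On this synthetic blob the maximum lands at the blob center with the matching scale, which is the property that makes the estimated location insensitive to where a seed point is placed nearby.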

  13. Comparative Evaluation of the Corneal and Anterior Chamber Parameters Derived From Scheimpflug Imaging in Arab and South Asian Normal Eyes.

    PubMed

    Prakash, Gaurav; Srivastava, Dhruv; Avadhani, Kavitha; Thirumalai, Sandeep M; Choudhuri, Sounak

    2015-11-01

    To evaluate the differences in the normal corneal and anterior segment Scheimpflug parameters in Arab and South Asian eyes. This hospital-based study was performed at a cornea and refractive surgery service in Abu Dhabi. A total of 600 consecutive normal candidates of South Asian (group 1, n = 300) and Arab (group 2, n = 300) origins underwent Scheimpflug imaging (Sirius; Costruzione Strumenti Oftalmici, Italy). One eye was randomly selected for evaluation. The age and sex distributions in both groups were comparable. The pachymetric variables were statistically higher in group 2 (group 2 vs. group 1, 544.3 ± 32.2 μm vs. 535.1 ± 31.4 μm for central corneal thickness, 541.0 ± 32.6 μm vs. 531.9 ± 31.5 μm for minimum corneal thickness, 571.7 ± 43.2 μm vs. 558.1 ± 42.3 μm for apical thickness, and 58.1 ± 4.2 vs. 57.3 ± 4.3 mm³ for the corneal volume; P < 0.05). The anterior chamber volume (group 2 vs. group 1: 166.4 ± 16.4 vs. 161.6 ± 20.5 mm³) and angle (group 2 vs. group 1: 44.6 ± 6.2 vs. 43.5 ± 5.8 degrees) were also higher for group 2 (P < 0.05). Central corneal curvature and apical corneal curvature (apex K) were higher in group 1 (P < 0.05) with comparable astigmatism. The flat keratometry (K), steep K, and apex K were 43.6 ± 2.2 diopters (D), 44.9 ± 1.8 D, and 45.7 ± 1.8 D for group 1, and 43.1 ± 2.2 D, 44.5 ± 2 D, and 45.2 ± 1.9 D for group 2. The effect size (Cohen d) for significant parameters ranged from 0.2 to 0.3. Normal eyes of Arab ethnicity tend to have statistically thicker and flatter corneas and less-crowded anterior segments than those of their South Asian counterparts. These epidemiological differences have a mild to moderate biological effect size (Cohen d), but they should be considered when evaluating these eyes for anterior segment or corneal procedures.

  14. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
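
    The moment-based scale parameter used above for adaptive filtering, sigma²/mu, follows directly from the Gamma distribution's moments. A small sketch with synthetic speckle samples (the shape and scale values are illustrative, not the paper's):

```python
import numpy as np

def gamma_scale_moment(samples):
    """Moment-based estimate of the Gamma scale parameter: for
    X ~ Gamma(k, theta), mean = k*theta and var = k*theta**2,
    so var/mean estimates theta (the sigma^2/mu feature)."""
    return samples.var() / samples.mean()

rng = np.random.default_rng(0)
# Hypothetical speckle amplitudes for two tissue classes (values illustrative).
blood = rng.gamma(shape=4.0, scale=2.0, size=5000)
myocardium = rng.gamma(shape=4.0, scale=6.0, size=5000)
theta_blood = gamma_scale_moment(blood)
theta_myo = gamma_scale_moment(myocardium)
# The scale estimates recover the generating parameters and so
# separate the two classes, as the parametric-image step relies on.
```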

  15. Automatic segmentation and measurements of gestational sac using static B-mode ultrasound images

    NASA Astrophysics Data System (ADS)

    Ibrahim, Dheyaa Ahmed; Al-Assam, Hisham; Du, Hongbo; Farren, Jessica; Al-karawi, Dhurgham; Bourne, Tom; Jassim, Sabah

    2016-05-01

    Ultrasound imagery has been widely used for medical diagnosis. Ultrasound scanning is safe and non-invasive, and is hence used throughout pregnancy for monitoring growth. In the first trimester, an important measurement is that of the Gestation Sac (GS). The task of measuring GS size from an ultrasound image is done manually by a gynecologist. This paper presents a new approach to automatically segment a GS from a static B-mode image by exploiting its geometric features for early identification of miscarriage cases. To accurately locate the GS in the image, the proposed solution uses the wavelet transform to suppress speckle noise by eliminating the high-frequency sub-bands, preparing an enhanced image. This is followed by a segmentation step that isolates the GS through several stages. First, the mean value is used as a threshold to binarise the image, followed by filtering unwanted objects based on their circularity, size, and mean greyscale. The mean value of each object is then used to further select candidate objects. A region-growing technique is applied as a post-processing step to finally identify the GS. We evaluated the effectiveness of the proposed solution by first comparing the automatic size measurements of the segmented GS against the manual measurements, and then integrating the proposed segmentation solution into a classification framework for identifying miscarriage cases and pregnancies of unknown viability (PUV). Both test results demonstrate that the proposed method is effective in segmenting the GS and classifying the outcomes with a high level of accuracy (sensitivity (miscarriage) of 100% and specificity (PUV) of 99.87%).
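
    The first two segmentation stages (mean-value thresholding, then filtering of connected objects) can be sketched as follows. This is a simplified stand-in that keeps only the size test, omits the circularity and grey-level tests, and assumes the sac appears darker than its surroundings:

```python
import numpy as np
from scipy import ndimage

def candidate_objects(image, min_size):
    """Mean-value threshold, then connected components filtered by pixel
    count. Assumes the sac is darker than the background; the paper's
    circularity and grey-level filters are omitted for brevity."""
    binary = image < image.mean()
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.isin(labels, keep), len(keep)

# Toy B-mode-like image: bright background, one large dark "sac", one speck.
img = np.full((100, 100), 200.0)
img[30:70, 30:70] = 50.0
img[5, 5] = 40.0
mask, n_kept = candidate_objects(img, min_size=100)
# Only the large dark region survives the size filter.
```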

  16. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters Maximum...

  17. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classifications. 51.2836 Section 51.2836...-Granex-Grano and Creole Types) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum...

  18. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classifications. 51.2836 Section 51.2836...-Granex-Grano and Creole Types) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum...

  19. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters Maximum...

  20. 7 CFR 51.2836 - Size classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters Maximum...

  1. Population demographics and genetic diversity in remnant and translocated populations of sea otters

    USGS Publications Warehouse

    Bodkin, James L.; Ballachey, Brenda E.; Cronin, M.A.; Scribner, K.T.

    1999-01-01

    The effects of small population size on genetic diversity and subsequent population recovery are theoretically predicted, but few empirical data are available to describe those relations. We use data from four remnant and three translocated sea otter (Enhydra lutris) populations to examine relations among magnitude and duration of minimum population size, population growth rates, and genetic variation. Mitochondrial (mt)DNA haplotype diversity was correlated with the number of years at minimum population size (r = -0.741, p = 0.038) and minimum population size (r = 0.709, p = 0.054). We found no relation between population growth and haplotype diversity, although growth was significantly greater in translocated than in remnant populations. Haplotype diversity in populations established from two sources was higher than in a population established from a single source and was higher than in the respective source populations. Haplotype frequencies in translocated populations of founding sizes of 4 and 28 differed from expected, indicating genetic drift and differential reproduction between source populations, whereas haplotype frequencies in a translocated population with a founding size of 150 did not. Relations between population demographics and genetic characteristics suggest that genetic sampling of source and translocated populations can provide valuable inferences about translocations.

  2. The costal skeleton of the Regourdou 1 Neandertal.

    PubMed

    Gómez-Olivencia, Asier; Holliday, Trenton; Madelaine, Stéphane; Couture-Veschambre, Christine; Maureille, Bruno

    2018-02-26

    The morphology and size of the Neandertal thorax is a subject of growing interest due to its link to general aspects of body size and shape, including physiological aspects related to bioenergetics and activity budgets. However, the number of well-preserved adult Neandertal costal remains is still low. The recent finding of new additional costal remains from the Regourdou 1 (R1) skeleton has rendered this skeleton as one of the most complete Neandertal costal skeletons with a minimum of 18 ribs represented, five of which are complete or virtually complete. Here we describe for the first time all the rib remains from R1 and compare them to a large modern Euroamerican male sample as well as to other published Neandertal individuals. The costal skeleton of this individual shows significant metric and morphological differences from our modern human male comparative sample. The perceived differences include: dorsoventrally large 1st and 2nd ribs, 3rd ribs with a very closed dorsal curvature and large maximum diameters at the posterior angle, a large tubercle-iliocostal line distance in the 4th rib, thick shafts at the dorsal end of its 6th ribs, thick mid-shafts of the 8th ribs, large articular tubercles at the 9th ribs, and thick shafts of the 11th and 12th ribs. Here we also describe a new mesosternal fragment: the left lateral half of sternebral segments 4 and 5. This portion reveals that the mesosternum of R1 had a sternal foramen in its inferiormost preserved sternal segment and supports previous estimation of the total length of this mesosternum. The new costal remains from R1 support the view that Neandertals, when compared with modern humans, show a significantly different thorax, consistent with differences found in other anatomical regions such as the vertebral column and pelvis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts

    DTIC Science & Technology

    2015-04-30

    fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC, as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values

  4. Microplitis demolitor bracovirus genome segments vary in abundance and are individually packaged in virions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, Markus H.; Inman, Ross B.; Strand, Michael R.

    2007-03-01

    Polydnaviruses (PDVs) are distinguished by their unique association with parasitoid wasps and their segmented, double-stranded (ds) DNA genomes that are non-equimolar in abundance. Relatively little is actually known, however, about genome packaging or segment abundance of these viruses. Here, we conducted electron microscopy (EM) and real-time polymerase chain reaction (PCR) studies to characterize packaging and segment abundance of Microplitis demolitor bracovirus (MdBV). Like other PDVs, MdBV replicates in the ovaries of females where virions accumulate to form a suspension called calyx fluid. Wasps then inject a quantity of calyx fluid when ovipositing into hosts. The MdBV genome consists of 15 segments that range from 3.6 (segment A) to 34.3 kb (segment O). EM analysis indicated that MdBV virions contain a single nucleocapsid that encapsidates one circular DNA of variable size. We developed a semi-quantitative real-time PCR assay using SYBR Green I. This assay indicated that five (J, O, H, N and B) segments of the MdBV genome accounted for more than 60% of the viral DNAs in calyx fluid. Estimates of relative segment abundance using our real-time PCR assay were also very similar to DNA size distributions determined from micrographs. Analysis of parasitized Pseudoplusia includens larvae indicated that copy number of MdBV segments C, B and J varied between hosts but their relative abundance within a host was virtually identical to their abundance in calyx fluid. Among-tissue assays indicated that each viral segment was most abundant in hemocytes and least abundant in salivary glands. However, the relative abundance of each segment to one another was similar in all tissues. We also found no clear relationship between MdBV segment and transcript abundance in hemocytes and fat body.

  5. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  6. Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi

    2016-03-01

    This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach successfully measures the volume and quantifies the distribution of the CT numbers of mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially on already acquired scans.

  7. Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.

    PubMed

    Zhao, Liya; Jia, Kebin

    2016-01-01

    Early brain tumor detection and diagnosis are critical in the clinic. Thus segmentation of the focused tumor area needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, which are both important for pixel classification and recognition. Besides, brain tumors can appear anywhere in the brain and be of any size and shape across patients. We design a three-stream framework, named multiscale CNNs, which automatically detects the optimum top three scales of the image sizes and combines information from the different scales of the regions around each pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized by MICCAI 2013, are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
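
    The core input construction of such a three-stream network, patches of several sizes around one pixel brought to a common resolution, can be sketched with plain array operations (the patch sizes below are illustrative, not those used by the authors):

```python
import numpy as np

def multiscale_patches(image, center, half_sizes=(12, 24, 48)):
    """Crop square patches of several sizes around one pixel and subsample
    each to the smallest patch's resolution, mimicking the inputs of a
    three-stream multiscale CNN (patch sizes here are illustrative)."""
    r, c = center
    target = 2 * half_sizes[0]
    streams = []
    for h in half_sizes:
        patch = image[r - h:r + h, c - h:c + h]
        step = (2 * h) // target            # 1, 2, 4 for the defaults
        streams.append(patch[::step, ::step])
    return streams

img = np.arange(200 * 200, dtype=float).reshape(200, 200)
streams = multiscale_patches(img, center=(100, 100))
# All three streams share one shape, so they can feed parallel CNN branches
# whose features are later combined for the center pixel's classification.
```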

  8. Usher syndrome type 1–associated cadherins shape the photoreceptor outer segment

    PubMed Central

    Parain, Karine; Aghaie, Asadollah; Picaud, Serge

    2017-01-01

    Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15–containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal. PMID:28495838

  9. Usher syndrome type 1-associated cadherins shape the photoreceptor outer segment.

    PubMed

    Schietroma, Cataldo; Parain, Karine; Estivalet, Amrit; Aghaie, Asadollah; Boutet de Monvel, Jacques; Picaud, Serge; Sahel, José-Alain; Perron, Muriel; El-Amraoui, Aziz; Petit, Christine

    2017-06-05

    Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15-containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal. © 2017 Schietroma et al.

  10. Ultra-Stable Segmented Telescope Sensing and Control Architecture

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Bolcar, Matthew; Knight, Scott; Redding, David

    2017-01-01

    The LUVOIR team is conducting two full architecture studies. Architecture A, a 15-meter telescope that folds up in an 8.4-m SLS Block 2 shroud, is nearly complete. Architecture B, a 9.2-meter telescope that uses an existing fairing size, will begin study this fall. This talk will summarize the ultra-stable architecture of the 15-m segmented telescope, including the basic requirements, the rationale for the architecture, the technologies employed, and the expected performance. This work builds on several dynamics and thermal studies performed for ATLAST segmented telescope configurations. The most important new element is an approach to actively control segments against segment-to-segment motions, which will be discussed later.

  11. Unsupervised color image segmentation using a lattice algebra clustering technique

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green- Blue (RGB) color space. The proposed technique is a two step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebychev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
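
    The Chebyshev (L-infinity) distance used in the second step, and the assignment of pixels to their nearest extreme color, can be sketched as follows. The maximal-box construction is simplified here to nearest-extreme assignment, and the extreme color pixels are hypothetical, not taken from the paper:

```python
import numpy as np

def chebyshev(pixels, color):
    """Chebyshev (L-infinity) distance from each RGB pixel to one color."""
    return np.max(np.abs(pixels - color), axis=-1)

def assign_pixels(pixels, extremes):
    """Assign each pixel to its nearest extreme color under the Chebyshev
    metric; a simplified stand-in for the maximal-box clustering step."""
    d = np.stack([chebyshev(pixels, e) for e in extremes])  # (k, n)
    return np.argmin(d, axis=0)

# Hypothetical extreme color pixels (step one of the method would find these
# via the min-W and max-M lattice auto-associative memories).
extremes = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=float)
pixels = np.array([[200, 30, 30], [10, 240, 20], [30, 40, 250]], dtype=float)
labels = assign_pixels(pixels, extremes)
# Each pixel is clustered with the extreme color it lies closest to.
```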

  12. Appliance Efficiency Standards and Price Discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spurlock, Cecily Anna

    2013-05-08

    I explore the effects of two simultaneous changes in minimum energy efficiency and ENERGY STAR standards for clothes washers. Adapting the Mussa and Rosen (1978) and Ronnen (1991) second-degree price discrimination model, I demonstrate that clothes washer prices and menus adjusted to the new standards in patterns consistent with a market in which firms had been price discriminating. In particular, I show evidence of discontinuous price drops at the time the standards were imposed, driven largely by mid-low efficiency segments of the market. The price discrimination model predicts this result. On the other hand, in a perfectly competitive market, prices should increase for these market segments. Additionally, new models proliferated in the highest efficiency market segment following the standard changes. Finally, I show that firms appeared to use different adaptation strategies at the two instances of the standards changing.

  13. Electrocardiographic evaluation of reperfusion therapy in patients with acute myocardial infarction.

    PubMed

    Clemmensen, P

    1996-02-01

    The present thesis is based on 6 previously published clinical studies in patients with AMI. Thrombolytic therapy for patients with AMI improves early infarct coronary artery patency, limits AMI size, and improves left ventricular function and survival, as demonstrated in large placebo-controlled clinical trials. With the advent of interventions aimed at limiting AMI size, it became important to assess the amount of ischemic myocardium in the early phase of AMI, and to develop noninvasive methods for evaluation of these therapies. The aims of the present studies were to develop such methods. The studies included 267 patients with AMI admitted up to 12 hours after onset of symptoms. All included patients had acute ECG ST-segment changes indicating subepicardial ischemia, and patients with bundle branch block were excluded. Serial ECGs were analyzed with quantitative ST-segment measurements in the acute phase and compared to the Selvester QRS score-estimated final AMI size. These ECG indices were compared to and validated through comparisons with other independent noninvasive and invasive methods used for the purpose of evaluating patients with AMI treated with thrombolytic therapy. It was found that in patients with first AMI not treated with reperfusion therapies, the QRS score-estimated final AMI size can be predicted from the acute ST-segment elevation. Based on the number of ECG leads with ST-segment elevation and its summated magnitude, formulas were developed to provide an "ST score" for estimating the amount of myocardium in jeopardy during the early phase of AMI. The ST-segment deviation present in the ECG in patients with documented occlusion of the infarct-related coronary artery was subsequently shown to correlate with the degree of regional and global left ventricular dysfunction.
    Because serial changes in ST-segment elevation during the acute phase of AMI were believed to reflect changes in myocardial ischemia and thus possibly infarct artery patency status, the summated ST-segment elevation present on the admission ECG was compared to that present after administration of intravenous thrombolytic therapy, and immediately prior to angiographic visualization of the infarct-related coronary artery. The entire spectrum of sensitivities and specificities, derived from different cut-off values for the degree of ST-segment normalization, was described for the first time. It was found that a 20% decrease in ST-segment elevation could predict coronary artery patency with a high level of accuracy: positive predictive value = 88% and negative predictive value = 80%.(ABSTRACT TRUNCATED)
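
    The reported predictive values follow from a standard 2×2 confusion table. The sketch below shows the arithmetic with illustrative counts chosen only to reproduce similar values; they are not the thesis's actual data:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from a 2x2 confusion table:
    PPV = TP / (TP + FP), NPV = TN / (TN + FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative counts only, picked to give PPV = 0.88 and NPV = 0.80,
# matching the reported accuracy of the 20% ST-normalization cut-off.
ppv, npv = predictive_values(tp=44, fp=6, tn=40, fn=10)
```

    Sweeping the cut-off value and recomputing sensitivity and specificity at each point is exactly how the "entire spectrum" described above is produced.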

  14. Precise Alignment and Permanent Mounting of Thin and Lightweight X-ray Segments

    NASA Technical Reports Server (NTRS)

    Biskach, Michael P.; Chan, Kai-Wing; Hong, Melinda N.; Mazzarella, James R.; McClelland, Ryan S.; Norman, Michael J.; Saha, Timo T.; Zhang, William W.

    2012-01-01

    To provide observations that support current research efforts in high-energy astrophysics, future X-ray telescope designs must provide matching or better angular resolution while significantly increasing the total collecting area. In such a design the permanent mounting of thin and lightweight segments is critical to the overall performance of the complete X-ray optic assembly. The thin and lightweight segments used in the assembly of the modules are designed to maintain and/or exceed the resolution of existing X-ray telescopes while providing a substantial increase in collecting area. Such thin and delicate X-ray segments are easily distorted and yet must be aligned to the arcsecond level and retain accurate alignment for many years. The Next Generation X-ray Optic (NGXO) group at NASA Goddard Space Flight Center has designed, assembled, and implemented new hardware and procedures with the short-term goal of aligning three pairs of X-ray segments in a technology demonstration module while maintaining 10 arcsec alignment through environmental testing, as part of the eventual design and construction of a full-sized module capable of housing hundreds of X-ray segments. The recent attempts at multiple segment pair alignment and permanent mounting are described along with an overview of the procedure used. A look at what the next year will bring for the alignment and permanent segment mounting effort illustrates some of the challenges left to overcome before an attempt to populate a full-sized module can begin.

  15. Minimum length from quantum mechanics and classical general relativity.

    PubMed

    Calmet, Xavier; Graesser, Michael; Hsu, Stephen D H

    2004-11-19

We derive fundamental limits on measurements of position, arising from quantum mechanics and classical general relativity. First, we show that any primitive probe or target used in an experiment must be larger than the Planck length lP. This suggests a Planck-size minimum ball of uncertainty in any measurement. Next, we study interferometers (such as LIGO) whose precision is much finer than the size of any individual component and which are hence not obviously limited by the minimum ball. Nevertheless, we deduce a fundamental limit on their accuracy of order lP. Our results imply a device-independent limit on possible position measurements.
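The Planck length lP referred to here follows directly from the fundamental constants; a quick numerical check using standard CODATA values:

```python
import math

# CODATA values of the constants entering l_P = sqrt(hbar * G / c^3):
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)
print(f"Planck length l_P = {l_P:.3e} m")  # ~1.6e-35 m
```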

  16. Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research

    NASA Astrophysics Data System (ADS)

    Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.

    2010-12-01

Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize the scanning, image reconstruction/segmentation, and pore-space analysis capabilities of a new-generation benchtop MCT scanner and its associated software package. To demonstrate applicability to soil research, the project focused on determination of porosities and pore-size distributions of two Brazilian Oxisols from segmented MCT data. Effects of metal filters and various acquisition parameters (e.g., total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
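Deriving porosity from segmented MCT data reduces to a voxel count; a minimal sketch on a synthetic binary volume (the 35% pore fraction and 64-voxel cube are arbitrary illustrations, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical segmented MCT volume: 1 = pore voxel, 0 = solid matrix.
volume = (rng.random((64, 64, 64)) < 0.35).astype(np.uint8)

# Porosity is simply the pore-voxel fraction of the segmented volume:
porosity = volume.mean()
print(f"porosity ~ {porosity:.3f}")
```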

  17. Edges in CNC polishing: from mirror-segments towards semiconductors, paper 1: edges on processing the global surface.

    PubMed

    Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony

    2012-08-27

    Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.

  18. Early Use of N-acetylcysteine With Nitrate Therapy in Patients Undergoing Primary Percutaneous Coronary Intervention for ST-Segment-Elevation Myocardial Infarction Reduces Myocardial Infarct Size (the NACIAM Trial [N-acetylcysteine in Acute Myocardial Infarction]).

    PubMed

    Pasupathy, Sivabaskari; Tavella, Rosanna; Grover, Suchi; Raman, Betty; Procter, Nathan E K; Du, Yang Timothy; Mahadavan, Gnanadevan; Stafford, Irene; Heresztyn, Tamila; Holmes, Andrew; Zeitz, Christopher; Arstall, Margaret; Selvanayagam, Joseph; Horowitz, John D; Beltrame, John F

    2017-09-05

Contemporary ST-segment-elevation myocardial infarction management involves primary percutaneous coronary intervention, with ongoing studies focusing on infarct size reduction using ancillary therapies. N-acetylcysteine (NAC) is an antioxidant with reactive oxygen species scavenging properties that also potentiates the effects of nitroglycerin and thus represents a potentially beneficial ancillary therapy in primary percutaneous coronary intervention. The NACIAM trial (N-acetylcysteine in Acute Myocardial Infarction) examined the effects of NAC on infarct size in patients with ST-segment-elevation myocardial infarction undergoing percutaneous coronary intervention. This randomized, double-blind, placebo-controlled, multicenter study evaluated the effects of intravenous high-dose NAC (29 g over 2 days) with background low-dose nitroglycerin (7.2 mg over 2 days) on early cardiac magnetic resonance imaging-assessed infarct size. Secondary end points included cardiac magnetic resonance-determined myocardial salvage and creatine kinase kinetics. Of 112 randomized patients with ST-segment-elevation myocardial infarction, 75 (37 in NAC group, 38 in placebo group) underwent early cardiac magnetic resonance imaging. Median duration of ischemia pretreatment was 2.4 hours. With background nitroglycerin infusion administered to all patients, those randomized to NAC exhibited an absolute 5.5% reduction in cardiac magnetic resonance-assessed infarct size relative to placebo (median, 11.0% [interquartile range, 4.1-16.3] versus 16.5% [interquartile range, 10.7-24.2]; P=0.02). Myocardial salvage was approximately doubled in the NAC group (60%; interquartile range, 37-79) compared with placebo (27%; interquartile range, 14-42; P<0.01), and median creatine kinase areas under the curve were 22 000 and 38 000 IU·h in the NAC and placebo groups, respectively (P=0.08).
High-dose intravenous NAC administered with low-dose intravenous nitroglycerin is associated with reduced infarct size in patients with ST-segment-elevation myocardial infarction undergoing percutaneous coronary intervention. A larger study is required to assess the impact of this therapy on clinical cardiac outcomes. Australian New Zealand Clinical Trials Registry. URL: http://www.anzctr.org.au/. Unique identifier: 12610000280000. © 2017 American Heart Association, Inc.

  19. Minimum Hamiltonian Ascent Trajectory Evaluation (MASTRE) program (update to automatic flight trajectory design, performance prediction, and vehicle sizing for support of Shuttle and Shuttle derived vehicles) engineering manual

    NASA Technical Reports Server (NTRS)

    Lyons, J. T.

    1993-01-01

The Minimum Hamiltonian Ascent Trajectory Evaluation (MASTRE) program and its predecessors, the ROBOT and the RAGMOP programs, have had a long history of supporting MSFC in the simulation of space boosters for the purpose of performance evaluation. The ROBOT program was used in the simulation of the Saturn 1B and Saturn 5 vehicles in the 1960's and provided the first utilization of the minimum Hamiltonian (or min-H) methodology and the steepest ascent technique to solve the optimum trajectory problem. The advent of the Space Shuttle in the 1970's and its complex airplane design required a redesign of the trajectory simulation code since aerodynamic flight and controllability were required for proper simulation. The RAGMOP program was the first attempt to incorporate the complex equations of the Space Shuttle into an optimization tool by using an optimization method based on steepest ascent techniques (but without the min-H methodology). Using the complex partial derivatives developed for the Space Shuttle configuration and techniques from the RAGMOP program, the ROBOT program was redesigned to incorporate these additional complexities. This redesign created the MASTRE program, which was referred to as the Minimum Hamiltonian Ascent Shuttle TRajectory Evaluation program at that time. Unique to this program were first-stage (or booster) nonlinear aerodynamics, upper-stage linear aerodynamics, engine control via moment balance, liquid and solid thrust forces, variable liquid throttling to maintain constant acceleration limits, and a total upgrade of the equations used in the forward and backward integration segments of the program. This modification of the MASTRE code has been used to simulate the new space vehicles associated with the National Launch Systems (NLS).
Although not as complicated as the Space Shuttle, the simulation and analysis of the NLS vehicles required additional modifications to the MASTRE program in the areas of providing additional flexibility in the use of the program, allowing additional optimization options, and providing special options for the NLS configuration.

  20. Behavior and Microstructure in Cryomilled Aluminum alloy Containing Diamondoids Nanoparticles

    NASA Astrophysics Data System (ADS)

    Hanna, Walid Magdy

Aluminum (Al) alloys have been the materials of choice for both civil and military aircraft structures. Primary among these alloys are 6061 Al and 5083 Al, which have been used for several structural applications, including those in the aerospace and automobile industries. It is desirable to enhance strength in Al alloys beyond that achieved via traditional techniques such as precipitation hardening. Recent developments have indicated that strengthening via grain refinement is an effective approach since, according to the Hall-Petch relation, strength significantly increases as grain size decreases. Innovative severe plastic deformation techniques, such as cryomilling, are successful in refining grain size. These techniques lead to a minimum grain size that is the result of a dynamic balance between the formation of dislocation structure and its recovery by thermal processes. According to Mohamed's model, each metal is characterized by a minimum grain size that is determined by materials parameters such as the stacking fault energy and the activation energy for diffusion. In the present dissertation, 6061 Al and 5083 Al were synthesized using cryomilling. Microstructural characterization was extensively carried out to monitor grain size changes. A close examination of the morphology of the 6061 Al powder particles revealed that in the early milling stages, the majority of the particles changed from spheres to thin disk-shaped particles. This change was attributed to the high degree of plastic deformation generated by the impact energy during ball-powder-ball collisions. Both transmission electron microscopy (TEM) and X-ray diffraction (XRD) were used to monitor the change in grain size as a function of milling time.
The results of both techniques demonstrated a close agreement with respect to two observations: (a) during cryomilling, the grain size of 6061 Al decreased with milling time, and (b) after 15 h of milling, the grain size approached a minimum value of about 22 nm, which is in the range reported for Al (18 nm--24 nm). Despite this agreement, there was a discrepancy: for grain sizes > 40 nm, the grain size measured by TEM was appreciably larger than that inferred from XRD. It was suggested that this discrepancy was most likely related to the limited ability of XRD to accurately measure grain sizes > 100 nm. It was reported that the average grain size of the as-milled powders of 5083 Al alloy was about 20 nm, and that when the as-milled powders were exposed to elevated temperatures or consolidated via hot isostatic pressing and extruded, the average grain size increased to about 250 nm. Very recent results have indicated success in maintaining the thermal stability of Al by adding diamantane during milling. 5083 Al powders were cryomilled with 0.5 wt. % diamantane for 8 hours, producing mechanically alloyed powders with an average grain size of 17 nm. The grain size remained nanocrystalline (less than 100 nm) for the 5083 Al alloy with 0.5% diamantane, even after 48 h at the highest temperature of 773 K. The effect of diamantane on the thermal stability of cryomilled nanocrystalline 5083 Al alloy was investigated by heating the powder in an inert gas atmosphere over a temperature range from 473 K to 773 K and time intervals between 0.5 h and 48 h. The average grain size was observed to remain in the nanoscale range (less than 100 nm). The thermal stability results were found to be consistent with a grain growth model based on drag forces exerted by dispersed particles against grain boundary migration (the Burke model).
As observed for other cryomilled Al alloys, two grain growth regimes were identified using this model: one at relatively low temperatures (473--623 K), where the activation energy is about 1.9 kJ/mole, and another at higher temperatures, where the activation energy is about 18 kJ/mole. The presence of the former region was explained in terms of stress relaxation facilitated by less stable processes such as recovery of dislocation segments or sub-boundary remnants, while the latter region was attributed to grain boundary realignment and annihilation of grain boundary remnants.
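The Hall-Petch relation invoked above, sigma_y = sigma_0 + k / sqrt(d), can be sketched numerically; the parameter values below are illustrative only, not fitted to the 6061 or 5083 Al alloys studied here:

```python
import math

def hall_petch(sigma0_mpa, k, d_m):
    """Hall-Petch relation sigma_y = sigma_0 + k / sqrt(d):
    yield strength rises as grain size d (in meters) decreases."""
    return sigma0_mpa + k / math.sqrt(d_m)

# Illustrative parameters (MPa and MPa*m^0.5), for demonstration only:
sigma0, k = 20.0, 0.07
for d_nm in (10_000, 1_000, 100, 22):
    sigma = hall_petch(sigma0, k, d_nm * 1e-9)
    print(f"d = {d_nm:>6} nm -> sigma_y ~ {sigma:6.0f} MPa")
```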

  1. Segmentation-based L-filtering of speckle noise in ultrasonic images

    NASA Astrophysics Data System (ADS)

    Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis

    1994-05-01

We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. We thus obtain a bank of L-filters corresponding to, and operating on, different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented which verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
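An L-filter, the building block of the proposed filter bank, is a weighted sum of the order statistics in each sliding window. A minimal sketch with illustrative weights (the paper designs the weights per region by minimum mean-squared error, which is not reproduced here):

```python
import numpy as np

def l_filter(signal, weights):
    """L-filter: a weighted sum of the order statistics (sorted samples)
    of each sliding window; window length equals len(weights)."""
    n = len(weights)
    out = np.empty(len(signal) - n + 1)
    for i in range(len(out)):
        window = np.sort(signal[i:i + n])  # order statistics
        out[i] = np.dot(weights, window)
    return out

x = np.array([1.0, 9.0, 2.0, 3.0, 8.0, 2.5])
# The median filter is the special case with all weight on the middle
# order statistic:
print(l_filter(x, np.array([0.0, 1.0, 0.0])))  # [2. 3. 3. 3.]
```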

  2. Quantitative Research on the Minimum Wage

    ERIC Educational Resources Information Center

    Goldfarb, Robert S.

    1975-01-01

    The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)

  3. Position Between Trunk and Pelvis During Gait Depending on the Gross Motor Function Classification System.

    PubMed

    Sanz-Mengibar, Jose Manuel; Altschuck, Natalie; Sanchez-de-Muniain, Paloma; Bauer, Christian; Santonja-Medina, Fernando

    2017-04-01

To understand whether there is a trunk postural control threshold in the sagittal plane for the transition between Gross Motor Function Classification System (GMFCS) levels, measured with 3-dimensional gait analysis. Spine-angle kinematics from 97 children with spastic bilateral cerebral palsy, obtained according to the Plug-In Gait model (Vicon), were plotted relative to their GMFCS level. Only average and minimum values of the lumbar spine segment correlated with GMFCS levels. Maximal values at loading response correlated independently with age at all functional levels. Average and minimum values were significant when analyzing age in combination with GMFCS level. There are specific postural control patterns in the average and minimum values for the position between trunk and pelvis in the sagittal plane during gait, for the transition among GMFCS I-III levels. Higher classifications of gross motor skills correlate with more extended spine angles.

  4. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on an arbitrarily selected reference MR image. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piecewise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  5. Varying behavior of different window sizes on the classification of static and dynamic physical activities from a single accelerometer.

    PubMed

    Fida, Benish; Bernabucci, Ivan; Bibbo, Daniele; Conforto, Silvia; Schmid, Maurizio

    2015-07-01

Accuracy of systems able to recognize daily living activities in real time heavily depends on the signal segmentation step. So far, windowing approaches are used to segment data, and the window size is usually chosen based on previous studies. However, the literature is vague on the investigation of its effect on the obtained activity recognition accuracy when both short- and long-duration activities are considered. In this work, we present the impact of window size on the recognition of daily living activities, where transitions between different activities are also taken into account. The study was conducted on nine participants who wore a tri-axial accelerometer on their waist and performed some short (sitting, standing, and transitions between activities) and long (walking, stair descending and stair ascending) duration activities. Five different classifiers were tested, and among the different window sizes, a 1.5 s window size was found to represent the best trade-off in recognition among activities, with an obtained accuracy well above 90%. Differences in recognition accuracy for each activity highlight the utility of developing adaptive segmentation criteria based on the duration of the activities. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
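The windowing step described above can be sketched as a fixed-length segmentation of the sample stream; the 50 Hz sampling rate and 50% overlap below are assumptions for illustration, not parameters taken from the study:

```python
import numpy as np

def segment_windows(samples, fs_hz, window_s=1.5, overlap=0.5):
    """Split a sample stream into fixed-length windows of window_s
    seconds with fractional overlap, as in windowing-based activity
    recognition."""
    win = int(window_s * fs_hz)
    step = max(1, int(win * (1 - overlap)))
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, step)]

fs = 50  # Hz -- an assumed sampling rate
stream = np.zeros(fs * 10)  # 10 s of dummy accelerometer-magnitude data
windows = segment_windows(stream, fs)
print(len(windows), "windows of", len(windows[0]), "samples")
```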

  6. Perioperative strategy in colonic surgery; LAparoscopy and/or FAst track multimodal management versus standard care (LAFA trial)

    PubMed Central

    Wind, Jan; Hofland, Jan; Preckel, Benedikt; Hollmann, Markus W; Bossuyt, Patrick MM; Gouma, Dirk J; van Berge Henegouwen, Mark I; Fuhring, Jan Willem; Dejong, Cornelis HC; van Dam, Ronald M; Cuesta, Miguel A; Noordhuis, Astrid; de Jong, Dick; van Zalingen, Edith; Engel, Alexander F; Goei, T Hauwy; de Stoppelaar, I Erica; van Tets, Willem F; van Wagensveld, Bart A; Swart, Annemiek; van den Elsen, Maarten JLJ; Gerhards, Michael F; de Wit, Laurens Th; Siepel, Muriel AM; van Geloven, Anna AW; Juttmann, Jan-Willem; Clevers, Wilfred; Bemelman, Willem A

    2006-01-01

Background Recent developments in large bowel surgery are the introduction of laparoscopic surgery and the implementation of multimodal fast track recovery programs. Both focus on a faster recovery and shorter hospital stay. The randomized controlled multicenter LAFA-trial (LAparoscopy and/or FAst track multimodal management versus standard care) was conceived to determine whether laparoscopic surgery, fast track perioperative care or a combination of both is to be preferred over open surgery with standard care in patients having segmental colectomy for malignant disease. Methods/design The LAFA-trial is a double blinded, multicenter trial with a 2 × 2 balanced factorial design. Patients eligible for segmental colectomy for malignant colorectal disease i.e. right and left colectomy and anterior resection will be randomized to either open or laparoscopic colectomy, and to either standard care or the fast track program. This factorial design produces four treatment groups; open colectomy with standard care (a), open colectomy with fast track program (b), laparoscopic colectomy with standard care (c), and laparoscopic surgery with fast track program (d). Primary outcome parameter is postoperative hospital length of stay including readmission within 30 days. Secondary outcome parameters are quality of life two and four weeks after surgery, overall hospital costs, morbidity, patient satisfaction and readmission rate. Based on a mean postoperative hospital stay of 9 +/- 2.5 days a group size of 400 patients (100 each arm) can reliably detect a minimum difference of 1 day between the four arms (alpha = 0.95, beta = 0.8). With 100 patients in each arm a difference of 10% in subscales of the Short Form 36 (SF-36) questionnaire and social functioning can be detected.
Discussion The LAFA-trial is a randomized controlled multicenter trial that will provide evidence on the merits of fast track perioperative care and laparoscopic colorectal surgery in patients having segmental colectomy for malignant disease. PMID:17134506

  7. Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans

    NASA Astrophysics Data System (ADS)

    Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.

    2008-03-01

Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are to be marked around the visible extent of nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criteria, with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were correctly segmented, although one was judged by the automated method not to meet the first selection criterion; of the remaining ones, eight (7.1%) were structurally too complex or extensively attached and 10 (8.8%) were considered not properly segmented after a simple visual inspection by a radiologist. Since the LIDC specifications, as aforementioned, instruct radiologists to include both solid and sub-solid parts, the automated method's core capability of segmenting solid tissues was augmented to also take into account the sub-solid parts of the nodule. We ranked the distances of the automated method estimates and the radiologist-based estimates from the median of the radiologist-based values. The automated method was in 76.6% of the cases closer to the median than at least one of the values derived from the manual markings, a sign of very good agreement with the radiologists' markings.
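Volume estimation from a binary segmentation reduces to counting voxels and scaling by voxel volume; a minimal sketch with a hypothetical mask and slice spacing (not LIDC data, and not the paper's algorithm):

```python
import numpy as np

def nodule_volume_mm3(mask, spacing_mm):
    """Volume of a binary segmentation: voxel count times voxel volume."""
    return float(mask.sum()) * float(np.prod(spacing_mm))

def equivalent_diameter_mm(volume_mm3):
    """Diameter of the sphere having the same volume."""
    return (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)

# Hypothetical mask on a 0.7 x 0.7 mm pixel grid with 1.25 mm slices:
mask = np.zeros((20, 20, 10), dtype=bool)
mask[5:15, 5:15, 2:8] = True  # 10 x 10 x 6 = 600 voxels
v = nodule_volume_mm3(mask, (0.7, 0.7, 1.25))
print(f"volume ~ {v:.1f} mm^3, d_eq ~ {equivalent_diameter_mm(v):.2f} mm")
```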

  8. Expedient range enhanced 3-D robot colour vision

    NASA Astrophysics Data System (ADS)

    Jarvis, R. A.

    1983-01-01

    Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.

  9. 7 CFR 51.3198 - Size classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...

  10. 7 CFR 51.3198 - Size classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...

  11. 7 CFR 51.3198 - Size classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...

  12. Voluminous lava flow from Axial Seamount's south rift constrains extension rate on northern Vance Segment

    NASA Astrophysics Data System (ADS)

    Le Saout, M.; Clague, D. A.; Paduan, J. B.

    2017-12-01

Axial Seamount is characterized by a robust magma supply resulting from the interaction between the Cobb hotspot and the Juan de Fuca Ridge. During the last two decades, magmatic activity was focused within the summit caldera and the upper and middle portions of the two rift zones, with eruptions in 1998, 2011, and 2015. However, the distal ends of both rift zones have experienced numerous eruptions in the past. The most voluminous flows are located near the extreme ends, greater than 40 kilometers from the caldera. Where Axial's South Rift Zone overlaps with the Vance Segment of the Juan de Fuca Ridge, the 2015 MBARI expedition mapped 16 km2 of the seafloor with our AUV, and collected 33 rocks and 33 sediment cores during two ROV dives. The data were used to confirm the boundaries of an extensive flow tentatively identified using modern ship-based bathymetry. This flow is 18 km wide and 6 km long, for a total surface area of 63 km2. The flow is modified by superficial (5 m deep) and deep (25 to 45 m deep) subsidence pits, with the deepest pits giving an indication of the minimum thickness of the flow. The maximum thickness of 100 m is measured at the margins of the flow. We thus estimate a volume between 2.5 and 6 km3, making this flow the most voluminous known on the global mid-ocean ridge system. The minimum volume is equivalent to the present volume of the summit caldera. Radiocarbon ages of foraminifera from the basal sections of sediment cores suggest that this flow is 1000 years old. This flow travelled east and partially filled the axial valley of the adjacent Vance Segment. Since emplacement, this part of the flow has experienced deformation by fissures and faults aligned with the trend of the Vance Segment. The horizontal extension across these features allows us to estimate a local deformation rate of 3 cm/yr of tectonic extension on the northern end of the Vance Segment during the last 1000 years.
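The volume bounds and extension figures quoted above follow from simple arithmetic; an illustrative order-of-magnitude check (the 40 m lower thickness bound is an assumption inferred from the 25-45 m pit depths):

```python
# Order-of-magnitude checks of the abstract's figures (illustrative only).
area_km2 = 63.0

# Thickness bounded below by the deepest subsidence pits (taken here
# as ~40 m) and above by the ~100 m thickness at the flow margins:
v_min_km3 = area_km2 * 40.0 / 1000.0
v_max_km3 = area_km2 * 100.0 / 1000.0
print(f"flow volume between ~{v_min_km3:.1f} and ~{v_max_km3:.1f} km^3")

# 3 cm/yr of extension sustained over ~1000 years accumulates ~30 m:
print(f"cumulative extension ~ {0.03 * 1000:.0f} m")
```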

  13. A shape prior-based MRF model for 3D masseter muscle segmentation

    NASA Astrophysics Data System (ADS)

    Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe

    2012-02-01

Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors, and adjacent anatomical structures having an intensity profile similar to the target structure. In this paper we propose a novel approach to segment the masseter muscle in CT datasets using graph-cut incorporating additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum, which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, e.g., the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and further provide a quantitative and qualitative comparison with other methods. We statistically show that adding an additional shape prior into both unary and pairwise potentials can increase the robustness of the proposed method on noisy datasets.
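The MAP-MRF formulation can be made concrete with a toy energy function combining unary terms with a Potts pairwise term; this is a sketch of the objective that graph-cut minimizes, with made-up potentials, not the paper's implementation (which folds the 3D shape prior into both terms):

```python
import numpy as np

def mrf_energy(labels, unary, w_pairwise):
    """Energy of a binary labelling under a simple MRF:
    E = sum_p U(p, l_p) + w * #{4-neighbour pairs with different labels}.
    MAP segmentation minimizes E; for this submodular (Potts) form the
    global minimum is obtainable by graph-cut."""
    h, w = labels.shape
    e = sum(unary[i, j, labels[i, j]] for i in range(h) for j in range(w))
    # Potts pairwise term over horizontal and vertical neighbour pairs:
    e += w_pairwise * np.count_nonzero(labels[:, 1:] != labels[:, :-1])
    e += w_pairwise * np.count_nonzero(labels[1:, :] != labels[:-1, :])
    return float(e)

unary = np.zeros((2, 2, 2))
unary[..., 1] = 1.0  # toy data term: labelling a pixel foreground costs 1
labels = np.array([[0, 0], [0, 1]])
print(mrf_energy(labels, unary, w_pairwise=0.5))  # 1 unary + 2 cut edges * 0.5
```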

  14. Layered motion segmentation and depth ordering by tracking edges.

    PubMed

    Smith, Paul; Drummond, Tom; Cipolla, Roberto

    2004-04-01

    This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The identification of the relative depth ordering of the different motion layers is also determined, as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.

  15. On the use of big-bang method to generate low-energy structures of atomic clusters modeled with pair potentials of different ranges.

    PubMed

    Marques, J M C; Pais, A A C C; Abreu, P E

    2012-02-05

The efficiency of the so-called big-bang method for the optimization of atomic clusters is analysed in detail for Morse pair potentials with different ranges; here, we have used Morse potentials with four different ranges, from long-ranged (ρ = 3) to short-ranged (ρ = 14) interactions. Specifically, we study the efficacy of the method in discovering low-energy structures, including the putative global minimum, as a function of the potential range and the cluster size. A new global minimum structure for the long-ranged (ρ = 3) Morse potential at the cluster size of n = 240 is reported. The present results are useful to assess the maximum cluster size for each type of interaction at which the global minimum can be discovered with a limited number of big-bang trials. Copyright © 2011 Wiley Periodicals, Inc.
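The Morse pair potential in reduced units, with ρ controlling the interaction range, can be sketched directly:

```python
import math

def morse(r, rho):
    """Morse pair potential in reduced units:
    V(r) = exp(rho*(1 - r)) * (exp(rho*(1 - r)) - 2),
    with its minimum V = -1 at r = 1; rho sets the interaction range
    (rho = 3 long-ranged, rho = 14 short-ranged)."""
    x = math.exp(rho * (1.0 - r))
    return x * (x - 2.0)

# The short-ranged potential decays far faster beyond equilibrium,
# which is one reason its energy landscape is harder to search:
for rho in (3, 14):
    print(f"rho = {rho:>2}: V(1) = {morse(1.0, rho):.1f}, "
          f"V(2) = {morse(2.0, rho):.2e}")
```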

  16. Impact of minimum catch size on the population viability of Strombus gigas (Mesogastropoda: Strombidae) in Quintana Roo, Mexico.

    PubMed

    Peel, Joanne R; Mandujano, María del Carmen

    2014-12-01

    The queen conch Strombus gigas represents one of the most important fishery resources of the Caribbean, but heavy fishing pressure has led to the depletion of stocks throughout the region, causing the inclusion of this species in CITES Appendix II and the IUCN Red List. In Mexico, the queen conch is managed through a minimum fishing size of 200 mm shell length and a fishing quota which usually represents 50% of the adult biomass. The objectives of this study were to determine the intrinsic population growth rate of the queen conch population of Xel-Ha, Quintana Roo, Mexico, and to assess the effects of a regulated fishing impact, simulating the extraction of 50% adult biomass on the population density. We used three different minimum size criteria to demonstrate the effects of minimum catch size on the population density and discuss biological implications. Demographic data were obtained through capture-mark-recapture sampling, collecting all animals encountered by three divers during three hours at four different sampling sites of the Xel-Ha inlet. The conch population was sampled each month between 2005 and 2006, and bimonthly between 2006 and 2011, tagging a total of 8,292 animals. Shell length and lip thickness were determined for each individual. The average shell length for conch with formed lip in Xel-Ha was 209.39 ± 14.18 mm and the median was 210 mm. Half of the sampled conch with lip ranged between 200 mm and 219 mm shell length. Assuming that the presence of the lip is an indicator for sexual maturity, it can be concluded that many animals may form their lip at shell lengths greater than 200 mm and ought to be considered immature. Estimation of relative adult abundance and densities varied greatly depending on the criteria employed for adult classification. 
When using a minimum fishing size of 200 mm shell length, between 26.2% and 54.8% of the population qualified as adults, which represented a simulated fishing impact of almost one third of the population. When conch extraction was simulated using a classification criterion based on lip thickness, it had a much smaller impact on the population density. We concluded that the best management strategy for S. gigas is a minimum fishing size based on lip thickness, since it has a lower impact on the population density, and given that selective fishing pressure based on size may lead to the appearance of small adult individuals with reduced fecundity. Furthermore, based on the reproductive biology and the results of the simulated fishing, we suggest a minimum lip thickness of ≥ 15 mm, which ensures the protection of reproductive stages and reduces the risk of overfishing leading to non-viable density reductions.
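
    The effect of the adult-classification criterion on a simulated extraction can be illustrated with a toy calculation. All numbers below (population size, the shell-length and lip-thickness distributions, and their linear relationship) are made-up assumptions, not the Xel-Ha data; only the two decision rules (200 mm shell length versus ≥ 15 mm lip thickness) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: shell length (mm) and lip thickness (mm);
# the linear relation and noise level are illustrative only.
shell = rng.normal(195, 25, n)
lip = np.clip((shell - 180) * 0.4 + rng.normal(0, 3, n), 0, None)

def impact_of_quota(adult_mask, quota=0.5):
    """Fraction of the WHOLE population removed when a quota
    (50% of the animals classified as adults, standing in for
    50% of adult biomass) is extracted."""
    return quota * adult_mask.mean()

by_length = impact_of_quota(shell >= 200)  # 200 mm shell-length rule
by_lip = impact_of_quota(lip >= 15)        # >= 15 mm lip-thickness rule
```

    Because the lip-thickness rule classifies fewer (larger, later-maturing) animals as adults, the same 50% quota removes a smaller fraction of the whole population, which is the direction of the effect reported in the study.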

  17. Estimation of representative elementary volume for DNAPL saturation and DNAPL-water interfacial areas in 2D heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun

    2017-06-01

    Representative elementary volume (REV) is important to determine properties of porous media and those involved in the migration of contaminants, especially dense nonaqueous phase liquids (DNAPLs), in the subsurface environment. In this study, an experiment of long-term migration of the commonly used DNAPL, perchloroethylene (PCE), is performed in a two-dimensional (2D) sandbox where several system variables including porosity, PCE saturation (Soil) and PCE-water interfacial area (AOW) are accurately quantified by light transmission techniques over the entire PCE migration process. Moreover, the REVs for these system variables are estimated by a criterion of relative gradient error (εgi), and results indicate that the frequency of minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0-8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both minimum Soil-REV and minimum AOW-REV sizes change their shapes from irregular and random to regular and smooth. When the experiment enters the redistribution process, the cumulative frequency of minimum Soil-REV size reveals a linear positive correlation, while the frequency of minimum AOW-REV size tends to a Gaussian distribution in the range of 2.0-7.0 mm and shows a peak at 13.0-14.0 mm. Undoubtedly, this study will facilitate the quantification of REVs for materials and fluid properties in a rapid, handy and economical manner, which helps enhance our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.
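
    The relative-gradient-error idea for picking an REV size can be sketched on a synthetic binary medium. This is a hedged illustration: the medium is uncorrelated random noise rather than a sandbox image, and the 1% tolerance and the exact form of the error (relative change per unit window-size increase) are assumptions standing in for the paper's εgi criterion.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic 2D binary medium: 1 = pore, 0 = grain, true porosity 0.4
medium = (rng.random((400, 400)) < 0.4).astype(float)

def porosity(win):
    """Porosity measured over a centered square window of side `win`."""
    c, h = 200, win // 2
    return medium[c - h:c + h, c - h:c + h].mean()

sizes = np.arange(10, 400, 10)
phi = np.array([porosity(s) for s in sizes])

# Relative gradient error between successive window sizes, in percent
eps = np.abs(np.diff(phi) / (phi[1:] * np.diff(sizes))) * 100
rev = sizes[1:][np.argmax(eps < 1.0)]  # first size where eps drops below 1%
```

    On a real image the property curve would carry spatial correlation, so the window size at which the error settles (the REV) is larger and varies from point to point, which is what the frequency distributions in the abstract describe.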

  18. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

    Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, a nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.

  19. Evaluation of AQUI-S(TM) (efficacy and minimum toxic concentration) as a fish anaesthetic/sedative for public aquaculture in the United States

    USGS Publications Warehouse

    Stehly, G.R.; Gingerich, W.H.

    1999-01-01

    A preliminary evaluation of efficacy and minimum toxic concentration of AQUI-S(TM), a fish anaesthetic/sedative, was determined in two size classes of six species of fish important to US public aquaculture (bluegill, channel catfish, lake trout, rainbow trout, walleye and yellow perch). In addition, efficacy and minimum toxic concentration were determined in juvenile-young adult (fish aged 1 year or older) rainbow trout acclimated to water at 7 °C, 12 °C and 17 °C. Testing concentrations were based on determinations made with range-finding studies for both efficacy and minimum toxic concentration. Most of the tested juvenile-young adult fish species were induced in 3 min or less at a nominal AQUI-S(TM) concentration of 20 mg L-1. In juvenile-young adult fish, the minimum toxic concentration was at least 2.5 times the selected efficacious concentration. Three out of five species of fry-fingerlings (1.25-12.5 cm in length and < 1 year old) were induced in ≤ 4.1 min at a nominal concentration of 20 mg L-1 AQUI-S(TM), with the other two species requiring nominal concentrations of 25 and 35 mg L-1 for similar times of induction. Recovery times were ≤ 7.3 min for all species in the two size classes. In fry-fingerlings, the minimum toxic concentration was at least 1.4 times the selected efficacious concentration. There appeared to be little relationship between size of fish and concentrations or times to induction, recovery times and minimum toxic concentration. The times required for induction and for recovery were increased in rainbow trout as the acclimation temperature was reduced.

  20. What Is the Internet, Who Is Running It and How Is It Used?

    ERIC Educational Resources Information Center

    Eschbach, Darel

    The Internet, for the purposes of this discussion, refers to the network that has the National Science Foundation Network (NSFNET) as its backbone. For this paper, internet is the larger connection of networks that provides a minimum basic connection for electronic mail. The network is made up of many segments structured in a multitiered hierarchy…

  1. Transition path time distributions for Lévy flights

    NASA Astrophysics Data System (ADS)

    Janakiraman, Deepika

    2018-07-01

    This paper presents a study of transition path time distributions for Lévy noise-induced barrier crossing. Transition paths are short segments of the reactive trajectories that span the barrier region of the potential without spilling into the reactant/product wells. The time taken to traverse this segment is referred to as the transition path time. Since the transition path is devoid of excursions in the minimum, the corresponding time gives the exclusive barrier crossing time. This work explores the distribution of transition path times for superdiffusive barrier crossing analytically. This is made possible by approximating the barrier by an inverted parabola. Using this approximation, the distributions are evaluated in both the over- and under-damped limits of friction. The short-time behaviour of the distributions provides analytical evidence for single-step transition events—a feature of Lévy barrier crossing observed in prior simulation studies. The average transition path time is calculated as a function of the Lévy index (α), and the optimal value of α leading to the minimum average transition path time is discussed in both limits of friction. Langevin dynamics simulations corroborating the analytical results are also presented.
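
    The parabolic-barrier setup can be mimicked numerically in the simplest Gaussian case (the α = 2 member of the Lévy family), in the overdamped limit. The code below is a rough sketch under assumed units (barrier curvature k, boundaries ±a, noise strength D): it samples transition path times as the durations of launches from −a that reach +a before falling back, a discretized caricature rather than the paper's analytic treatment.

```python
import numpy as np

rng = np.random.default_rng(0)
k, a, dt, D = 2.0, 1.0, 1e-3, 1.0  # barrier U(x) = -k * x**2 / 2

def transition_path_time():
    """Launch trajectories at x = -a; keep only those that exit the
    barrier region [-a, a] through +a, and return that path's duration."""
    while True:
        x, t = -a, 0.0
        while -a <= x <= a:
            # overdamped Euler-Maruyama step; drift k*x pushes away from 0
            x += k * x * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
            t += dt
        if x > a:
            return t

times = np.array([transition_path_time() for _ in range(100)])
mean_tpt = times.mean()
```

    Replacing the Gaussian increments with α-stable ones (α < 2) would give the superdiffusive crossings the paper analyzes, including the single-step transitions visible in the short-time tail.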

  2. Application-Controlled Demand Paging for Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)

    1997-01-01

    In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage formats (in particular 3-dimensional data stored in sub-cubes), and 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
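
    The paged-segment idea (touching only the blocks that back the sub-cube you actually read) can be sketched with NumPy's file-backed arrays. This is a generic illustration of demand paging via `numpy.memmap`, not the authors' system; the file name and array shape are arbitrary.

```python
import numpy as np, tempfile, os

# Create a file-backed 3-D array (a stand-in for a large CFD field);
# the shape is tiny here, but memmap only pages in what is touched.
path = os.path.join(tempfile.mkdtemp(), "field.dat")
shape = (64, 64, 64)
field = np.memmap(path, dtype=np.float32, mode="w+", shape=shape)
field[:] = 0.0
field[10:20, 10:20, 10:20] = 1.0  # a small "feature" inside the volume
field.flush()

# Re-open read-only and load just one sub-cube: the OS demand-pages
# only the blocks backing this slice, not the whole file.
ro = np.memmap(path, dtype=np.float32, mode="r", shape=shape)
subcube = np.array(ro[8:24, 8:24, 8:24])  # explicit copy of the region
```

    Storing the volume in sub-cube order on disk, as the paper suggests, makes such a slice contiguous in the file and so further reduces the number of pages faulted in.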

  3. Computational analysis of particle reinforced viscoelastic polymer nanocomposites - statistical study of representative volume element

    NASA Astrophysics Data System (ADS)

    Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine

    2018-05-01

    The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
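
    The margin-of-error criterion for choosing the minimum number of realizations can be written down directly: with sample standard deviation s across realizations, the 95% margin of error is z·s/√n, and requiring it to stay within a relative tolerance ε of the mean gives n ≥ (z·s/(ε·mean))². The numbers in the example are illustrative assumptions, not values from the paper.

```python
import math

def min_realizations(sample_std, mean, rel_error=0.01, z=1.96):
    """Smallest number of RVE realizations n such that the 95%
    margin of error z*s/sqrt(n) is within rel_error of the mean."""
    target = rel_error * abs(mean)
    return math.ceil((z * sample_std / target) ** 2)

# Example: a modulus with mean 2.5 GPa and std 0.15 GPa across
# small-RVE realizations, at 1% desired precision (illustrative)
n = min_realizations(0.15, 2.5, rel_error=0.01)
```

    Averaging n such small realizations achieves the target precision at a fraction of the cost of one large RVE, which is the trade-off the study quantifies.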

  4. Highly Segmented Thermal Barrier Coatings Deposited by Suspension Plasma Spray: Effects of Spray Process on Microstructure

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolong; Honda, Hiroshi; Kuroda, Seiji; Araki, Hiroshi; Murakami, Hideyuki; Watanabe, Makoto; Sakka, Yoshio

    2016-12-01

    Effects of the ceramic powder size used for suspension as well as several processing parameters in suspension plasma spraying of YSZ were investigated experimentally, aiming to fabricate highly segmented microstructures for thermal barrier coating (TBC) applications. Particle image velocimetry (PIV) was used to observe the atomization process and the velocity distribution of atomized droplets and ceramic particles travelling toward the substrates. The tested parameters included the secondary plasma gas (He versus H2), suspension injection flow rate, and substrate surface roughness. Results indicated that a plasma jet with a relatively higher content of He or H2 as the secondary plasma gas was critical to produce highly segmented YSZ TBCs with a crack density up to 12 cracks/mm. The optimized suspension flow rate played an important role to realize coatings with a reduced porosity level and improved adhesion. An increased powder size and higher operation power level were beneficial for the formation of highly segmented coatings onto substrates with a wider range of surface roughness.

  5. 'Postconditioning' the human heart: multiple balloon inflations during primary angioplasty may confer cardioprotection.

    PubMed

    Darling, Chad E; Solari, Patrick B; Smith, Craig S; Furman, Mark I; Przyklenk, Karin

    2007-05-01

    Growing evidence from experimental models suggests that relief of myocardial ischemia in a stuttering manner (i.e., 'postconditioning' [PostC] with brief cycles of reperfusion-reocclusion) limits infarct size. However, the potential clinical efficacy of PostC has, to date, been largely unexplored. Using a retrospective study design, our aim was to test the hypothesis that creatine kinase release (CK: clinical surrogate of infarct size) would be attenuated in ST-segment elevation myocardial infarction (STEMI) patients requiring multiple balloon inflations-deflations during primary angioplasty versus STEMI patients who received minimal balloon inflations and/or direct stenting. To investigate this concept, we reviewed the records of all STEMI patients with single vessel occlusion who presented to our institution from November 2004 - April 2006 for primary angioplasty. Exclusion criteria were: previous MI, cardiogenic shock, patients resuscitated from cardiac arrest, or pre-infarct angina. Patients were prospectively divided into two subsets: those receiving 1-3 balloon inflations (considered the minimum range to achieve patency and stent placement) versus those in whom 4 or more inflations were applied. Peak CK release was significantly lower in patients requiring ≥4 versus 1-3 inflations (1655 versus 2272 IU/L; p<0.05), an outcome consistent with the concept that relief of sustained ischemia in a stuttered manner (analogous to postconditioning) may evoke cardioprotection in the clinical setting.

  6. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper, the derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
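
    For intuition about the quantity being tracked, the minimum distance of a small binary code can be computed by brute force from its parity-check matrix. This toy check on the (7,4) Hamming code only illustrates the definition; it says nothing about protograph ensembles or the linear-growth analysis, which require the enumerator machinery of the paper.

```python
import itertools
import numpy as np

def minimum_distance(H):
    """Brute-force minimum Hamming distance of the binary code with
    parity-check matrix H (feasible only for tiny block lengths)."""
    n = H.shape[1]
    best = n
    for bits in itertools.product((0, 1), repeat=n):
        w = sum(bits)
        if w == 0:
            continue  # skip the all-zero codeword
        if not (H @ np.array(bits) % 2).any():  # zero syndrome = codeword
            best = min(best, w)
    return best

# (7,4) Hamming code as a small stand-in example
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
d = minimum_distance(H)
```

    Ensemble weight enumerators answer the scaled-up version of this question: how many codewords of each weight a random member of the ensemble has on average, and whether the smallest nonzero weight grows linearly with block length.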

  7. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm2). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  8. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers

    PubMed Central

    Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang

    2017-01-01

    The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, the resonance disorders occur at the finals and the voiced initials, while the articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as a preprocessing step for cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, the syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified into those with "quasi-unvoiced" initials and those with "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all syllables is 91.69%. For the control samples, P30 for all syllables is 91.24%. 
PMID:28926572

  9. Automatic graph-cut based segmentation of bones from knee magnetic resonance images for osteoarthritis research.

    PubMed

    Ababneh, Sufyan Y; Prescott, Jeff W; Gurcan, Metin N

    2011-08-01

    In this paper, a new, fully automated, content-based system is proposed for knee bone segmentation from magnetic resonance images (MRI). The purpose of the bone segmentation is to support the discovery and characterization of imaging biomarkers for the incidence and progression of osteoarthritis, a debilitating joint disease, which affects a large portion of the aging population. The segmentation algorithm includes a novel content-based, two-pass disjoint block discovery mechanism, which is designed to support automation, segmentation initialization, and post-processing. The block discovery is achieved by classifying the image content to bone and background blocks according to their similarity to the categories in the training data collected from typical bone structures. The classified blocks are then used to design an efficient graph-cut based segmentation algorithm. This algorithm requires constructing a graph using image pixel data followed by applying a maximum-flow algorithm which generates a minimum graph-cut that corresponds to an initial image segmentation. Content-based refinements and morphological operations are then applied to obtain the final segmentation. The proposed segmentation technique does not require any user interaction and can distinguish between bone and highly similar adjacent structures, such as fat tissues with high accuracy. The performance of the proposed system is evaluated by testing it on 376 MR images from the Osteoarthritis Initiative (OAI) database. This database included a selection of single images containing the femur and tibia from 200 subjects with varying levels of osteoarthritis severity. Additionally, a full three-dimensional segmentation of the bones from ten subjects with 14 slices each, and synthetic images with background having intensity and spatial characteristics similar to those of bone are used to assess the robustness and consistency of the developed algorithm. 
The results show an automatic bone detection rate of 0.99 and an average segmentation accuracy of 0.95 using the Dice similarity index. Copyright © 2011 Elsevier B.V. All rights reserved.
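
    The core of the graph-cut step (build a graph with terminal weights encoding bone/background affinity plus neighbor smoothness, then take a minimum s-t cut via maximum flow) can be sketched on a toy 1-D "image". This is a from-scratch Edmonds-Karp illustration with made-up weights, not the paper's content-based block discovery or its production max-flow solver.

```python
from collections import deque, defaultdict

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow; returns the source side of a minimum cut."""
    flow = defaultdict(int)

    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                break
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    while True:
        parent = bfs()
        if t not in parent:
            return set(parent)  # nodes still reachable = source side of cut
        path, v = [], t
        while v != s:  # walk the augmenting path back to the source
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[(u, v)] for u, v in path)
        for u, v in path:
            flow[(u, v)] += aug
            flow[(v, u)] -= aug  # opens residual capacity on the reverse arc

# Toy 1-D image: bright pixels lean to "bone" (source), dark to background.
pix = [0.9, 0.8, 0.2, 0.1]
cap = defaultdict(dict)
for i, p in enumerate(pix):
    cap["s"][i] = round(20 * p)        # terminal weight: foreground affinity
    cap[i]["t"] = round(20 * (1 - p))  # terminal weight: background affinity
for i in range(len(pix) - 1):          # smoothness between neighbors
    cap[i][i + 1] = cap[i + 1][i] = 5
for u in list(cap):                    # make sure reverse arcs exist (cap 0)
    for v in list(cap[u]):
        cap[v].setdefault(u, 0)

src_side = max_flow_min_cut(cap, "s", "t")
labels = [1 if i in src_side else 0 for i in range(len(pix))]
```

    The minimum cut leaves the two bright pixels attached to the source, i.e. labeled as bone; in the paper the terminal weights come from the block-classification step and the refinement happens afterwards.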

  10. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.

    PubMed

    He, Ling; Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang

    2017-01-01

    The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, the resonance disorders occur at the finals and the voiced initials, while the articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as a preprocessing step for cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, the syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified into those with "quasi-unvoiced" initials and those with "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all syllables is 91.69%. For the control samples, P30 for all syllables is 91.24%.
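
    The kind of acoustic cues such a segmentation method can rely on (short-time energy for locating the initial/final boundary, zero-crossing rate for separating quasi-unvoiced from quasi-voiced initials) can be sketched on a synthetic "syllable". The signal, frame length, and thresholds below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic syllable: unvoiced-like noise burst (initial) followed by
# a voiced-like 150 Hz tone (final); purely illustrative.
initial = 0.1 * rng.standard_normal(len(t) // 4)
final = 0.8 * np.sin(2 * np.pi * 150 * t[: 3 * len(t) // 4])
x = np.concatenate([initial, final])

def frame_features(x, win=200):
    """Per-frame short-time energy and zero-crossing rate."""
    nf = len(x) // win
    frames = x[: nf * win].reshape(nf, win)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return energy, zcr

energy, zcr = frame_features(x)
# Boundary guess: first frame where energy jumps above a fixed threshold
boundary_frame = int(np.argmax(energy > 0.1))
```

    High zero-crossing rate with low energy marks the noise-like (quasi-unvoiced) initial, while the periodic final carries high energy and low zero-crossing rate; real cleft palate speech needs the refinement step described in the abstract because these contrasts are much weaker.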

  11. Local orientational mobility in regular hyperbranched polymers.

    PubMed

    Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas

    2016-07-01

    We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.

  12. Simulating the Structural Response of a Preloaded Bolted Joint

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.

    2008-01-01

    The present paper describes the structural analyses performed on a preloaded bolted-joint configuration. The joint modeled comprised two L-shaped structures connected together using a single bolt. Each L-shaped structure involved a vertical flat segment (or shell wall) welded to a horizontal segment (or flange). Parametric studies were performed using elasto-plastic, large-deformation nonlinear finite element analyses to determine the influence of several factors on the bolted-joint response. The factors considered included bolt preload, washer-bearing-surface size, edge boundary conditions, joint segment length, and loading history. Joint response is reported in terms of displacements, gap opening, and surface strains. Most of the factors studied were determined to have minimal effect on the bolted-joint response; however, the washer-bearing-surface size affected the response significantly.

  13. What Controls Subduction Earthquake Size and Occurrence?

    NASA Astrophysics Data System (ADS)

    Ruff, L. J.

    2008-12-01

    There is a long history of observational studies on the size and recurrence intervals of the large underthrusting earthquakes in subduction zones. In parallel with this documentation of the variability in both recurrence times and earthquake sizes -- both within and amongst subduction zones -- there have been numerous suggestions for what controls size and occurrence. In addition to the intrinsic scientific interest in these issues, there are direct applications to hazards mitigation. In this overview presentation, I review past progress, consider current paradigms, and look toward future studies that offer some resolution of long-standing questions. Given the definition of seismic moment, earthquake size is the product of overall static stress drop, down-dip fault width, and along-strike fault length. The long-standing consensus viewpoint is that for the largest earthquakes in a subduction zone: stress-drop is constant, fault width is the down-dip extent of the seismogenic portion of the plate boundary, but that along-strike fault length can vary from one large earthquake to the next. While there may be semi-permanent segments along a subduction zone, successive large earthquakes can rupture different combinations of segments. Many investigations emphasize the role of asperities within the segments, rather than segment edges. Thus, the question of earthquake size is translated into: "What controls the along-strike segmentation, and what determines which segments will rupture in a particular earthquake cycle?" There is no consensus response to these questions. Over the years, the suggestions for segmentation control include physical features in the subducted plate, physical features in the over-lying plate, and more obscure -- and possibly ever-changing -- properties of the plate interface such as the hydrologic conditions. 
It seems that the full global answer requires either some unforeseen breakthrough, or the long-term hard work of falsifying all candidate hypotheses except one. This falsification process requires both concentrated multidisciplinary efforts and patience. Large earthquake recurrence intervals in the same subduction zone segment display a significant, and therefore unfortunate, variability. Over the years, many of us have devised simple models to explain this variability. Of course, there are also more complicated explanations with many additional model parameters. While there has been important observational progress as both historical and paleo-seismological studies continue to add more data pairs of fault length and recurrence intervals, there has been a frustrating lack of progress in elimination of candidate models or processes that explain recurrence time variability. Some of the simple models for recurrence times offer a probabilistic or even deterministic prediction of future recurrence times - and have been used for hazards evaluation. It is important to know if these models are correct. Since we do not have the patience to wait for a strict statistical test, we must find other ways to test these ideas. For example, some of the simple deterministic models for along-strike segment interaction make predictions for variation in tectonic stress state that can be tested during the inter-seismic period. We have seen how some observational discoveries in the past decade (e.g., the episodic creep events down-dip of the seismogenic zone) give us additional insight into the physical processes in subduction zones; perhaps multi-disciplinary studies of subduction zones will discover a new way to reliably infer large-scale shear stresses on the plate interface?

  14. SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5ml, 1ml, 6ml, 12ml, 16ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contain the whole sphere and do not contain any other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed to varying degrees. Compared with the advanced methods, thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods produced more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and segmented the PET images more accurately. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. 
Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
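    The evaluation metrics named above (DSI, VE) can be written out directly. The sketch below uses hypothetical voxel sets and one common convention for volume error, |Vseg − Vtrue|/Vtrue; the authors' exact definitions may differ.

```python
def dice_similarity(seg, truth):
    """Dice similarity index between two voxel sets: 2|A & B| / (|A| + |B|)."""
    seg, truth = set(seg), set(truth)
    return 2.0 * len(seg & truth) / (len(seg) + len(truth))

def volume_error(seg, truth):
    """Relative volume error |Vseg - Vtrue| / Vtrue (one common convention)."""
    return abs(len(set(seg)) - len(set(truth))) / len(set(truth))

truth = {(x, y) for x in range(4) for y in range(4)}         # 16 "voxels"
seg = {(x, y) for x in range(4) for y in range(4) if x < 3}  # 12 voxels, all inside
d = dice_similarity(seg, truth)   # 2*12/(12+16) ≈ 0.857
v = volume_error(seg, truth)      # 4/16 = 0.25
```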

  15. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.

    PubMed

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2012-05-01

    Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task which engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. For this goal, various techniques have been applied; however, Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. An MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution, with a heavy computational burden. For this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, comprising an ant colony optimization (ACO) and a Gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find better paths using neighborhood information, so this interaction causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
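    The energy-minimization view of MRF segmentation described above can be illustrated with a toy Potts-style energy on a 1D lattice. This sketches only the objective being minimized, not the paper's ACO-Gossiping optimizer; the class means and β below are made up.

```python
def potts_energy(labels, intensities, means, beta):
    """Energy of a label field: a data term (squared misfit of each voxel
    intensity to its class mean) plus a Potts smoothness term that
    penalizes unlike neighboring labels."""
    data = sum((intensities[i] - means[l]) ** 2 for i, l in enumerate(labels))
    smooth = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return data + beta * smooth

intensities = [0.1, 0.2, 0.9, 0.8]   # toy 1D "image"
means = {0: 0.0, 1: 1.0}             # class means: background / tissue
e_good = potts_energy([0, 0, 1, 1], intensities, means, beta=0.5)
e_noisy = potts_energy([0, 1, 1, 1], intensities, means, beta=0.5)
print(e_good < e_noisy)  # True: the field matching the data has lower energy
```

    A segmentation algorithm (SA, ICM, or the paper's social-algorithm hybrid) searches the space of label fields for a low-energy configuration of exactly this kind of objective.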

  16. A novel automatic segmentation workflow of axial breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra

    2018-04-01

    In this paper we propose a novel, fully automatic breast tissue segmentation process which is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step is the detection of breast boundaries, based on image content analysis and the Moore-Neighbour tracing algorithm. As a processing step, Otsu thresholding and a neighbors algorithm are applied; then the external area of the breast is removed to get an approximate breast region. The second step is the delineation of the chest wall, which is considered as the lowest-cost path linking three key points. These points are located automatically on the breast: the left and right boundary points, and the middle upper point placed at the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms and challenging cases. In fact, the mean overlap between manual segmentation and automatic segmentation through our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods: the segmentation of 120 slices with our method is achieved in 20.57±5.2 s.
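    The chest-wall step reduces to a standard shortest-path problem, so a textbook Dijkstra sketch conveys the idea; the toy graph and costs below are illustrative stand-ins for the paper's image-derived cost function.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by Dijkstra's algorithm.
    graph: {node: [(neighbor, cost), ...]} with non-negative costs."""
    pq = [(0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy graph standing in for pixel nodes between the three key points
graph = {"L": [("S", 2), ("A", 5)], "S": [("A", 1), ("R", 4)], "A": [("R", 1)]}
cost, path = dijkstra(graph, "L", "R")
print(cost, path)  # 4 ['L', 'S', 'A', 'R']
```

    In the paper's setting the nodes would be pixels, the edge weights an image-based cost, and the endpoints the automatically located key points.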

  17. Design and Fabrication of High Gain Multi-element Multi-segment Quarter-sector Cylindrical Dielectric Resonator Antenna

    NASA Astrophysics Data System (ADS)

    Ranjan, Pinku; Gangwar, Ravi Kumar

    2017-12-01

    A novel design and analysis of a quarter-sector cylindrical dielectric resonator antenna (q-CDRA) with a multi-element, multi-segment (MEMS) approach is presented. The MEMS q-CDRA is designed by splitting four identical quarters from a solid cylinder; a multi-segmentation approach is then utilized in the q-CDRA design. The proposed antenna is designed for enhanced bandwidth as well as high gain. For bandwidth enhancement, the multi-segmentation method guides the selection of the dielectric constants of the materials. The performance of the proposed MEMS q-CDRA is demonstrated with design guidelines for the MEMS approach. To validate the antenna performance, a three-segment q-CDRA was fabricated and analyzed experimentally. The simulated results are in good agreement with the measured ones. The MEMS q-CDRA has a wide impedance bandwidth (|S11| ≤ -10 dB) of 133.8% with a monopole-like radiation pattern. It operates in the TM01δ mode with a measured peak gain of 6.65 dBi and a minimum gain of 4.5 dBi over the entire operating frequency band (5.1-13.7 GHz). The proposed MEMS q-CDRA may find appropriate applications in the WiMAX and WLAN bands.

  18. What is the best ST-segment recovery parameter to predict clinical outcome and myocardial infarct size? Amplitude, speed, and completeness of ST-segment recovery after primary percutaneous coronary intervention for ST-segment elevation myocardial infarction.

    PubMed

    Kuijt, Wichert J; Green, Cindy L; Verouden, Niels J W; Haeck, Joost D E; Tzivoni, Dan; Koch, Karel T; Stone, Gregg W; Lansky, Alexandra J; Broderick, Samuel; Tijssen, Jan G P; de Winter, Robbert J; Roe, Matthew T; Krucoff, Mitchell W

    ST-segment recovery (STR) is a strong mechanistic correlate of infarct size (IS) and outcome in ST-segment elevation myocardial infarction (STEMI). Characterizing measures of speed, amplitude, and completeness of STR may extend the use of this noninvasive biomarker. Core-laboratory continuous 24-h 12-lead Holter ECG monitoring, IS by single-photon emission computed tomography (SPECT), and 30-day mortality from 2 clinical trials of primary percutaneous coronary intervention in STEMI were combined. Multiple ST measures (STR at last contrast injection (LC) measured from peak value; STR at 30, 60, 90, 120, and 240 min; residual deviation; time to steady ST recovery; and the 3-h area under the time-trend curve [ST-AUC] from LC) were univariably correlated with IS and predictive of mortality. After multivariable adjustment for ST parameters and GRACE risk factors, STR at 240 min remained an additive predictor of mortality. Early STR, residual deviation, and ST-AUC remained associated with IS. Multiple parameters that quantify the speed, amplitude, and completeness of STR predict mortality and correlate with IS. Copyright © 2017. Published by Elsevier Inc.

  19. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    PubMed

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, and large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
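    The F-score used for assessment is the harmonic mean of precision and recall over segmented pixels. A minimal sketch, with made-up pixel counts rather than the paper's data:

```python
def f_score(tp, fp, fn):
    """F-score (harmonic mean of precision and recall) for
    pixel-level defect segmentation assessment."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 80 crack pixels correctly segmented, 20 false alarms, 20 missed
f = f_score(tp=80, fp=20, fn=20)
print(f)  # ~0.8
```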

  20. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For the detection of defects, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
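    The adaptive idea can be sketched as follows: choose the thresholding percentile from how many gradient pixels exceed a fixed gray level. The gray-level cutoff, fraction, and percentile values below are illustrative assumptions, since the paper's exact parameters are not given here.

```python
def adaptive_percentile_threshold(grad, level=128, small_frac=0.02,
                                  high_pct=99.0, low_pct=95.0):
    """Pick the percentile for thresholding a gradient image adaptively:
    few strong-gradient pixels -> high percentile (small defect),
    many strong-gradient pixels -> lower percentile (large defect).
    All cutoff values here are illustrative assumptions."""
    strong = sum(1 for g in grad if g > level)
    pct = high_pct if strong / len(grad) < small_frac else low_pct
    s = sorted(grad)
    thr = s[min(len(s) - 1, int(round(pct / 100.0 * (len(s) - 1))))]
    return thr, pct

# 1000 "pixels": flat background plus 10 strong-gradient defect pixels
grad = [10] * 990 + [200] * 10
thr, pct = adaptive_percentile_threshold(grad)
print(pct)  # 99.0 -> a small defect keeps a tight (high) percentile
```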

  1. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  2. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images which mix textual, graphical, and pictorial contents. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics like speed of classification, precision and recall rate. Block-based classification approaches normally divide the compound images into fixed-size, non-overlapping blocks. Then a frequency transform, the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is applied over each block. Mean and standard deviation are computed for each 8 × 8 block and are used as the feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics like precision and recall rate. Compound images with smooth backgrounds and complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that the DWT-based segmentation provides an improvement in recall rate and precision rate of approximately 2.3% over DCT-based segmentation, with an increase in block classification time, for both smooth and complex background images.
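    The per-block feature computation is straightforward to sketch. The code below computes only the 8 × 8 block means and standard deviations; the DCT/DWT step and the actual classifier are omitted, and the flat test image is made up.

```python
import statistics

def block_features(pixels, block=8):
    """Split a square grayscale image (list of rows) into non-overlapping
    block x block tiles and return (mean, stdev) features per tile."""
    n = len(pixels)
    feats = []
    for by in range(0, n, block):
        for bx in range(0, n, block):
            vals = [pixels[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            feats.append((statistics.mean(vals), statistics.pstdev(vals)))
    return feats

# 8x8 toy image: flat background -> near-zero stdev ("picture/background")
flat = [[100] * 8 for _ in range(8)]
feats = block_features(flat)
print(feats)  # [(100, 0.0)]
```

    A text/graphics block would show high within-block standard deviation (sharp edges), while a smooth picture or background block shows a low one, which is what makes these two features useful for classification.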

  3. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    PubMed

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.

  4. 46 CFR 76.10-90 - Installations contracted for prior to May 26, 1965.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... | Not over | Minimum number of pumps | Minimum hose and hydrant size, inches | Nozzle orifice size, inches | Length of hose, feet
    100 | 4,000 | 2 | ¹ 1 1/2 | ¹ 5/8 | ¹ 50
    4,000 | | 3 | ¹ 1 1/2 | ¹ 5/8 | ¹ 50
    ¹ May use 50 feet of 2 1/2-inch hose with 7/8-inch nozzles for exterior stations. May use 75 feet of 1 1/2-inch hose with 5/8-inch...

  5. Scaling Relations for the Thermal Structure of Segmented Oceanic Transform Faults

    NASA Astrophysics Data System (ADS)

    Wolfson-Schwehr, M.; Boettcher, M. S.; Behn, M. D.

    2015-12-01

    Mid-ocean ridge-transform faults (RTFs) are a natural laboratory for studying strike-slip earthquake behavior due to their relatively simple geometry, well-constrained slip rates, and quasi-periodic seismic cycles. However, deficiencies in our understanding of the limited size of the largest RTF earthquakes are due, in part, to not considering the effect of short intra-transform spreading centers (ITSCs) on fault thermal structure. We use COMSOL Multiphysics to run a series of 3D finite element simulations of segmented RTFs with a visco-plastic rheology. The models test a range of RTF segment lengths (L = 10-150 km), ITSC offset lengths (O = 1-30 km), and spreading rates (V = 2-14 cm/yr). The lithosphere and upper mantle are approximated as steady-state, incompressible flow. Coulomb failure incorporates brittle processes in the lithosphere, and a temperature-dependent flow law for dislocation creep of olivine activates ductile deformation in the mantle. ITSC offsets as small as 2 km affect the thermal structure underlying many segmented RTFs, reducing the area above the 600°C isotherm, A600, and thus the size of the largest expected earthquakes, Mc. We develop a scaling relation for the critical ITSC offset length, OC, at which the thermal effect of adjacent fault segments of lengths L1 and L2 is significantly reduced. OC is defined as the ITSC offset that results in an area loss ratio of R = (Aunbroken - Acombined)/(Aunbroken - Adecoupled) = 63%, where Aunbroken = C600(L1+L2)^1.5 V^-0.6 is A600 for an RTF of length L1 + L2; Adecoupled = C600(L1^1.5 + L2^1.5)V^-0.6 is the combined A600 of RTFs of lengths L1 and L2, respectively; and Acombined = Aunbroken exp(-O/OC) + Adecoupled (1 - exp(-O/OC)). C600 is a constant. We use OC and kinematic fault parameters (L1, L2, O, and V) to develop a scaling relation for the approximate seismogenic area, Aseg, for each segment of an RTF system composed of two fault segments. Finally, we estimate the size of Mc on a fault segment based on Aseg. 
We show that small (<1 km) offsets in the fault trace observed between MW6 rupture patches on the Gofar and Discovery transform faults, located at ~4°S on the East Pacific Rise, are not sufficient to thermally decouple adjacent fault patches. Thus additional factors, possibly including changes in fault zone material properties, must limit the size of Mc on these faults.
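    The area-combination relations quoted in the abstract can be coded directly; by construction the area loss ratio equals 1 − exp(−O/OC), so it is ≈ 63% at O = OC. The segment lengths, spreading rate, and C600 below are arbitrary illustrative values, not the paper's calibrated constants.

```python
import math

def a600_unbroken(L1, L2, V, C600=1.0):
    """A600 for an unsegmented RTF of length L1 + L2."""
    return C600 * (L1 + L2) ** 1.5 * V ** -0.6

def a600_decoupled(L1, L2, V, C600=1.0):
    """Combined A600 of two fully decoupled RTFs of lengths L1 and L2."""
    return C600 * (L1 ** 1.5 + L2 ** 1.5) * V ** -0.6

def a600_combined(L1, L2, V, O, Oc, C600=1.0):
    """Acombined = Aunbroken*exp(-O/Oc) + Adecoupled*(1 - exp(-O/Oc))."""
    w = math.exp(-O / Oc)
    return (a600_unbroken(L1, L2, V, C600) * w
            + a600_decoupled(L1, L2, V, C600) * (1.0 - w))

def area_loss_ratio(L1, L2, V, O, Oc):
    """R = (Aunbroken - Acombined) / (Aunbroken - Adecoupled)."""
    au = a600_unbroken(L1, L2, V)
    ad = a600_decoupled(L1, L2, V)
    return (au - a600_combined(L1, L2, V, O, Oc)) / (au - ad)

# At O = Oc the loss ratio is 1 - 1/e, i.e. ~63%, by construction
r = area_loss_ratio(50.0, 50.0, 5.0, O=4.0, Oc=4.0)
print(round(r, 2))  # 0.63
```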

  6. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
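    The one-standard-error rule itself is easy to sketch: among the candidate trees in the pruning sequence, choose the simplest one whose cross-validated risk is within one standard error of the minimum risk. The candidate list below is hypothetical.

```python
def one_se_rule(trees):
    """trees: list of (n_leaves, cv_risk, se). Return the smallest tree
    whose cross-validated risk is within one standard error of the
    minimum cross-validated risk (the 'one standard error' rule)."""
    best_risk, best_se = min((r, s) for _, r, s in trees)
    eligible = [t for t in trees if t[1] <= best_risk + best_se]
    return min(eligible, key=lambda t: t[0])

# Hypothetical pruning sequence: (leaves, CV risk, standard error)
trees = [(2, 0.40, 0.03), (5, 0.31, 0.03), (9, 0.30, 0.03), (14, 0.29, 0.03)]
choice = one_se_rule(trees)
print(choice)  # (5, 0.31, 0.03): simplest tree with risk <= 0.29 + 0.03
```

    The minimum-risk rule would instead return the 14-leaf tree, which illustrates why the one-standard-error rule tends toward smaller, more parsimonious models.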

  7. Size and Base Composition of RNA in Supercoiled Plasmid DNA

    PubMed Central

    Williams, Peter H.; Boyer, Herbert W.; Helinski, Donald R.

    1973-01-01

    The average size and base composition of the covalently integrated RNA segment in supercoiled ColE1 DNA synthesized in Escherichia coli in the presence of chloramphenicol (CM-ColE1 DNA) have been determined by two independent methods. The two approaches yielded similar results, indicating that the RNA segment in CM-ColE1 DNA contains GMP at the 5′ end and comprises on the average 25 to 26 ribonucleotides with a base composition of 10-11 G, 3 A, 5-6 C, and 6-7 U. PMID:4359488

  8. Elevated serum uric acid affects myocardial reperfusion and infarct size in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention.

    PubMed

    Mandurino-Mirizzi, Alessandro; Crimi, Gabriele; Raineri, Claudia; Pica, Silvia; Ruffinazzi, Marta; Gianni, Umberto; Repetto, Alessandra; Ferlini, Marco; Marinoni, Barbara; Leonardi, Sergio; De Servi, Stefano; Oltrona Visconti, Luigi; De Ferrari, Gaetano M; Ferrario, Maurizio

    2018-05-01

    Elevated serum uric acid (eSUA) was associated with unfavorable outcome in patients with ST-segment elevation myocardial infarction (STEMI). However, the effect of eSUA on myocardial reperfusion injury and infarct size has been poorly investigated. Our aim was to correlate eSUA with infarct size, infarct size shrinkage, myocardial reperfusion grade and long-term mortality in STEMI patients undergoing primary percutaneous coronary intervention. We performed a post-hoc patient-level analysis of two randomized controlled trials, testing strategies for myocardial ischemia/reperfusion injury protection. Each patient underwent acute (3-5 days) and follow-up (4-6 months) cardiac magnetic resonance. Infarct size and infarct size shrinkage were outcomes of interest. We assessed T2-weighted edema, myocardial blush grade (MBG), corrected Thrombolysis in myocardial infarction Frame Count, ST-segment resolution and long-term all-cause mortality. A total of 101 (86.1% anterior) STEMI patients were included; eSUA was found in 16 (15.8%) patients. Infarct size was larger in eSUA compared with non-eSUA patients (42.3 ± 22 vs. 29.1 ± 15 ml, P = 0.008). After adjusting for covariates, infarct size was 10.3 ml (95% confidence interval 1.2-19.3 ml, P = 0.001) larger in eSUA. Among patients with anterior myocardial infarction the difference in delayed enhancement between groups was maintained (respectively, 42.3 ± 22.4 vs. 29.9 ± 15.4 ml, P = 0.015). Infarct size shrinkage was similar between the groups. Compared with non-eSUA, eSUA patients had larger T2-weighted edema (53.8 vs. 41.2 ml, P = 0.031) and less favorable MBG (MBG < 2: 44.4 vs. 13.6%, P = 0.045). Corrected Thrombolysis in myocardial infarction Frame Count and ST-segment resolution did not significantly differ between the groups. At a median follow-up of 7.3 years, all-cause mortality was higher in the eSUA group (18.8 vs. 2.4%, P = 0.028). 
eSUA may affect myocardial reperfusion in patients with STEMI undergoing percutaneous coronary intervention and is associated with larger infarct size and higher long-term mortality.

  9. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
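    The MST formulation of the agglomeration can be sketched with textbook Kruskal's algorithm over a toy region-dissimilarity graph. The weights below are made up, and the paper additionally constrains merges by detected edges, which this sketch omits.

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree by Kruskal's algorithm with union-find.
    n: number of region nodes; edges: [(dissimilarity, u, v), ...]."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # cheapest (most similar) merges first
        ru, rv = find(u), find(v)
        if ru != rv:               # only merge distinct region groups
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Four region nodes with pairwise dissimilarities (toy values)
edges = [(1.0, 0, 1), (4.0, 0, 2), (2.0, 1, 2), (3.0, 2, 3), (5.0, 1, 3)]
mst = kruskal_mst(4, edges)
print(mst)  # [(1.0, 0, 1), (2.0, 1, 2), (3.0, 2, 3)]
```

    Cutting the MST at edges whose dissimilarity exceeds a threshold then yields the object-oriented segments, which is the sense in which agglomeration and MST construction coincide.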

  10. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  11. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. It cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction in improving classifier reliability. PMID:23861920

  12. Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23

    NASA Technical Reports Server (NTRS)

    Willson, Robert M.; Hathaway, David H.

    2004-01-01

    On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
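    The quoted slope can be checked directly from the numbers given in the text:

```python
# Average rise slope of cycle 23 as defined in the text:
# (conventional maximum - minimum amplitude) / ascent duration in months
slope = (120.8 - 8) / 47
print(round(slope, 1))  # 2.4
```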

  13. No support for Heincke's law in hagfish (Myxinidae): lack of an association between body size and the depth of species occurrence.

    PubMed

    Schumacher, E L; Owens, B D; Uyeno, T A; Clark, A J; Reece, J S

    2017-08-01

    This study tests for interspecific evidence of Heincke's law among hagfishes and advances the field of research on body size and depth of occurrence in fishes by including a phylogenetic correction and by examining depth in four ways: maximum depth, minimum depth, mean depth of recorded specimens and the average of maximum and minimum depths of occurrence. Results yield no evidence for Heincke's law in hagfishes, no phylogenetic signal for the depth at which species occur, but moderate to weak phylogenetic signal for body size, suggesting that phylogeny may play a role in determining body size in this group. © 2017 The Fisheries Society of the British Isles.

  14. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential implementation of the fuzzy connected image segmentation algorithm on the CPU.

  15. Medical image segmentation using 3D MRI data

    NASA Astrophysics Data System (ADS)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from MRI images is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from MRI data sets. As a result, the proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.

  16. Active hexagonally segmented mirror to investigate new optical phasing technologies for segmented telescopes.

    PubMed

    Gonté, Frédéric; Dupuy, Christophe; Luong, Bruno; Frank, Christoph; Brast, Roland; Sedghi, Baback

    2009-11-10

    The primary mirror of the future European Extremely Large Telescope will be equipped with 984 hexagonal segments. The alignment of the segments in piston, tip, and tilt within a few nanometers requires an optical phasing sensor. A test bench has been designed to study four different optical phasing sensor technologies. The core element of the test bench is an active segmented mirror composed of 61 flat hexagonal segments with a size of 17 mm side to side. Each of them can be controlled in piston, tip, and tilt by three piezoactuators with a precision better than 1 nm. The context of this development, the requirements, the design, and the integration of this system are explained. The first results on the final precision obtained in closed-loop control are also presented.

  17. Ground truth crop proportion summaries for US segments, 1976-1979

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.

    1981-01-01

    The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that a sample of this size provides sufficient accuracy for use of the data in initial segment screening.

  18. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polan, D; Brady, S; Kaufman, R

    2016-06-15

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0–4). Noise reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient's (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manually and auto-segmented images. Results: The optimized autosegmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number, and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. 
Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment and was able to segment seven material classes over a range of body habitus and CT protocol parameters with an average DSC of 0.86 ± 0.04 (range: 0.80–0.99).
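
    The Dice similarity coefficient used here to score agreement between manual and automatic segmentations is straightforward to compute from a pair of binary masks; a minimal NumPy sketch (illustrative, not the authors' code; the toy masks are mine):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

manual = np.array([[0, 1, 1], [0, 1, 0]])  # hypothetical manual mask
auto   = np.array([[0, 1, 1], [1, 1, 0]])  # hypothetical automatic mask
print(dice_coefficient(manual, auto))  # -> 0.8571428571428571  (2*3 / (3+4))
```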

  19. Identification of QTLs for rice grain size using a novel set of chromosomal segment substitution lines derived from Yamadanishiki in the genetic background of Koshihikari

    PubMed Central

    Okada, Satoshi; Onogi, Akio; Iijima, Ken; Hori, Kiyosumi; Iwata, Hiroyoshi; Yokoyama, Wakana; Suehiro, Miki; Yamasaki, Masanori

    2018-01-01

    Grain size is important for brewing-rice cultivars, but the genetic basis for this trait is still unclear. This paper aims to identify QTLs for grain size using novel chromosomal segment substitution lines (CSSLs) harboring chromosomal segments from Yamadanishiki, an excellent sake-brewing rice, in the genetic background of Koshihikari, a cooking cultivar. We developed a set of 49 CSSLs. Grain length (GL), grain width (GWh), grain thickness (GT), 100-grain weight (GWt) and days to heading (DTH) were evaluated, and a CSSL-QTL analysis was conducted. Eighteen QTLs for grain size and DTH were identified. Seven (qGL11, qGWh5, qGWh10, qGWt6-2, qGWt10-2, qDTH3, and qDTH6) that were detected in F2 and recombinant inbred lines (RILs) from Koshihikari/Yamadanishiki were validated, suggesting that they are important for large grain size and heading date in Yamadanishiki. Additionally, QTL reanalysis for GWt showed that qGWt10-2 was only detected in early-flowering RILs, while qGWt5 (in the same region as qGWh5) was only detected in late-flowering RILs, suggesting that these QTLs show different responses to the environment. Our study revealed that grain size in the Yamadanishiki cultivar is determined by a complex genetic mechanism. These findings could be useful for the breeding of both cooking and brewing rice. PMID:29875604

  20. Particle size distribution of main-channel-bed sediments along the upper Mississippi River, USA

    USGS Publications Warehouse

    Remo, Jonathan; Heine, Ruben A.; Ickes, Brian

    2016-01-01

    In this study, we compared a pre-lock-and-dam (ca. 1925) longitudinal survey with a modern longitudinal survey of main-channel-bed sediments along a 740-km segment of the upper Mississippi River (UMR) between Davenport, IA, and Cairo, IL. This comparison was undertaken to gain a better understanding of how bed sediments are distributed longitudinally and to assess change since the completion of the UMR lock and dam navigation system and Missouri River dams (i.e., the mid-twentieth century). The comparison of the historic and modern longitudinal bed-sediment surveys showed similar bed-sediment sizes and distributions along the study segment, with the majority (> 90%) of bed-sediment samples having a median diameter (D50) of fine to coarse sand. The fine tail (≤ D10) of the sediment-size distributions was very fine to medium sand, and the coarse tail (≥ D90) was coarse sand to gravel. The coarsest sediments in both surveys were found within or immediately downstream of bedrock-floored reaches. Statistical analysis revealed that the particle-size distributions between the survey samples were statistically identical, suggesting no overall difference in main-channel-bed sediment-size distribution between 1925 and the present. This was a surprising result given the magnitude of river engineering undertaken along the study segment over the past ~ 90 years. The absence of substantial differences in main-channel bed-sediment size suggests that flow competencies within the highly engineered navigation channel today are similar to conditions within the less-engineered historic channel.
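
    The D10, D50, and D90 statistics reported in this record are percentiles of a sample's grain-size distribution; a minimal NumPy sketch with hypothetical diameters (the data values are mine, for illustration only):

```python
import numpy as np

# Hypothetical grain diameters (mm) measured for one main-channel bed sample.
diameters_mm = np.array([0.12, 0.20, 0.25, 0.30, 0.35,
                         0.50, 0.60, 0.80, 1.20, 2.00])

# D10 / D50 / D90: 10th, 50th (median), and 90th percentile diameters.
d10, d50, d90 = np.percentile(diameters_mm, [10, 50, 90])
# d50 is the median grain diameter; the D10 and D90 values bracket the
# fine and coarse tails of the distribution as in the abstract.
```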

  1. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, as it assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information, and the normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. Because clinical application requires a physician's judgment, the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, but also offer more accurate data for medical image registration, recognition, and 3D reconstruction.
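
    The three key-frame criteria named in this record (pixel difference, mutual information, normalized correlation coefficient) are standard similarity measures between consecutive slices; a slice whose scores against its neighbor change sharply is a key-frame candidate. A minimal NumPy sketch (illustrative only, not the authors' implementation; the bin count is an assumption):

```python
import numpy as np

def mean_pixel_difference(a, b):
    """Mean absolute intensity difference between two slices."""
    return float(np.abs(a.astype(float) - b.astype(float)).mean())

def normalized_correlation(a, b):
    """Normalized (Pearson) correlation coefficient of two slices."""
    af = a.astype(float).ravel(); af -= af.mean()
    bf = b.astype(float).ravel(); bf -= bf.mean()
    denom = np.sqrt((af @ af) * (bf @ bf))
    return float((af @ bf) / denom) if denom else 0.0

def mutual_information(a, b, bins=32):
    """Mutual information estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

    Identical slices score 1.0 in correlation and 0.0 in pixel difference, giving a natural baseline for thresholding the slice-to-slice change.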

  2. Correlated patterns in hydrothermal plume distribution and apparent magmatic budget along 2500 km of the Southeast Indian Ridge

    USGS Publications Warehouse

    Baker, Edward; Hémond, Christophe; Briais, Anne; Maia, Marcia; Scheirer, Daniel S.; Walker, Sharon L.; Wang, Tingting; Chen, Yongshun John

    2014-01-01

    Multiple geological processes affect the distribution of hydrothermal venting along a mid-ocean ridge. Deciphering the role of a specific process is often frustrated by simultaneous changes in other influences. Here we take advantage of the almost constant spreading rate (65–71 mm/yr) along 2500 km of the Southeast Indian Ridge (SEIR) between 77°E and 99°E to examine the spatial density of hydrothermal venting relative to regional and segment-scale changes in the apparent magmatic budget. We use 227 vertical profiles of light backscatter and (on 41 profiles) oxidation-reduction potential along 27 first and second-order ridge segments on and adjacent to the Amsterdam-St. Paul (ASP) Plateau to map ph, the fraction of casts detecting a plume. At the regional scale, venting on the five segments crossing the magma-thickened hot spot plateau is almost entirely suppressed (ph = 0.02). Conversely, the combined ph (0.34) from all other segments follows the global trend of ph versus spreading rate. Off the ASP Plateau, multisegment trends in ph track trends in the regional axial depth, high where regional depth increases and low where it decreases. At the individual segment scale, a robust correlation between ph and cross-axis inflation for first-order segments shows that different magmatic budgets among first-order segments are expressed as different levels of hydrothermal spatial density. This correlation is absent among second-order segments. Eighty-five percent of the plumes occur in eight clusters totaling ∼350 km. We hypothesize that these clusters are a minimum estimate of the length of axial melt lenses underlying this section of the SEIR.

  3. Correlated patterns in hydrothermal plume distribution and apparent magmatic budget along 2500 km of the Southeast Indian Ridge

    NASA Astrophysics Data System (ADS)

    Baker, Edward T.; Hémond, Christophe; Briais, Anne; Maia, Marcia; Scheirer, Daniel S.; Walker, Sharon L.; Wang, Tingting; Chen, Yongshun John

    2014-08-01

    Multiple geological processes affect the distribution of hydrothermal venting along a mid-ocean ridge. Deciphering the role of a specific process is often frustrated by simultaneous changes in other influences. Here we take advantage of the almost constant spreading rate (65-71 mm/yr) along 2500 km of the Southeast Indian Ridge (SEIR) between 77°E and 99°E to examine the spatial density of hydrothermal venting relative to regional and segment-scale changes in the apparent magmatic budget. We use 227 vertical profiles of light backscatter and (on 41 profiles) oxidation-reduction potential along 27 first and second-order ridge segments on and adjacent to the Amsterdam-St. Paul (ASP) Plateau to map ph, the fraction of casts detecting a plume. At the regional scale, venting on the five segments crossing the magma-thickened hot spot plateau is almost entirely suppressed (ph = 0.02). Conversely, the combined ph (0.34) from all other segments follows the global trend of ph versus spreading rate. Off the ASP Plateau, multisegment trends in ph track trends in the regional axial depth, high where regional depth increases and low where it decreases. At the individual segment scale, a robust correlation between ph and cross-axis inflation for first-order segments shows that different magmatic budgets among first-order segments are expressed as different levels of hydrothermal spatial density. This correlation is absent among second-order segments. Eighty-five percent of the plumes occur in eight clusters totaling ˜350 km. We hypothesize that these clusters are a minimum estimate of the length of axial melt lenses underlying this section of the SEIR.

  4. Atomistic modeling of dropwise condensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikarwar, B. S., E-mail: bssikarwar@amity.edu; Singh, P. L.; Muralidhar, K.

    The basic aim of the atomistic modeling of condensation of water is to determine the size of the stable cluster and connect phenomena occurring at the atomic scale to the macroscale. In this paper, a population balance model is described in terms of rate equations to obtain the number density distribution of the resulting clusters. The residence time is taken to be large enough that sufficient time is available for all the adatoms existing in the vapor phase to lose their latent heat and condense. The simulation assumes clusters of a given size to be formed from clusters of smaller sizes, but not by the disintegration of larger clusters. The largest stable cluster size in the number density distribution is taken to be representative of the minimum drop radius formed in a dropwise condensation process. A numerical confirmation of this result against predictions based on a thermodynamic model has been obtained. Results show that the number density distribution is sensitive to the surface diffusion coefficient and the rate of vapor flux impinging on the substrate. The minimum drop radius increases with the diffusion coefficient and the impinging vapor flux; however, the dependence is weak. The minimum drop radius predicted from thermodynamic considerations matches the prediction of the cluster model, though the former does not take into account the effect of surface properties on the nucleation phenomena. For a chemically passive surface, the diffusion coefficient and the residence time depend on the surface texture via the coefficient of friction. Thus, physical texturing provides a means of changing, within limits, the minimum drop radius. The study reveals that surface texturing at the scale of the minimum drop radius does not provide controllability of macro-scale dropwise condensation at large timescales, when a dynamic steady state is reached.
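
    A population balance of this kind can be sketched as coupled rate equations in which a cluster of size j grows only by capturing a monomer, with no disintegration, matching the growth-only assumption stated above. This is a deliberately simplified illustration, not the paper's model: the flux F, capture coefficient k, size cutoff jmax, and time step are all hypothetical, and a single size-independent capture coefficient is assumed.

```python
import numpy as np

F = 1.0          # impinging monomer (adatom) flux, arbitrary units (assumed)
k = 0.1          # capture coefficient, lumping surface diffusion (assumed)
jmax = 50        # largest tracked cluster size; growth past jmax is truncated
dt, steps = 1e-3, 20000

n = np.zeros(jmax + 1)          # n[j] = number density of j-atom clusters
for _ in range(steps):          # forward-Euler integration of the rate equations
    capture = k * n[1] * n[1:]  # monomer-capture rate for every cluster size
    dn = np.zeros_like(n)
    dn[1] = F - 2 * capture[0] - capture[1:].sum()   # monomer gain/loss
    dn[2] = capture[0] - capture[1]                  # dimer balance
    dn[3:] = capture[1:-1] - capture[2:]             # larger clusters
    n += dt * dn

# n[1:] now approximates the cluster-size number density distribution; the
# largest j with appreciable n[j] plays the role of the stable cluster size.
```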

  5. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    This study uses standard-stripe, dual-polarization SAR images from GF-3 as the basic data. Processes and methods for residential-area extraction based on GF-3 image texture segmentation are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing, and image filtering; a suitability analysis of different filtering methods shows that the Kuan filter is effective for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), whose parameters include the moving window size, step size, and angle; the results show that a window size of 11*11, a step of 1, and an angle of 0° are optimal for residential-area extraction. Using the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. For comparison, we also extracted residential areas by SVM classification on the GF-3 images; its overall accuracy is 0.09 lower than that of the texture-segmentation method. We conclude that residential-area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Because southern China is cloudy and rainy throughout the year, making multi-spectral remote sensing imagery difficult to obtain, this approach has practical reference value.
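
    At the parameters found optimal in this record (step 1, angle 0°), a gray-level co-occurrence matrix reduces to counting horizontally adjacent gray-level pairs. A minimal NumPy sketch (illustrative; the synthetic image, level count, and derived contrast feature are my choices, not the paper's):

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx);
    (dy, dx) = (0, 1) corresponds to step 1 at angle 0 degrees."""
    q = (image.astype(float) * (levels - 1) / image.max()).astype(int)
    h, w = q.shape
    ref = q[: h - dy, : w - dx].ravel()   # reference pixels
    nbr = q[dy:, dx:].ravel()             # their offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (ref, nbr), 1)           # accumulate pair counts
    return m / m.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))   # synthetic stand-in image
p = glcm(img)
idx = np.arange(8)
contrast = float((p * (idx[:, None] - idx[None, :]) ** 2).sum())  # GLCM contrast
```

    In the paper's setting this matrix would be computed inside each moving 11*11 window, and texture features derived from it form the per-pixel feature vector.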

  6. Cortical bone fracture analysis using XFEM - case study.

    PubMed

    Idkaidek, Ashraf; Jasiuk, Iwona

    2017-04-01

    We aim to achieve an accurate simulation of human cortical bone fracture using the extended finite element method within the commercial finite element software Abaqus. A two-dimensional unit cell model of cortical bone is built based on a microscopy image of the mid-diaphysis of the tibia of a 70-year-old human male donor. Each phase of this model (interstitial bone, a cement line, and an osteon) is considered linear elastic and isotropic, with material properties obtained by nanoindentation, taken from the literature. The effects of the fracture analysis method (cohesive segment approach versus linear elastic fracture mechanics approach), finite element type, and boundary conditions (traction, displacement, and mixed) on cortical bone crack initiation and propagation are studied. In this study, cohesive segment damage evolution for a traction-separation law based on energy and displacement is used. In addition, the effects of increment size and mesh density on the analysis results are investigated. We find that both the cohesive segment and linear elastic fracture mechanics approaches within the extended finite element method can effectively simulate cortical bone fracture. Mesh density and simulation increment size can influence analysis results when employing either approach, and using a finer mesh and/or smaller increment size does not always provide more accurate results. Both approaches provide close but not identical results, and crack propagation speed is found to be slower when using the cohesive segment approach. Also, using reduced-integration elements along with the cohesive segment approach decreases crack propagation speed compared with using full-integration elements. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Testosterone Delivered with a Scaffold Is as Effective as Bone Morphologic Protein-2 in Promoting the Repair of Critical-Size Segmental Defect of Femoral Bone in Mice

    PubMed Central

    Cheng, Bi-Hua; Chu, Tien-Min G.; Chang, Chawnshang; Kang, Hong-Yo; Huang, Ko-En

    2013-01-01

    Loss of large bone segments due to fracture resulting from trauma or tumor removal is a common clinical problem. The goal of this study was to evaluate the use of scaffolds containing testosterone, bone morphogenetic protein-2 (BMP-2), or a combination of both for treatment of critical-size segmental bone defects in mice. A 2.5-mm wide osteotomy was created on the left femur of wildtype and androgen receptor knockout (ARKO) mice. Testosterone, BMP-2, or both were delivered locally using a scaffold that bridged the fracture. Results of X-ray imaging showed that in both wildtype and ARKO mice, BMP-2 treatment induced callus formation within 14 days after initiation of the treatment. Testosterone treatment also induced callus formation within 14 days in wildtype but not in ARKO mice. Micro-computed tomography and histological examinations revealed that testosterone treatment caused similar degrees of callus formation as BMP-2 treatment in wildtype mice, but had no such effect in ARKO mice, suggesting that the androgen receptor is required for testosterone to initiate fracture healing. These results demonstrate that testosterone is as effective as BMP-2 in promoting the healing of critical-size segmental defects and that combination therapy with testosterone and BMP-2 is superior to single therapy. Results of this study may provide a foundation to develop a cost effective and efficient therapeutic modality for treatment of bone fractures with segmental defects. PMID:23940550

  8. Asteroid Crew Segment Mission Lean Development

    NASA Technical Reports Server (NTRS)

    Gard, Joseph; McDonald, Mark

    2014-01-01

    The Asteroid Retrieval Crewed Mission (ARCM) requires a minimum set of key capabilities, compared here in the context of the baseline EM-1/2 Orion and SLS capabilities. These include life support and human systems capabilities and mission kit capabilities, while minimizing the impact to the Orion and SLS development schedules and funding. Leveraging existing technology development efforts to develop the kits adds functionality to Orion while minimizing cost and mass impact.

  9. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.

    PubMed

    Li, Yuhong; Jia, Fucang; Qin, Jing

    2016-10-01

    Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to address the spatial and structural variability problem. We formulate tumor segmentation as a multi-classification task by assigning each voxel the label with the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing sparse representation into the likelihood probability and an MRF into the prior probability. Because MAP estimation is an NP-hard problem, we convert it into a minimum-energy optimization problem and employ graph cuts to find the solution. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions, respectively. The experimental results show that the proposed algorithm is valid and ranks 2nd among the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
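
    The conversion from MAP labeling to energy minimization described in this record follows the standard identity (sketched here in generic notation, not necessarily the authors' symbols): taking the negative logarithm of the posterior turns the product of likelihood and prior into a sum of a data term and a smoothness term.

```latex
\hat{x} \;=\; \arg\max_{x} P(x \mid y)
       \;=\; \arg\max_{x} P(y \mid x)\, P(x)
       \;=\; \arg\min_{x} \Big[ \underbrace{-\log P(y \mid x)}_{\text{data term (sparse representation)}}
             \;+\; \underbrace{-\log P(x)}_{\text{smoothness term (MRF prior)}} \Big]
```

    Graph cuts can then minimize this energy efficiently, which matches the solver named in the abstract.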

  10. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  11. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  12. Effects of Pore Size on the Osteoconductivity and Mechanical Properties of Calcium Phosphate Cement in a Rabbit Model.

    PubMed

    Zhao, Yi-Nan; Fan, Jun-Jun; Li, Zhi-Quan; Liu, Yan-Wu; Wu, Yao-Ping; Liu, Jian

    2017-02-01

    Calcium phosphate cement (CPC) porous scaffolds are widely used as bone substitutes to repair bone defects, but the optimal pore size remains unclear. The current study aimed to evaluate the effect of different pore sizes on the process of bone formation in repairing segmental bone defects of rabbits using CPC porous scaffolds. Three kinds of CPC porous scaffolds with 5 mm diameter and 12 mm length were prepared with the same porosity but different pore sizes (Group A: 200-300 µm, Group B: 300-450 µm, Group C: 450-600 µm, respectively). Twelve-millimeter segmental bone defects were created in the middle of the radius and filled with the different kinds of CPC cylindrical scaffolds. After 4, 12, and 24 weeks, alkaline phosphatase (ALP) activity, histological assessment, and mechanical property evaluation were performed in all three groups. After 4 weeks, ALP activity increased in all groups but was highest in Group A, with the smallest pore size. New bone formation within the scaffolds was not yet obvious in any group. After 12 weeks, new bone formation within the scaffolds was obvious in each group and highest in Group A. At 24 weeks, no significant difference in new bone formation was observed among the groups. Besides the osteoconductive effect, Group A, with the smallest pore size, also had the best mechanical properties in vivo at 12 weeks. We demonstrate that pore size has a significant effect on the osteoconductivity and mechanical properties of calcium phosphate cement porous scaffolds in vivo. A small pore size favors bone formation in the early stage and may be more suitable for repairing segmental bone defects in vivo. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  13. SU-F-18C-01: Minimum Detectability Analysis for Comprehensive Sized Based Optimization of Image Quality and Radiation Dose Across CT Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smitherman, C; Chen, B; Samei, E

    2014-06-15

    Purpose: This work involved comprehensive modeling of the task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size-based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols over a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated, and the calculated data behaved as predicted. The GUI proved effective in predicting the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen-pelvis exam on the GE scanner, with a task size/contrast of 5-mm/50-HU and an Az of 0.9, requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as protocol consistency across CT scanners.

  14. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
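    The construction described here - wavelet synthesis with attenuated detail coefficients, giving a high-frequency-reduced signal used as a local threshold - can be sketched with a plain Haar transform. This is a 1-D numpy illustration of the idea only (the paper works on images and uses the fast wavelet algorithm); the function names and the attenuation/offset parameters are illustrative assumptions:

```python
import numpy as np

def haar_decompose(x):
    # One level of the Haar transform: averages and differences.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    # Inverse of haar_decompose.
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_threshold(signal, levels=3, atten=0.1, offset=1.0):
    """Coarse-to-fine synthesis with attenuated detail coefficients: the
    reconstruction tracks the slowly varying background, and adding an
    offset turns it into a locally adaptive threshold.  Signal length
    must be divisible by 2**levels."""
    approx, details = signal.astype(float), []
    for _ in range(levels):
        approx, d = haar_decompose(approx)
        details.append(d)
    for d in reversed(details):
        approx = haar_reconstruct(approx, atten * d)
    return approx + offset

# Nonuniform background (a ramp) with three small high-contrast inclusions.
base = np.linspace(0, 1, 64)
sig = base.copy()
sig[[10, 30, 50]] += 5.0
mask = sig > wavelet_threshold(sig)
print(np.flatnonzero(mask))  # indices of the detected inclusions
```

    With the details attenuated, the threshold follows the ramp instead of the spikes, so a single global cut-off that would fail here becomes an adaptive one.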

  15. 50 CFR 622.56 - Size limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...

  16. 50 CFR 622.56 - Size limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...

  17. 7 CFR 51.1859 - Size.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... cherry tomatoes and Pyriforme type tomatoes commonly referred to as pear shaped tomatoes, and other... Standards for Fresh Tomatoes 1 Size § 51.1859 Size. (a) The size of tomatoes packed in any standard type... measurement for minimum diameter shall be the largest diameter of the tomato measured at right angles to a...

  18. 7 CFR 51.1859 - Size.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... cherry tomatoes and Pyriforme type tomatoes commonly referred to as pear shaped tomatoes, and other... Standards for Fresh Tomatoes 1 Size § 51.1859 Size. (a) The size of tomatoes packed in any standard type... measurement for minimum diameter shall be the largest diameter of the tomato measured at right angles to a...

  19. Fabrication, testing and modeling of a new flexible armor inspired from natural fish scales and osteoderms.

    PubMed

    Chintapalli, Ravi Kiran; Mirkhalaf, Mohammad; Dastjerdi, Ahmad Khayer; Barthelat, Francois

    2014-09-01

    Crocodiles, armadillo, turtles, fish and many other animal species have evolved flexible armored skins in the form of hard scales or osteoderms, which can be described as hard plates of finite size embedded in softer tissues. The individual hard segments provide protection from predators, while the relative motion of these segments provides the flexibility required for efficient locomotion. In this work, we duplicated these broad concepts in a bio-inspired segmented armor. Hexagonal segments of well-defined size and shape were carved within a thin glass plate using laser engraving. The engraved plate was then placed on a soft substrate which simulated soft tissues, and then punctured with a sharp needle mounted on a miniature loading stage. The resistance of our segmented armor was significantly higher when smaller hexagons were used, and our bio-inspired segmented glass displayed an increase in puncture resistance of up to 70% compared to a continuous plate of glass of the same thickness. Detailed structural analyses aided by finite elements revealed that this extraordinary improvement is due to the reduced span of individual segments, which decreases flexural stresses and delays fracture. This effect can, however, only be achieved if the plates are at least 1000 times stiffer than the underlying substrate, which is the case for natural armor systems. Our bio-inspired system also displayed many of the attributes of natural armors: flexibility and robustness, with 'multi-hit' capability. This new segmented glass therefore suggests interesting bio-inspired strategies and mechanisms which could be systematically exploited in high-performance flexible armors. This study also provides new insights and a better understanding of the mechanics of natural armors such as scales and osteoderms.

  20. A segmentation approach for a delineation of terrestrial ecoregions

    NASA Astrophysics Data System (ADS)

    Nowosad, J.; Stepinski, T.

    2017-12-01

    Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250-meter cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square blocks of cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, and climate, and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale.
In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for global ecological and conservation studies.

  1. Lack of size selectivity for paddlefish captured in hobbled gillnets

    USGS Publications Warehouse

    Scholten, G.D.; Bettoli, P.W.

    2007-01-01

    A commercial fishery for paddlefish Polyodon spathula caviar exists in Kentucky Lake, a reservoir on the lower Tennessee River. A 152-mm (bar-measure) minimum mesh size restriction on entanglement gear was enacted in 2002 and the minimum size limit was increased to 864 mm eye-fork length to reduce the possibility of recruitment overfishing. Paddlefish were sampled in 2003-2004 using experimental monofilament gillnets with panels of 89, 102, 127, 152, 178, and 203-mm meshes and the efficacy of the mesh size restriction was evaluated. Following the standards of commercial gear used in that fishery, nets were "hobbled" (i.e., 128 m × 3.6 m nets were tied down to 2.4 m; 91 m × 9.1 m nets were tied down to 7.6 m). The mean lengths of paddlefish (N_total = 576 fish) captured in each mesh were similar among most meshes and bycatch rates of sublegal fish did not vary with mesh size. Selectivity curves could not be modeled because the mean and modal lengths of fish captured in each mesh did not increase with mesh size. Ratios of fish girth to mesh perimeter (G:P) for individual fish were often less than 1.0 as a result of the largest meshes capturing small paddlefish. It is unclear whether lack of size selectivity for paddlefish was because the gillnets were hobbled, the unique morphology of paddlefish, or the fact that they swim with their mouths agape when filter feeding. The lack of size selectivity by hobbled gillnets fished in Kentucky Lake means that managers cannot influence the size of paddlefish captured by commercial gillnet gear by changing minimum mesh size regulations. © 2006 Elsevier B.V. All rights reserved.
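    The girth-to-mesh-perimeter statistic used above is a one-line computation; in this sketch the mesh perimeter of a bar-measure mesh is taken as four times the bar measure (the four sides of the open diamond), which is a stated assumption rather than the paper's exact definition, and the function name is ours:

```python
def girth_to_mesh_perimeter(girth_mm, bar_measure_mm):
    """G:P ratio of a fish wedged in a gillnet mesh.  Mesh perimeter is
    assumed to be 4x the bar measure; check the gear literature for the
    exact convention used in the study."""
    return girth_mm / (4.0 * bar_measure_mm)

# A 500-mm-girth paddlefish in a 152-mm bar-measure mesh:
print(round(girth_to_mesh_perimeter(500, 152), 2))  # 0.82
```

    A ratio below 1.0, as here, means the mesh opening is larger than the fish's girth - exactly the pattern the authors report for large meshes catching small paddlefish.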

  2. Feasibility and scalability of spring parameters in distraction enterogenesis in a murine model.

    PubMed

    Huynh, Nhan; Dubrovsky, Genia; Rouch, Joshua D; Scott, Andrew; Stelzner, Matthias; Shekherdimian, Shant; Dunn, James C Y

    2017-07-01

    Distraction enterogenesis has been investigated as a novel treatment for short bowel syndrome (SBS). With variable intestinal sizes, it is critical to determine safe, translatable spring characteristics in differently sized animal models before clinical use. Nitinol springs have been shown to lengthen intestines in rats and pigs. Here, we show spring-mediated intestinal lengthening is scalable and feasible in a murine model. A 10-mm nitinol spring was compressed to 3 mm and placed in a 5-mm intestinal segment isolated from continuity in mice. A noncompressed spring placed in a similar fashion served as a control. Spring parameters were proportionally extrapolated from previous spring parameters to accommodate the smaller size of murine intestines. After 2-3 wk, the intestinal segments were examined for size and histology. The experimental group, with spring constants k = 0.2-1.4 N/m, showed intestinal lengthening from 5.0 ± 0.6 mm to 9.5 ± 0.8 mm (P < 0.0001), whereas control segments lengthened from 5.3 ± 0.5 mm to 6.4 ± 1.0 mm (P < 0.02). Diameter increased similarly in both groups. Isolated segment perforation was noted when k ≥ 0.8 N/m. Histologically, lengthened segments had increased muscularis thickness and crypt depth in comparison to normal intestine. Nitinol springs with k ≤ 0.4 N/m can safely yield nearly 2-fold distraction enterogenesis in length and diameter in a scalable mouse model. Not only does this study derive the safe ranges and translatable spring characteristics in a scalable murine model for patients with short bowel syndrome, it also demonstrates the feasibility of spring-mediated intestinal lengthening in a mouse, which can be used to study underlying mechanisms in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
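    The safety boundary reported (perforation at k ≥ 0.8 N/m, safe lengthening at k ≤ 0.4 N/m) can be restated as a restoring force via Hooke's law, assuming ideal linear spring behaviour over the 10 mm to 3 mm compression; the helper name is ours:

```python
def spring_force_mN(k_N_per_m, free_length_mm=10.0, compressed_length_mm=3.0):
    """Restoring force of the fully compressed nitinol spring by Hooke's law
    (F = k * x), assuming ideal linear behaviour over the whole range."""
    x_m = (free_length_mm - compressed_length_mm) / 1000.0
    return k_N_per_m * x_m * 1000.0  # millinewtons

for k in (0.2, 0.4, 0.8, 1.4):
    label = "safe" if k <= 0.4 else "perforation observed"
    print(f"k = {k} N/m -> F = {spring_force_mN(k):.1f} mN ({label})")
```

    On this reading, the murine gut tolerated distraction forces of roughly 1.4-2.8 mN, while about twice that force caused perforation.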

  3. MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.

    PubMed

    Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris

    2017-05-01

    Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.

  4. Jet transport energy management for minimum fuel consumption and noise impact in the terminal area

    NASA Technical Reports Server (NTRS)

    Bull, J. S.; Foster, J. D.

    1974-01-01

    Significant reductions in both noise and fuel consumption can be gained through careful tailoring of the approach flightpath and airspeed profile, and of the point at which the landing gear and flaps are lowered. For example, the noise problem has been successfully attacked in recent years with development of the 'two-segment' approach, which brings the aircraft in at a steeper angle initially, thereby achieving noise reduction through lower thrust settings and higher altitudes. A further reduction in noise and a significant reduction in fuel consumption can be achieved with the 'decelerating approach' concept. In this case, the approach is initiated at high airspeed and in a drag configuration that allows for low thrust. The landing flaps are then lowered at the appropriate time so that the airspeed slowly decelerates to V_r at touchdown. The decelerating approach concept can be applied to constant-glideslope flightpaths or segmented flightpaths such as the two-segment approach.

  5. Calculation of Appropriate Minimum Size of Isolation Rooms based on Questionnaire Survey of Experts and Analysis on Conditions of Isolation Room Use

    NASA Astrophysics Data System (ADS)

    Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha

    2017-07-01

    After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 m²-or-larger standard for the floor area of isolation rooms will have to be reviewed, and standards for the minimum width of isolation rooms will have to be established.
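    A quick arithmetic check, worth making explicit: the recommended minimum axes already exceed the 15 m² floor-area standard that the survey says should be reviewed.

```python
# Minimum axes from the expert survey: 3,300 mm x 5,000 mm for the
# isolation room itself (antechamber excluded).
minor_m, major_m = 3.3, 5.0
area_m2 = minor_m * major_m
print(area_m2)           # 16.5
print(area_m2 >= 15.0)   # True: the minimum axes alone imply more than 15 m2
```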

  6. A NDVI assisted remote sensing image adaptive scale segmentation method

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images that cover wide areas containing complicated ground objects, the number of suitable segmentation scales, and the size of each scale, remain difficult to determine accurately, which severely restricts rapid information extraction from the imagery. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
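    NDVI itself is the standard ratio (NIR − Red)/(NIR + Red); a minimal numpy version with a toy scene is below (how the paper then thresholds NDVI similarity to pick segmentation scales is its own contribution and is not reproduced here):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red).
    eps guards against division by zero over no-data pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 scene of reflectances: dense vegetation, sparse vegetation,
# bare soil, and a water-like pixel.
nir = np.array([[0.50, 0.30], [0.25, 0.05]])
red = np.array([[0.08, 0.15], [0.20, 0.10]])
print(ndvi(nir, red))
```

    Vegetation pixels come out strongly positive and water negative, which is what makes NDVI a usable similarity measure for grouping ground objects.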

  7. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal-amplitude testing, where signal amplitudes are reduced to Hit-Miss data by applying a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
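    The 90/95 POD criterion has a well-known binomial consequence: with zero misses, the smallest sample that demonstrates 0.90 POD at 95% confidence is 29 hits in 29 trials, since one needs 0.90^n ≤ 0.05. A minimal check (the function name is ours):

```python
import math

def min_samples_zero_failures(pod=0.90, confidence=0.95):
    """Smallest n such that n consecutive hits demonstrate the target POD
    at the target confidence: pod**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(pod))

print(min_samples_zero_failures())  # 29 -- the classic "29 of 29" criterion
```

    DOEPOD's sequential execution exists precisely to approach this kind of minimum sample count while also checking monotonicity at larger flaw sizes.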

  8. Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

    An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual-echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these datasets are presented and compared to manual lesion segmentation of the same data.

  9. Thermal image analysis using the serpentine method

    NASA Astrophysics Data System (ADS)

    Koprowski, Robert; Wilczyński, Sławomir

    2018-03-01

    Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known methods of image analysis and processing together with new ones proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic, and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained with the watershed method and with a hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis make it possible to determine the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information from thermal imaging. Since it allows for image segmentation and for locating the contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
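    The role of the first and second harmonics can be illustrated with the Fourier step alone: a single heat source along a closed path contributes mainly to the first harmonic, two sources to the second. A numpy sketch (the serpentine path construction itself is the paper's method and is not reproduced; the names are illustrative):

```python
import numpy as np

def first_two_harmonics(profile):
    """Magnitudes of the first and second harmonics of a 1-D temperature
    profile sampled along a path.  Only the Fourier step of the analysis
    is shown here."""
    spectrum = np.fft.rfft(profile - np.mean(profile))
    mags = np.abs(spectrum) / len(profile)
    return mags[1], mags[2]

# One heat source -> energy in the first harmonic; two sources -> second.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(first_two_harmonics(np.cos(t)))      # first harmonic dominates
print(first_two_harmonics(np.cos(2 * t)))  # second harmonic dominates
```

    Reading off which harmonic dominates is one plausible way to distinguish a single gradient from multiple heat sources, as the abstract describes.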

  10. Boundary control by displacement at one end of a string and the integral condition on the other

    NASA Astrophysics Data System (ADS)

    Attaev, Anatoly Kh.

    2017-09-01

    For a one-dimensional wave equation we study the problem of finding boundary controls that make a string move from an arbitrary specified initial state to an arbitrary specified final state. The control is applied at the left end of the string, while the nonlocal displacement condition is imposed at the right end. Necessary and sufficient conditions are established for the functions determining the initial and final state of the string. An explicit analytical form of the boundary control is obtained, as well as the minimum time T = l for this control. In the case T = l - ɛ, 0 < ɛ < l (i.e. T < l), it is shown that the initial values u(x, 0) = ϕ(x) and u_t(x, 0) = ψ(x) cannot be set arbitrarily. Moreover, if ɛ < l/2, the functions ϕ(x) and ψ(x) are linearly dependent on any segment of finite length within either [0, ɛ] or [l - ɛ, l]. If ɛ ≥ l/2, then ϕ(x) and ψ(x) are linearly dependent on any segment of finite length within [0, l].
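    The minimal control time T = l reflects the finite propagation speed visible in d'Alembert's formula, sketched here for unit wave speed (the nonlocal integral condition at the right end is as in the paper and is not repeated):

```latex
% Wave equation on the segment [0, l] with unit speed
u_{tt} = u_{xx}, \qquad u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x),
% d'Alembert representation of the free evolution
u(x,t) = \frac{\varphi(x-t) + \varphi(x+t)}{2}
       + \frac{1}{2} \int_{x-t}^{x+t} \psi(s)\, ds .
```

    A disturbance injected at x = 0 needs time l to reach the far end, so no boundary control can steer arbitrary initial data in time T < l; this is exactly why the restrictions on ϕ and ψ appear when T = l - ɛ.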

  11. Auto detection and segmentation of physical activities during a Timed-Up-and-Go (TUG) task in healthy older adults using multiple inertial sensors.

    PubMed

    Nguyen, Hung P; Ayachi, Fouaz; Lavigne-Pelletier, Catherine; Blamoutier, Margaux; Rahimi, Fariborz; Boissy, Patrick; Jog, Mandar; Duval, Christian

    2015-04-11

    Recently, much attention has been given to the use of inertial sensors for remote monitoring of individuals with limited mobility. However, the focus has been mostly on the detection of symptoms, not specific activities. The objective of the present study was to develop an automated recognition and segmentation algorithm based on inertial sensor data to identify common gross motor patterns during activities of daily living. A modified Timed-Up-and-Go (TUG) task was used since it comprises four common daily living activities: Standing, Walking, Turning, and Sitting, all performed in a continuous fashion, resulting in six different segments during the task. Sixteen healthy older adults performed two trials of a 5 and 10 meter TUG task. They were outfitted with 17 inertial motion sensors covering each body segment. Data from the 10 meter TUG were used to identify pertinent sensors on the trunk, head, hip, knee, and thigh that provided suitable data for detecting and segmenting activities associated with the TUG. Raw data from sensors were detrended to remove sensor drift, normalized, and band-pass filtered with optimal frequencies to reveal kinematic peaks that corresponded to different activities. Segmentation was accomplished by identifying the time stamps of the first minimum or maximum to the right and the left of these peaks. Segmentation time stamps were compared to results from two examiners visually segmenting the activities of the TUG. We were able to detect these activities in a TUG with 100% sensitivity and specificity (n = 192) during the 10 meter TUG. The rate of success was subsequently confirmed in the 5 meter TUG (n = 192) without altering the parameters of the algorithm. When applying the segmentation algorithms to the 10 meter TUG, we were able to parse 100% of the transition points (n = 224) between different segments, which were as reliable as and less variable than visual segmentation performed by two independent examiners.
    The present study lays the foundation for the development of a comprehensive algorithm to detect and segment naturalistic activities using inertial sensors, with the aim of automatically evaluating motor performance within the detected tasks.
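    The detrend/normalise/filter/peak-bracketing pipeline described above can be sketched for a single kinematic channel. This numpy-only version substitutes a moving average for the optimal band-pass filter and brackets the dominant peak by its nearest local minima; the function and its parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def segment_around_peak(signal):
    """Detrend and normalise one kinematic channel, locate its dominant
    peak, then take the first local minimum on each side of the peak as
    the segment boundaries."""
    t = np.arange(signal.size)
    # Detrend (remove linear sensor drift) and normalise.
    slope, intercept = np.polyfit(t, signal, 1)
    x = signal - (slope * t + intercept)
    x = x / np.max(np.abs(x))
    # Crude smoothing in place of the paper's optimal band-pass filter.
    x = np.convolve(x, np.ones(5) / 5.0, mode="same")
    peak = int(np.argmax(x))
    left = peak
    while left > 0 and x[left - 1] < x[left]:
        left -= 1
    right = peak
    while right < x.size - 1 and x[right + 1] < x[right]:
        right += 1
    return left, peak, right

# Example: a single movement burst riding on slow sensor drift.
t = np.arange(200.0)
bump = np.exp(-((t - 120) / 15.0) ** 2) + 0.002 * t
print(segment_around_peak(bump))
```

    On a real TUG recording this would be applied per sensor channel, and the returned boundary indices converted to the time stamps that are compared against the examiners' visual segmentation.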

  12. Arabic OCR: toward a complete system

    NASA Astrophysics Data System (ADS)

    El-Bialy, Ahmed M.; Kandil, Ahmed H.; Hashish, Mohamed; Yamany, Sameh M.

    1999-12-01

    Latin and Chinese OCR systems have been studied extensively in the literature, yet little work has been done on Arabic character recognition, owing to the technical challenges posed by Arabic text. Because of its cursive nature, a powerful and stable text segmentation stage is needed, along with features that capture the characteristics of the rich Arabic character repertoire. In this paper a novel segmentation technique that is font- and size-independent is introduced. This technique can segment a cursive text line even if the line suffers from slight skew. The technique is not sensitive to the location of the centerline of the text line and can segment different font sizes and types (different character sets) occurring on the same line. Feature extraction is considered one of the most important phases of a text reading system. Ideally, the features extracted from a character image should capture the essential characteristics of the character independently of font type and size; in such an ideal case, the classifier stores a single prototype per character. However, it is practically challenging to find such an ideal set of features. In this paper, a set of features that reflects the topological aspects of Arabic characters is proposed. These features, integrated with a topological matching technique, yield an Arabic text reading system that is semi-omnifont.

  13. Local site preference rationalizes disentangling by DNA topoisomerases

    NASA Astrophysics Data System (ADS)

    Liu, Zhirong; Zechiedrich, Lynn; Chan, Hue Sun

    2010-03-01

    To rationalize the disentangling action of type II topoisomerases, an improved wormlike DNA model was used to delineate the degree of unknotting and decatenating achievable by selective segment passage at specific juxtaposition geometries and to determine how these activities were affected by DNA circle size and solution ionic strength. We found that segment passage at hooked geometries can reduce knot populations as dramatically as seen in experiments. Selective segment passage also provided theoretical underpinning for an intriguing empirical scaling relation between unknotting and decatenating potentials.

  14. Cigarette brand diversity and price changes during the implementation of plain packaging in the United Kingdom.

    PubMed

    Breton, Magdalena Opazo; Britton, John; Huang, Yue; Bogdanovica, Ilze

    2018-05-29

    Plain packaging of cigarettes appeared in the UK in July 2016 and was ubiquitous by May 2017. The change coincided with another legislative change, raising the minimum pack size from 10 to 20 cigarettes. Laws imposing plain packaging on cigarette packs remove another promotional route from tobacco companies, but the effect of such laws on brand diversity, pricing, and sales volume is unknown. This study aimed to 1) describe and quantify changes in brand diversity, price segmentation and sales volumes and 2) estimate the association between the introduction of plain cigarette packaging and cigarette pricing in the UK. We used a natural experiment design to assess the impact of plain packaging legislation on brand diversity and cigarette prices. The data comprised a sample of 76% of sales of cigarettes in the UK between March 2013 and June 2017. Setting: United Kingdom. Measurements: cigarette prices, number of brands and products, volumes of sales. Findings: During the period analysed, there was a slight decrease in the number of cigarette brands. There was also an initial increase observed in the number of cigarette products, mainly due to an increase in the number of products in packs of fewer than 20 cigarettes sold before July 2016, which was then followed by a rapid decrease in the number of products that coincided with the implementation of the new legislation. Cigarette sales volumes during this period did not deviate from the preceding secular trend, but prices rose substantially. Regression results showed that price per cigarette, regardless of pack size, was 5.0 (95% CI 4.8 to 5.3) pence higher in plain than in fully branded packs. For packs of 20 cigarettes, price increases were greater in the lower price quintiles, ranging from 2.6 (95% CI 2.4 to 2.7) GBP in the lowest to 0.3 (95% CI 0.3-0.4) GBP per pack in the highest quintile.
The implementation of standardised packaging legislation in the UK, which included minimum pack sizes of 20, was associated with significant increases overall in the price of manufactured cigarettes but no clear deviation in the ongoing downward trend in total volume of cigarette sales. This article is protected by copyright. All rights reserved.
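    The headline regression result (plain packs roughly 5 pence per cigarette dearer) is, at its core, an OLS coefficient on a plain-packaging indicator. A toy sketch on simulated data follows; the published model also controls for pack size, price segment and time trends, and the function name is ours:

```python
import numpy as np

def price_effect_of_plain_packs(prices, plain_flags):
    """OLS of price per cigarette on a plain-packaging dummy -- the
    simplest form of the association estimated in the study."""
    X = np.column_stack([np.ones(len(prices)), plain_flags])
    beta, *_ = np.linalg.lstsq(X, np.asarray(prices, dtype=float), rcond=None)
    return beta[1]  # pence per cigarette associated with plain packaging

# Toy data: branded packs around 35p per stick, plain packs around 40p.
rng = np.random.default_rng(0)
plain = np.repeat([0, 1], 200)
price = 35.0 + 5.0 * plain + rng.normal(0, 1.0, 400)
print(price_effect_of_plain_packs(price, plain))
```

    With only an intercept and a dummy, the coefficient is simply the difference in group means; the natural-experiment design in the paper is what lets that difference be read as an association with the legislation rather than with time alone.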

  15. Optimal plane search method in blood flow measurements by magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Bargiel, Pawel; Orkisz, Maciej; Przelaskowski, Artur; Piatkowska-Janko, Ewa; Bogorodzki, Piotr; Wolak, Tomasz

    2004-07-01

    This paper offers an algorithm for determining blood flow parameters in neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axis. This new approach aims at significantly shortening the time required to complete measurements using nuclear magnetic resonance techniques. Based on a defined error function, the algorithm scans the solution space to find the minimum of the error function, and thus determines a single plane characterized by a minimum measurement error, which allows for an accurate measurement of blood flow in the four carotid arteries. The paper also describes a practical implementation of this method (as a module of a larger imaging-measuring system), including preliminary research results.

  16. Optical coherence tomography compared with intravascular ultrasound and with angiography to guide coronary stent implantation (ILUMIEN III: OPTIMIZE PCI): a randomised controlled trial.

    PubMed

    Ali, Ziad A; Maehara, Akiko; Généreux, Philippe; Shlofmitz, Richard A; Fabbiocchi, Franco; Nazif, Tamim M; Guagliumi, Giulio; Meraj, Perwaiz M; Alfonso, Fernando; Samady, Habib; Akasaka, Takashi; Carlson, Eric B; Leesar, Massoud A; Matsumura, Mitsuaki; Ozan, Melek Ozgu; Mintz, Gary S; Ben-Yehuda, Ori; Stone, Gregg W

    2016-11-26

    Percutaneous coronary intervention (PCI) is most commonly guided by angiography alone. Intravascular ultrasound (IVUS) guidance has been shown to reduce major adverse cardiovascular events (MACE) after PCI, principally by resulting in a larger postprocedure lumen than with angiographic guidance. Optical coherence tomography (OCT) provides higher resolution imaging than does IVUS, although findings from some studies suggest that it might lead to smaller luminal diameters after stent implantation. We sought to establish whether or not a novel OCT-based stent sizing strategy would result in a minimum stent area similar to or better than that achieved with IVUS guidance and better than that achieved with angiography guidance alone. In this randomised controlled trial, we recruited patients aged 18 years or older undergoing PCI from 29 hospitals in eight countries. Eligible patients had one or more target lesions located in a native coronary artery with a visually estimated reference vessel diameter of 2·25-3·50 mm and a length of less than 40 mm. We excluded patients with left main or ostial right coronary artery stenoses, bypass graft stenoses, chronic total occlusions, planned two-stent bifurcations, and in-stent restenosis. Participants were randomly assigned (1:1:1; with use of an interactive web-based system in block sizes of three, stratified by site) to OCT guidance, IVUS guidance, or angiography-guided stent implantation. We did OCT-guided PCI using a specific protocol to establish stent length, diameter, and expansion according to reference segment external elastic lamina measurements. All patients underwent final OCT imaging (operators in the IVUS and angiography groups were masked to the OCT images). The primary efficacy endpoint was post-PCI minimum stent area, measured by OCT at a masked independent core laboratory at completion of enrolment, in all randomly allocated participants who had primary outcome data. 
The primary safety endpoint was procedural MACE. We tested non-inferiority of OCT guidance to IVUS guidance (with a non-inferiority margin of 1·0 mm²), superiority of OCT guidance to angiography guidance, and superiority of OCT guidance to IVUS guidance, in a hierarchical manner. This trial is registered with ClinicalTrials.gov, number NCT02471586. Between May 13, 2015, and April 5, 2016, we randomly allocated 450 patients (158 [35%] to OCT, 146 [32%] to IVUS, and 146 [32%] to angiography), with 415 final OCT acquisitions analysed for the primary endpoint (140 [34%] in the OCT group, 135 [33%] in the IVUS group, and 140 [34%] in the angiography group). The final median minimum stent area was 5·79 mm² (IQR 4·54-7·34) with OCT guidance, 5·89 mm² (4·67-7·80) with IVUS guidance, and 5·49 mm² (4·39-6·59) with angiography guidance. OCT guidance was non-inferior to IVUS guidance (one-sided 97·5% lower CI -0·70 mm²; p=0·001), but not superior (p=0·42). OCT guidance was also not superior to angiography guidance (p=0·12). We noted procedural MACE in four (3%) of 158 patients in the OCT group, one (1%) of 146 in the IVUS group, and one (1%) of 146 in the angiography group (OCT vs IVUS p=0·37; OCT vs angiography p=0·37). OCT-guided PCI using a specific reference segment external elastic lamina-based stent optimisation strategy was safe and resulted in similar minimum stent area to that of IVUS-guided PCI. These data warrant a large-scale randomised trial to establish whether or not OCT guidance results in superior clinical outcomes to angiography guidance. St Jude Medical. Copyright © 2016 Elsevier Ltd. All rights reserved.
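The hierarchical non-inferiority test reported above reduces to a simple bound check on the confidence interval of the between-group difference. A minimal illustrative sketch (not the trial's actual statistical code; the function name is hypothetical):

```python
# Hedged sketch: a non-inferiority conclusion of the kind reported requires
# the one-sided lower confidence bound of the difference (OCT minus IVUS
# minimum stent area) to lie above the negative non-inferiority margin.
def non_inferior(lower_ci_bound_mm2: float, margin_mm2: float = 1.0) -> bool:
    """True if the lower CI bound of the difference exceeds -margin."""
    return lower_ci_bound_mm2 > -margin_mm2

# The reported lower bound of -0.70 mm^2 against a 1.0 mm^2 margin:
print(non_inferior(-0.70))  # -0.70 > -1.0, so non-inferiority holds: True
```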

  17. Well known outstanding geoid and relief depressions as regular wave woven features on Earth (Indian geoid minimum), Moon (SPA basin), Phobos (Stickney crater), and Miranda (an ovoid).

    NASA Astrophysics Data System (ADS)

    Kochemasov, Gennady G.

    2010-05-01

The interpretation of the deepest and largest depressions on the Moon and Phobos as impact features is unreliable and raises many questions. A real scientific understanding of their origin should take into consideration the fact that they occupy a tectonic position similar to that of a comparable depression on a heavenly body as different in size, composition, and density as Earth. On Earth, as on other celestial bodies, there is a fundamental division into two segments - hemispheres produced by an interference of standing warping wave 1 (long 2πR) of four directions [1]. One hemisphere is uplifted (continental, highlands) and the opposite one subsided (oceanic, lowlands). Tectonic features made by wave 2 (sectors) adorn this fundamental structure. Thus, on the risen continental segment appear regularly disposed sectors, also uplifted and subsided. On the Earth's eastern continental hemisphere they are grouped around the Pamirs-Hindukush vertex of the structural octahedron made by interfering waves 2. Two risen sectors (the highly uplifted African and the opposite uplifted Asian) are separated by two fallen sectors (the subsided Eurasian and the opposite deeply subsided Indoceanic). The Indoceanic sector, with the subsided Indian tectonic granule (πR/4-structure) superposed on it, produces the deepest geoid minimum of Earth (-112 m). The Moon demonstrates its own geoid minimum of the same relative size and in a similar sectoral tectonic position - the SPA basin [2, 3]. This basin represents a deeply subsided sector of the sectoral structure around Mare Orientale (one of the vertices of the lunar structural octahedron). Four sectors converge to this Mare: two subsided - the SPA basin and the opposite Procellarum Ocean - and two uplifted, which we call the "Africanda sector" and the opposite "Antiafricanda sector" to stress the structural similarity with Earth [2].
The highest "Africanda sector" is built of light anorthosites; enrichment with Na makes them even less dense, as required by this sector's highest elevation. The Procellarum Ocean is filled with basalts and Ti-basalts. The SPA basin must be filled with even denser rocks; one expects here feldspar-free, pyroxene-enriched rocks with some admixture of Fe metal and troilite. The spectral observations of Carle Pieters [4] confirm orthopyroxene enrichment and the absence of feldspar. The enigmatic large and deep depression of crater Stickney on Phobos, with an appropriate scale adjustment to the much larger Earth and Moon, occupies a structural position similar to the Indian geoid minimum and the SPA basin. Such a situation cannot be random and argues for a common origin of these remarkable tectonic features on such different celestial bodies. This conclusion is reinforced by taking for comparison another small heavenly body, the Uranus satellite Miranda. Imaged by the Voyager 2 spacecraft in 1986, it shows two kinds of terrains (PIA01980 & others). Subsided provinces (ovoids) characterized by intensive curvilinear folding and faulting interrupt uplifted, densely cratered old provinces. One of the deeply subsided ovoids with a curvilinear fold pattern (compression under subsidence) fits perfectly into a sector boundary. References: [1] Kochemasov G. (1999) Theorems of wave planetary tectonics // Geophys. Res. Abstr., V.1, #3, 700. [2] Kochemasov G.G. (1998) The Moon: Earth-type sectoral tectonics, relief and relevant chemical features // The 3rd International Conference on Exploration and Utilization of the Moon, Oct. 11-14, 1998, Moscow, Russia, Abstracts, p. 29. [3] Kochemasov G.G. (1998) Moon-Earth: similarity of sectoral organization // 32nd COSPAR Scientific Assembly, Nagoya, Japan, 12-19 July 1998, Abstracts, p. 77. [4] Pieters C. (1997) Annales Geophys., v. 15, pt. III, p. 792.

  18. Response to selection, heritability and genetic correlations between body weight and body size in Pacific white shrimp, Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Andriantahina, Farafidy; Liu, Xiaolin; Huang, Hao; Xiang, Jianhai

    2012-03-01

To quantify the response to selection, heritability and genetic correlations between weight and size of Litopenaeus vannamei, the body weight (BW), total length (TL), body length (BL), first abdominal segment depth (FASD), third abdominal segment depth (TASD), first abdominal segment width (FASW), and partial carapace length (PCL) of 5-month-old parents and of offspring were measured, taking seven body measurements of offspring produced by a nested mating design. Seventeen half-sib families and 42 full-sib families of L. vannamei were produced using artificial fertilization from 2-4 dams by each sire, and measured at around five months post-metamorphosis. The results show that heritabilities among various traits were high: 0.515±0.030 for body weight and 0.394±0.030 for total length. After one generation of selection, the selection response was 10.70% for offspring growth. In the 5th month, the realized heritability for weight was 0.296 for the offspring generation. Genetic correlations between body weight and body size were highly variable. The results indicate that external morphological parameters can be applied during breeder selection for enhancing growth without sacrificing animals to determine body size and breeding ability, and that selective breeding can be improved significantly, simultaneously with increased production.

  19. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers.

    PubMed

    Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai

    2015-06-30

    Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, were normal peripheral blood slides under light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses white blood cell's morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combined thresholding, morphological operation and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets. 
Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to the ones obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with dice similarity of 98.9 and 91.6% for average segmented nucleus and cell regions, respectively. Furthermore, the overall correction rate in the classification phase is about 98 and 94% for linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The results of the calibrated segmentation process on both datasets are fast, robust, efficient and coherent. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results in the linear classifier.
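The dice similarity figures quoted above come from a standard overlap formula between an automatic mask and a manual reference mask. A minimal sketch, with illustrative masks rather than data from the study:

```python
import numpy as np

# Dice = 2|A ∩ B| / (|A| + |B|), in [0, 1]; 1.0 means perfect agreement.
def dice(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    a = auto_mask.astype(bool)
    b = ref_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.array([[1, 1, 0], [1, 0, 0]])  # toy automatic segmentation
ref = np.array([[1, 1, 0], [0, 0, 0]])   # toy manual reference
print(round(dice(auto, ref), 3))  # 2*2/(3+2) = 0.8
```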

  20. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.

  1. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification

    PubMed Central

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows. PMID:28125723

  2. Video segmentation using keywords

    NASA Astrophysics Data System (ADS)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground. Creating these frames accurately takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing, in combination with other methods, to improve this result in the future.

  3. Tensor scale-based fuzzy connectedness image segmentation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    2003-05-01

Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - in affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.

  4. Size-dependent trophic patterns of pallid sturgeon and shovelnose sturgeon in a large river system

    USGS Publications Warehouse

    French, William E.; Graeb, Brian D. S.; Bertrand, Katie N.; Chipps, Steven R.; Klumb, Robert A.

    2013-01-01

    This study compared patterns of δ15N and δ13C enrichment of pallid sturgeon Scaphirhynchus albus and shovelnose sturgeon S. platorynchus in the Missouri River, United States, to infer their trophic position in a large river system. We examined enrichment and energy flow for pallid sturgeon in three segments of the Missouri River (Montana/North Dakota, Nebraska/South Dakota, and Nebraska/Iowa) and made comparisons between species in the two downstream segments (Nebraska/South Dakota and Nebraska/Iowa). Patterns in isotopic composition for pallid sturgeon were consistent with gut content analyses indicating an ontogenetic diet shift from invertebrates to fish prey at sizes of >500-mm fork length (FL) in all three segments of the Missouri River. Isotopic patterns revealed shovelnose sturgeon did not experience an ontogenetic shift in diet and used similar prey resources as small (<500-mm FL) pallid sturgeon in the two downstream segments. We found stable isotope analysis to be an effective tool for evaluating the trophic position of sturgeons within a large river food web.

  5. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.

Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively.
The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were, respectively, 2.48 ± 2.19 and 8.32 ± 5.66 mm, whereas these distances were 1.66 ± 1.29 and 5.75 ± 4.88 mm between the segmentations by the reference reader and the independent observer, respectively. The automatic segmentations were also visually assessed by two trained CXR readers as “excellent,” “adequate,” or “insufficient.” The readers had good agreement in assessing the cavity outlines and 84% of the segmentations were rated as “excellent” or “adequate” by both readers. Conclusions: The proposed cavity segmentation technique produced results with a good degree of overlap with manual expert segmentations. The evaluation measures demonstrated that the results approached the results of the experienced chest radiologists, in terms of overlap measure and contour distance measures. Automatic cavity segmentation can be employed in TB clinics for treatment monitoring, especially in resource limited settings where radiologists are not available.
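The Jaccard overlap Ω used in the evaluation above is the intersection-over-union of two binary masks. A minimal sketch with toy masks rather than study data:

```python
import numpy as np

# Ω = |A ∩ B| / |A ∪ B|, in [0, 1]; 1.0 means the masks coincide exactly.
def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(bool)
    b = b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

a = np.array([1, 1, 1, 0])  # toy automatic cavity mask
b = np.array([0, 1, 1, 1])  # toy reference mask
print(jaccard(a, b))  # intersection 2, union 4 -> 0.5
```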

  6. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    PubMed

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have similar survival rates to lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. In consequence, today segmentectomy is only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions from 11 thin-slice CT scans of bronchopulmonary segments were generated, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using a three-dimensional reconstruction of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.

  7. New Stopping Criteria for Segmenting DNA Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wentian

    2001-06-18

We propose a solution to the stopping-criterion problem in segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
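The BIC-based stopping idea can be illustrated on a toy binary sequence: a candidate split is kept only if its gain in log-likelihood outweighs the BIC penalty for the extra parameters. This is a hedged sketch of the general principle, not the paper's implementation (counting 3 parameters for the split model, two segment probabilities plus the breakpoint, is one common convention):

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

def bernoulli_ll(ones: int, n: int) -> float:
    """Maximized log-likelihood of an i.i.d. Bernoulli model on n symbols."""
    if ones in (0, n):
        return 0.0
    p = ones / n
    return ones * math.log(p) + (n - ones) * math.log(1 - p)

def accept_split(seq: str, cut: int) -> bool:
    """Accept the split only if the two-segment model has lower BIC."""
    n = len(seq)
    whole = bic(bernoulli_ll(seq.count("1"), n), 1, n)
    left, right = seq[:cut], seq[cut:]
    split = bic(bernoulli_ll(left.count("1"), len(left))
                + bernoulli_ll(right.count("1"), len(right)), 3, n)
    return split < whole

# A strongly inhomogeneous sequence: the split is accepted.
print(accept_split("1111111111" + "0000000000", 10))  # True
```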

  8. Application of indigenous sulfur-oxidizing bacteria from municipal wastewater to selectively bioleach phosphorus from high-phosphorus iron ore: effect of particle size.

    PubMed

    Shen, Shaobo; Rao, Ruirui; Wang, Jincao

    2013-01-01

The effects of ore particle size on selectively bioleaching phosphorus (P) from high-phosphorus iron ore were studied. The average contents of P and Fe in the iron ore were 1.06 and 47.90% (w/w), respectively. The particle sizes of the ores used ranged from 58 to 3350 μm. It was found that the indigenous sulfur-oxidizing bacteria from municipal wastewater could grow well in slurries of solid high-phosphorus iron ore and municipal wastewater. The minimum bioleaching pH reached in the current work was 0.33. The P content in bioleached iron ore decreased slightly with decreasing particle size, while the removal percentage of Fe decreased appreciably with decreasing particle size. The optimal particle size fraction was 58-75 μm: in this case the P content in bioleached iron ore reached a minimum of 0.16% (w/w), the removal percentage of P attained a maximum of 86.7%, while the removal percentage of Fe dropped to a minimum of 1.3% and the Fe content in bioleached iron ore reached a maximum of 56.4% (w/w). The iron ores thus obtained were suitable for use in the iron-making process. The removal percentage of ore solid decreased with decreasing particle size in the particle size range of 106-3350 μm. Possible reasons for these phenomena were explored in the current work. It was inferred that the particle sizes of the iron ore used in this work had no significant effect on the viability of the sulfur-oxidizing bacteria.

  9. Segmentation of left atrial intracardiac ultrasound images for image guided cardiac ablation therapy

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Stephens, T.; Holmes, D. R.; Linte, C.; Packer, D. L.; Robb, R. A.

    2013-03-01

Intracardiac echocardiography (ICE), a technique in which structures of the heart are imaged using a catheter navigated inside the cardiac chambers, is an important imaging technique for guidance in cardiac ablation therapy. Automatic segmentation of these images is valuable for guidance and targeting of treatment sites. In this paper, we describe an approach to segment ICE images by generating an empirical model of blood pool and tissue intensities. Normal, Weibull, Gamma, and Generalized Extreme Value (GEV) distributions are fit to histograms of tissue and blood pool pixels from a series of ICE scans. A total of 40 images from 4 separate studies were evaluated. The model was trained and tested using two approaches. In the first approach, the model was trained on all images from 3 studies and subsequently tested on the 40 images from the 4th study. This procedure was repeated 4 times using a leave-one-out strategy. This is termed the between-subjects approach. In the second approach, the model was trained on 10 randomly selected images from a single study and tested on the remaining 30 images in that study. This is termed the within-subjects approach. For both approaches, the model was used to automatically segment ICE images into blood and tissue regions. Each pixel is classified using the Generalized Likelihood Ratio Test across neighborhood sizes ranging from 1 to 49. Automatic segmentation results were compared against manual segmentations for all images. In the between-subjects approach, the GEV distribution using a neighborhood size of 17 was found to be the most accurate with a misclassification rate of approximately 17%. In the within-subjects approach, the GEV distribution using a neighborhood size of 19 was found to be the most accurate with a misclassification rate of approximately 15%. As expected, the majority of misclassified pixels were located near the boundaries between tissue and blood pool regions for both methods.
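The classification idea described above can be sketched with SciPy: fit a GEV distribution to blood-pool and tissue intensity samples, then label a pixel neighborhood by comparing likelihoods under the two fitted models. This is a hedged illustration on synthetic intensities, not the authors' code or data:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic training intensities (illustrative assumption: dark blood pool,
# brighter tissue); real models would be fit to histogrammed ICE pixels.
rng = np.random.default_rng(0)
blood = rng.normal(40, 8, 1000)
tissue = rng.normal(120, 15, 1000)

blood_params = genextreme.fit(blood)    # (shape, loc, scale)
tissue_params = genextreme.fit(tissue)

def classify(neighborhood: np.ndarray) -> str:
    """Label a neighborhood by its summed log-likelihood under each GEV."""
    ll_blood = genextreme.logpdf(neighborhood, *blood_params).sum()
    ll_tissue = genextreme.logpdf(neighborhood, *tissue_params).sum()
    return "blood" if ll_blood > ll_tissue else "tissue"

# A 17-pixel neighborhood at the blood-pool mode vs. the tissue mode:
print(classify(np.full(17, 40.0)))
print(classify(np.full(17, 120.0)))
```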

  10. Multiple neutral density measurements in the lower thermosphere with cold-cathode ionization gauges

    NASA Astrophysics Data System (ADS)

    Lehmacher, G. A.; Gaulden, T. M.; Larsen, M. F.; Craven, J. D.

    2013-01-01

Cold-cathode ionization gauges were used for rocket-borne measurements of total neutral density and temperature in the aurorally forced lower thermosphere between 90 and 200 km. A commercial gauge was adapted as a low-cost instrument with a spherical antechamber for measurements in molecular flow conditions. Three roll-stabilized payloads on different trajectories each carried two instruments for measurements near the ram flow direction along the respective upleg and downleg segments of a flight path, and six density profiles were obtained within a period of 22 min covering spatial separations up to 200 km. The density profiles were integrated below 125 km to yield temperatures. The mean temperature structure was similar for all six profiles, with two mesopause minima near 110 and 101 km; however, for the downleg profiles, the upper minimum was warmer and the lower minimum was colder by 20-30 K, indicating significant variability over horizontal scales of 100-200 km. The upper temperature minimum coincided with maximum horizontal wind speeds, exceeding 170 m/s.

  11. New presentation method for magnetic resonance angiography images based on skeletonization

    NASA Astrophysics Data System (ADS)

    Nystroem, Ingela; Smedby, Orjan

    2000-04-01

Magnetic resonance angiography (MRA) images are usually presented as maximum intensity projections (MIP), and the choice of viewing direction is then critical for the detection of stenoses. We propose a presentation method that uses skeletonization and distance transformations, which visualizes variations in vessel width independent of viewing direction. In the skeletonization, the object is reduced to a surface skeleton and further to a curve skeleton. The skeletal voxels are labeled with their distance to the original background. For the curve skeleton, the distance values correspond to the minimum radius of the object at that point, i.e., half the minimum diameter of the blood vessel at that level. The following image processing steps are performed: resampling to cubic voxels, segmentation of the blood vessels, skeletonization, and reverse distance transformation on the curve skeleton. The reconstructed vessels may be visualized with any projection method. Preliminary results are shown. They indicate that locations of possible stenoses may be identified by presenting the vessels as a structure with the minimum radius at each point.
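The distance-labeling step above can be sketched with SciPy: each vessel voxel is labeled with its Euclidean distance to the background, so values along the centerline approximate the local vessel radius (half the minimum diameter). The straight synthetic "vessel" here stands in for a segmented MRA volume; skeletonization itself is omitted:

```python
import numpy as np
from scipy import ndimage

vessel = np.zeros((7, 20), dtype=bool)
vessel[2:5, :] = True  # a vessel 3 pixels wide, centered on row 3

# Euclidean distance from every vessel pixel to the nearest background pixel.
dist = ndimage.distance_transform_edt(vessel)
print(dist[3, 10])  # centerline pixel: 2.0 pixels to the nearest background
print(dist[2, 10])  # wall pixel: 1.0 pixel to the nearest background
```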

  12. Sample size of the reference sample in a case-augmented study.

    PubMed

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, a hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  13. 50 CFR 622.48 - Adjustment of management measures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... biomass achieved by fishing at MSY (BMSY) (or proxy), maximum fishing mortality threshold (MFMT), minimum... biomass achieved by fishing at MSY (BMSY), minimum stock size threshold (MSST), and maximum fishing.... MSY, OY, and TAC. (f) South Atlantic snapper-grouper and wreckfish. Biomass levels, age-structured...

  14. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.

    PubMed

    Yuan, Yading; Chao, Ming; Lo, Yeh-Chi

    2017-09-01

Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and varying image acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation based on a 19-layer deep convolutional neural network that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross-entropy as the loss function for image segmentation due to the strong imbalance between the numbers of foreground and background pixels. We evaluated the effectiveness, efficiency, and generalization capability of the proposed framework on two publicly available databases: one from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general and needs only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
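A loss of the kind described, one minus the soft intersection-over-union, can be sketched in a few lines of numpy. The smoothing constant and exact formulation here are illustrative assumptions, not necessarily the authors' precise form:

```python
import numpy as np

# Jaccard-distance-style loss on soft predictions vs. binary ground truth;
# eps guards against division by zero for empty masks.
def jaccard_distance_loss(y_true: np.ndarray, y_pred: np.ndarray,
                          eps: float = 1e-7) -> float:
    inter = (y_true * y_pred).sum()
    union = y_true.sum() + y_pred.sum() - inter
    return float(1.0 - (inter + eps) / (union + eps))

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = np.array([1.0, 1.0, 0.0, 0.0])
poor = np.array([0.1, 0.2, 0.9, 0.8])
print(jaccard_distance_loss(y_true, perfect))       # 0.0: perfect overlap
print(jaccard_distance_loss(y_true, poor) > 0.5)    # True: heavy penalty
```

Unlike per-pixel cross-entropy, this loss is computed on the overlap ratio itself, so the abundant background pixels cannot dominate the gradient, which is why no sample re-weighting is needed.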

  15. 77 FR 5454 - Modifications to Minimum Present Value Requirements for Partial Annuity Distribution Options...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-03

    ... December 2012 segment rates would have been $1,000 x 12 x the deferred annuity factor of 6.558, or $78,696..., which represents 12.71% of the single sum value of W's full accrued benefit ($10,000 / $78,696 = 12.71... a 100% joint-and-survivor annuity beginning at age 55 is no less than 87.29% x $800, or $698.32 per...

  16. Performance evaluation of power transmission coils for powering endoscopic wireless capsules.

    PubMed

    Basar, Md Rubel; Ahmad, Mohd Yazed; Cho, Jongman; Ibrahim, Fatimah

    2015-01-01

    This paper presents an analysis of the H-field generated by a simple solenoid, a pair of solenoids, a pair of double-layer solenoids, a segmented solenoid, and Helmholtz power transmission coils (PTCs) used to power an endoscopic wireless capsule (WC). The H-fields were computed using finite element analysis based on partial differential equations. Three parameters were considered in the analysis: i) the maximum level of H-field (Hmax) to which the patient's body would be exposed, ii) the minimum level of H-field (Hmin) effective for power transmission, and iii) the uniformity of the H-field. We validated our analysis by comparing the computed data with data measured from a fabricated Helmholtz PTC. The analysis showed that, at the same excitation power, all the PTCs can transfer the same amount of minimum usable power, since they generate almost equal values of Hmin. However, the level of electromagnetic exposure and the stability of power transfer vary significantly across the PTCs, mainly because of differences in Hmax and H-field uniformity. The segmented-solenoid PTC causes the lowest exposure and can transfer the largest amount of power, while the Helmholtz PTC transfers the most stable power with a moderate level of exposure.

  17. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits.

    PubMed

    Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G

    2017-12-05

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions, leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently account for such heterogeneity of (co)variances can achieve improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs) from 50K SNP arrays were grouped into non-overlapping genome segments. A segment was defined as one SNP, a group of 50, 100, or 200 adjacent SNPs, one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using a deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models, and this gain depended on segment size and the genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters, and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
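
    The segment definitions used here (one SNP; 50, 100, or 200 adjacent SNPs; one chromosome; the whole genome) amount to chunking an ordered SNP list into non-overlapping windows. A small illustrative sketch, with function and SNP names invented for illustration:

```python
def group_snps(snp_ids, segment_size):
    """Partition an ordered list of SNP identifiers into non-overlapping
    segments of `segment_size` adjacent SNPs; the last segment may be
    shorter when the SNP count does not divide evenly."""
    return [snp_ids[i:i + segment_size]
            for i in range(0, len(snp_ids), segment_size)]

snps = ["snp%d" % i for i in range(450)]
segments = group_snps(snps, 100)
print(len(segments))      # 5 segments: four of 100 SNPs plus one of 50
print(len(segments[-1]))  # 50
```

    In the BayesAS models, each such segment then receives its own (co)variance parameters rather than sharing one genome-wide value.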

  18. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.

    PubMed

    Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D

    2008-05-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) were used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to the interplay effect was negligible.
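
    The contrast between average-based and segment-based convolution can be sketched on a 1-D dose profile. This is a schematic NumPy illustration with made-up numbers, not the authors' implementation: each segment's static dose is blurred with that segment's own motion PDF, versus blurring the summed static dose with the fraction-averaged PDF.

```python
import numpy as np

def blur_with_motion(static_dose, motion_pdf):
    """Convolve a 1-D static dose profile with a motion probability
    density function (PDF); the PDF is normalized to sum to 1 and
    'same' mode keeps the profile length."""
    pdf = np.asarray(motion_pdf, dtype=float)
    return np.convolve(static_dose, pdf / pdf.sum(), mode="same")

# Two segments with different static doses and different motion PDFs.
segment_doses = [np.array([0., 2., 4., 2., 0.]),
                 np.array([0., 1., 3., 1., 0.])]
segment_pdfs = [np.array([0.2, 0.6, 0.2]),   # motion during segment 1
                np.array([0.6, 0.3, 0.1])]   # motion during segment 2

# Segment-based: blur each segment with its own PDF, then sum.
segment_based = sum(blur_with_motion(d, p)
                    for d, p in zip(segment_doses, segment_pdfs))

# Average-based: blur the summed static dose with the pooled PDF.
average_based = blur_with_motion(sum(segment_doses),
                                 segment_pdfs[0] + segment_pdfs[1])
```

    Both versions deliver the same total dose, but their spatial distributions differ; that difference is the interplay effect the segment-based method captures.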

  19. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, whereas previous models could describe them only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated the MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.

  20. 78 FR 3923 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... impact of tick sizes on small and middle capitalization companies, the economic consequences (including the costs and benefits) of increasing or decreasing minimum tick sizes, and whether other policy... the second panel will address the impact of tick sizes on the securities market in general, including...

  1. Protein resistance efficacy of PEO-silane amphiphiles: Dependence on PEO-segment length and concentration

    PubMed Central

    Rufin, Marc A.; Barry, Mikayla E.; Adair, Paige A.; Hawkins, Melissa L.; Raymond, Jeffery E.; Grunlan, Melissa A.

    2016-01-01

    In contrast to modification with conventional PEO-silanes (i.e. no siloxane tether), silicones with dramatically enhanced protein resistance have previously been achieved via bulk modification with poly(ethylene oxide) (PEO)-silane amphiphiles α-(EtO)3Si(CH2)2-oligodimethylsiloxane13-block-PEOn-OCH3 when n = 8 and 16, but not when n = 3. In this work, their efficacy was evaluated in terms of the optimal PEO-segment length and the minimum concentration required in silicone. For each PEO-silane amphiphile (n = 3, 8, and 16), five concentrations (5, 10, 25, 50, and 100 μmol per 1 g silicone) were evaluated. Efficacy was quantified in terms of the modified silicones' ability to undergo rapid, water-driven surface restructuring to form hydrophilic surfaces as well as their resistance to fibrinogen adsorption. Only n = 8 and 16 were effective, with a lower minimum concentration in silicone required for n = 8 (10 μmol per 1 g silicone) versus n = 16 (25 μmol per 1 g silicone).

    Statement of Significance: Silicone is commonly used for implantable medical devices, but its hydrophobic surface promotes protein adsorption, which leads to thrombosis and infection. Typical methods to incorporate poly(ethylene oxide) (PEO) into silicones have not been effective due to the poor migration of PEO to the surface-biological interface. In this work, PEO-silane amphiphiles, comprised of a siloxane tether (m = 13) and variable PEO segment lengths (n = 3, 8, 16), were blended into silicone to improve its protein resistance. The efficacy of the amphiphiles was determined to be dependent on PEO length. With the intermediate PEO length (n = 8), water-driven surface restructuring and the resulting protein resistance were achieved at a concentration of only 1.7 wt%. PMID:27090588

  2. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The fixed-input-block-size submethod is useful for applications that require rate-compatible codes with fixed input block sizes; these are codes in which only the number of parity bits is allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of the affected communication systems. An example of such a system is one that conforms to one of the many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
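
    The fixed-input-block-size idea, in which the information block stays at k bits and only the number of parity bits varies, implies a simple family of rates R = k / (k + p). A trivial illustration; the block size and parity counts below are arbitrary examples, not values from the NASA construction:

```python
def code_rate(k, parity_bits):
    """Rate of a systematic code with k information bits and a variable
    number of parity bits p: R = k / (k + p)."""
    return k / (k + parity_bits)

# A fixed input block of k = 1024 bits with varying parity yields a
# rate-compatible family: more parity, lower rate, stronger protection.
for p in (256, 512, 1024, 3072):
    print("p=%4d  rate=%.3f" % (p, code_rate(1024, p)))
```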

  3. GPU-based relative fuzzy connectedness image segmentation.

    PubMed

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  4. GPU-based relative fuzzy connectedness image segmentation

    PubMed Central

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  5. Image segmentation by hierarchical agglomeration of polygons using ecological statistics

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.
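
    The merge step described above, in which Boolean predicates over size, color, and boundary saliency decide whether two adjacent polygons fuse, can be caricatured in a few lines. All attribute names and thresholds below are invented for illustration and are far simpler than the patented method:

```python
def should_merge(a, b, shared_saliency, color_tol=12.0, saliency_tol=0.5):
    """Toy Boolean merge predicate for two adjacent polygons (dicts with
    'mean_color' and 'size'): merge when their colors are similar AND
    the intervening boundary is weak (low saliency)."""
    similar_color = abs(a["mean_color"] - b["mean_color"]) <= color_tol
    weak_boundary = shared_saliency <= saliency_tol
    return similar_color and weak_boundary

def merge(a, b):
    """Fuse two polygons into a coarser one with area-weighted color."""
    size = a["size"] + b["size"]
    color = (a["mean_color"] * a["size"] + b["mean_color"] * b["size"]) / size
    return {"mean_color": color, "size": size}

p1 = {"mean_color": 100.0, "size": 40}
p2 = {"mean_color": 104.0, "size": 60}
merged = merge(p1, p2) if should_merge(p1, p2, shared_saliency=0.2) else None
print(merged)  # {'mean_color': 102.4, 'size': 100}
```

    Applying this test to all adjacent pairs at one level, and re-running it on the merged polygons, yields the fine-to-coarse hierarchy of segmentations.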

  6. Project W-320, 241-C-106 sluicing electrical calculations, Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. These calculations are required: To determine the power requirements needed to power electrical heat tracing segments contained within three manufactured insulated tubing assemblies; To verify thermal adequacy of tubing assembly selection by others; To size the heat tracing feeder and branch circuit conductors and conduits; To size protective circuit breakers and fuses; and To accomplish thermal design for two electrical heat tracing segments: One at C-106 tank riser 7 (CCTV) and one at the exhaust hatchway (condensate drain). Contents include: C-Farm electrical heat tracing; Cable ampacity, lighting, conduit fill and voltage drop; and Control circuit sizing and voltage drop analysis for the seismic shutdown system.

  7. Menu Plans: Maximum Nutrition for Minimum Cost.

    ERIC Educational Resources Information Center

    Texas Child Care, 1995

    1995-01-01

    Suggests that menu planning is the key to getting maximum nutrition in day care meals and snacks for minimum cost. Explores United States Department of Agriculture food pyramid guidelines for children and tips for planning menus and grocery shopping. Includes suggested meal patterns and portion sizes. (HTH)

  8. MINIMUM AREAS FOR ELEMENTARY SCHOOL BUILDING FACILITIES.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Public Instruction, Harrisburg.

    MINIMUM AREA SPACE REQUIREMENTS IN SQUARE FOOTAGE FOR ELEMENTARY SCHOOL BUILDING FACILITIES ARE PRESENTED, INCLUDING FACILITIES FOR INSTRUCTIONAL USE, GENERAL USE, AND SERVICE USE. LIBRARY, CAFETERIA, KITCHEN, STORAGE, AND MULTIPURPOSE ROOMS SHOULD BE SIZED FOR THE PROJECTED ENROLLMENT OF THE BUILDING IN ACCORDANCE WITH THE PROJECTION UNDER THE…

  9. Segmental maxillary distraction with a novel device for closure of a wide alveolar cleft

    PubMed Central

    Bousdras, Vasilios A.; Liyanage, Chandra; Mars, Michael; Ayliffe, Peter R

    2014-01-01

    Treatment of a wide alveolar cleft with initial application of segmental distraction osteogenesis is reported, with the aim of minimising cleft size prior to secondary alveolar bone grafting. The lesser maxillary segment was mobilised with an osteotomy at the Le Fort I level, and a novel distractor facilitated horizontal movement of the dental/alveolar segment along the curvature of the maxillary dental arch. Following a latency period of 4 days, distraction was applied for 7 days at a rate of 0.5 mm twice daily. Radiographic, ultrasonographic and clinical assessment revealed new bone and soft tissue formation 8 weeks after completion of the distraction phase. Overall, the maxillary segment moved, minimising the width of the cleft, which allowed successful closure with a secondary alveolar bone graft. PMID:24987601

  10. Segmental maxillary distraction with a novel device for closure of a wide alveolar cleft.

    PubMed

    Bousdras, Vasilios A; Liyanage, Chandra; Mars, Michael; Ayliffe, Peter R

    2014-01-01

    Treatment of a wide alveolar cleft with initial application of segmental distraction osteogenesis is reported, with the aim of minimising cleft size prior to secondary alveolar bone grafting. The lesser maxillary segment was mobilised with an osteotomy at the Le Fort I level, and a novel distractor facilitated horizontal movement of the dental/alveolar segment along the curvature of the maxillary dental arch. Following a latency period of 4 days, distraction was applied for 7 days at a rate of 0.5 mm twice daily. Radiographic, ultrasonographic and clinical assessment revealed new bone and soft tissue formation 8 weeks after completion of the distraction phase. Overall, the maxillary segment moved, minimising the width of the cleft, which allowed successful closure with a secondary alveolar bone graft.

  11. Nephron segment-specific gene expression using AAV vectors.

    PubMed

    Asico, Laureano D; Cuevas, Santiago; Ma, Xiaobo; Jose, Pedro A; Armando, Ines; Konkalmatt, Prasad R

    2018-02-26

    AAV9 vector provides efficient gene transfer in all segments of the renal nephron, with minimum expression in non-renal cells, when administered retrogradely via the ureter. It is important to restrict the transgene expression to the desired cell type within the kidney, so that the physiological endpoints represent the function of the transgene expressed in that specific cell type within kidney. We hypothesized that segment-specific gene expression within the kidney can be accomplished using the highly efficient AAV9 vectors carrying the promoters of genes that are expressed exclusively in the desired segment of the nephron in combination with administration by retrograde infusion into the kidney via the ureter. We constructed AAV vectors carrying eGFP under the control of: kidney-specific cadherin (KSPC) gene promoter for expression in the entire nephron; Na+/glucose co-transporter (SGLT2) gene promoter for expression in the S1 and S2 segments of the proximal tubule; sodium-potassium-2-chloride co-transporter (NKCC2) gene promoter for expression in the thick ascending limb of Henle's loop (TALH); E-cadherin (ECAD) gene promoter for expression in the collecting duct (CD); and cytomegalovirus (CMV) early promoter that provides expression in most of the mammalian cells, as control. We tested the specificity of the promoter constructs in vitro for cell type-specific expression in mouse kidney cells in primary culture, followed by retrograde infusion of the AAV vectors via the ureter in the mouse. Our data show that AAV9 vector, in combination with the segment-specific promoters administered by retrograde infusion via the ureter, provides renal nephron segment-specific gene expression. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  12. A model-based approach for estimation of changes in lumbar segmental kinematics associated with alterations in trunk muscle forces.

    PubMed

    Shojaei, Iman; Arjmand, Navid; Meakin, Judith R; Bazrgari, Babak

    2018-03-21

    Kinematic information from imaging, if combined with optimization-based biomechanical models, may provide a unique platform for personalized assessment of trunk muscle forces (TMFs). Such a method, however, is feasible only if differences in lumbar spine kinematics due to differences in TMFs can be captured by current imaging techniques. A finite element model of the spine within an optimization procedure was used to estimate segmental kinematics of the lumbar spine associated with five different sets of TMFs. Each set of TMFs was associated with a hypothetical trunk neuromuscular strategy that optimized one aspect of lower back biomechanics. For each set of TMFs, the segmental kinematics of the lumbar spine was estimated for a single static trunk-flexed posture involving, respectively, 40° and 10° of thoracic and pelvic rotations. Minimum changes in the angular and translational deformations of a motion segment with alterations in TMFs ranged from 0° to 0.7° and 0 mm to 0.04 mm, respectively. Maximum changes in the angular and translational deformations of a motion segment with alterations in TMFs ranged from 2.4° to 7.6° and 0.11 mm to 0.39 mm, respectively. The differences in kinematics of lumbar segments between each combination of two sets of TMFs in 97% of cases for angular deformation and 55% of cases for translational deformation were within the reported accuracy of current imaging techniques. Therefore, it might be possible to use image-based kinematics of lumbar segments along with computational modeling for personalized assessment of TMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Could offset clusters reveal strong earthquake patterns? A case study from the Haiyuan Fault

    NASA Astrophysics Data System (ADS)

    Ren, Z.; Zhang, Z.; Chen, T.; Yin, J.; Zhang, P. Z.; Zheng, W.; Zhang, H.; Li, C.

    2016-12-01

    Since the 1990s, researchers have tried to use offset clusters to study strong earthquake patterns. However, owing to the limited quantity of offset data, the approach was not widely used until recent years, with the rapid development of high-resolution topographic data such as remote sensing imagery and LiDAR. In this study, we use airborne LiDAR data to re-evaluate the cumulative offsets and the co-seismic offset of the 1920 Haiyuan Ms 8.5 earthquake along the western and middle segments of the co-seismic surface rupture zone. Our LiDAR data indicate that the offset observations along both the western and middle segments fall into five groups. The group with the minimum slip amount is associated with the 1920 Haiyuan Ms 8.5 earthquake, which ruptured both the western and middle segments. Our research highlights two new interpretations: first, the previously reported maximum displacement of the 1920 earthquake is likely to have been produced by at least two earthquakes; second, the Cumulative Offset Probability Density (COPD) peaks of the same offset amount on the western and middle segments do not correspond to each other one to one. The ages of the paleoearthquakes indicate that the offsets did not accumulate during the same period. We suggest that any discussion of the rupture pattern of a fault based on offset data should also consider fault segmentation and paleoseismological data; therefore, when using COPD peaks to study the number of palaeo-events and their rupture patterns, the peaks should be computed and analyzed on fault sub-sections rather than on entire fault zones. Our results reveal that the rupture patterns on the western and middle segments of the Haiyuan Fault differ from each other, which provides new data for regional seismic potential analysis.

  14. Determining the maximum diameter for holes in the shoe without compromising shoe integrity when using a multi-segment foot model.

    PubMed

    Shultz, Rebecca; Jenkyn, Thomas

    2012-01-01

    Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system, holes or 'windows' must be cut into the structure of the shoe. The holes must be large enough to avoid interfering with the markers, but small enough that they do not compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Four foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to the first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes, with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7 cm × 2.5 cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was in forefoot twist for both the neutral shoe and the stability shoe, at any hole size. This study demonstrates that, for holes no larger than this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo. Copyright © 2011. Published by Elsevier Ltd.

  15. Job-Transitions in the Administrative Labor Market in Higher Education: Some Methodological Considerations.

    ERIC Educational Resources Information Center

    Smolansky, Bettie M.

    The question of whether the market for administrators is segmented by institutional types (i.e., region, affiliation, size, mission, and resource level) was investigated. One facet of the research was the applicability of segmentation theory to the occupational labor market for college managers. Principal data were provided by career histories of…

  16. Valley segments, stream reaches, and channel units [Chapter 2

    Treesearch

    Peter A. Bisson; David R. Montgomery; John M. Buffington

    2006-01-01

    Valley segments, stream reaches, and channel units are three hierarchically nested subdivisions of the drainage network (Frissell et al. 1986), falling in size between landscapes and watersheds (see Chapter 1) and individual point measurements made along the stream network (Table 2.1; also see Chapters 3 and 4). These three subdivisions compose the habitat for large,...

  17. 48 CFR 8.1102 - Presolicitation requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...

  18. 48 CFR 8.1102 - Presolicitation requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...

  19. 48 CFR 8.1102 - Presolicitation requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...

  20. 48 CFR 8.1102 - Presolicitation requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...

  1. 48 CFR 8.1102 - Presolicitation requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...

  2. Automated detection of preserved photoreceptor on optical coherence tomography in choroideremia based on machine learning.

    PubMed

    Wang, Zhuo; Camino, Acner; Hagag, Ahmed M; Wang, Jie; Weleber, Richard G; Yang, Paul; Pennesi, Mark E; Huang, David; Li, Dengwang; Jia, Yali

    2018-05-01

    Optical coherence tomography (OCT) can demonstrate early deterioration of the photoreceptor integrity caused by inherited retinal degeneration diseases (IRDs). A machine learning method based on random forests was developed to automatically detect continuous areas of preserved ellipsoid zone structure (an easily recognizable part of the photoreceptors on OCT) in 16 eyes of patients with choroideremia (a type of IRD). Pseudopodial extensions protruding from the preserved ellipsoid zone areas are detected separately by a local active contour routine. The algorithm is implemented on en face images with minimum segmentation requirements, only needing delineation of Bruch's membrane, thus evading the inaccuracies and technical challenges associated with automatic segmentation of the ellipsoid zone in eyes with severe retinal degeneration. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
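
    The core classification step, a random forest labeling each en face pixel as preserved or degenerated ellipsoid zone, can be sketched as follows. The two synthetic features and their class separations are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel en face features (reflectance- and
# texture-like values); pixels with preserved EZ (label 1) read brighter.
# Feature definitions and separations are assumptions for illustration.
n = 2000
labels = rng.integers(0, 2, size=n)
features = np.column_stack([
    rng.normal(labels * 1.5, 1.0),  # reflectance-like feature
    rng.normal(labels * 0.8, 1.0),  # texture-like feature
])

# Train on the first 1500 pixels, evaluate on the held-out 500.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[:1500], labels[:1500])
accuracy = clf.score(features[1500:], labels[1500:])
print(f"held-out pixel accuracy: {accuracy:.2f}")
```

    In the paper the positive pixels are additionally grouped into continuous regions and refined by an active contour; this sketch covers only the per-pixel random-forest stage.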

  3. Nonsurgically managed patients with degenerative spondylolisthesis: a 10- to 18-year follow-up study.

    PubMed

    Matsunaga, S; Ijiri, K; Hayashi, K

    2000-10-01

    Controversy exists concerning the indications for surgery and choice of surgical procedure for patients with degenerative spondylolisthesis. The goals of this study were to determine the clinical course of nonsurgically managed patients with degenerative spondylolisthesis as well as the indications for surgery. A total of 145 nonsurgically managed patients with degenerative spondylolisthesis were examined annually for a minimum of 10 years follow-up evaluation. Radiographic changes, changes in clinical symptoms, and functional prognosis were surveyed. Progressive spondylolisthesis was observed in 49 patients (34%). There was no correlation between changes in clinical symptoms and progression of spondylolisthesis. The intervertebral spaces of the slipped segments were decreased significantly in size during follow-up examination in patients in whom no progression was found. Low-back pain improved following a decrease in the total intervertebral space size. A total of 84 (76%) of 110 patients who had no neurological deficits at initial examination remained without neurological deficit after 10 years of follow up. Twenty-nine (83%) of the 35 patients who had neurological symptoms, such as intermittent claudication or vesicorectal disorder, at initial examination and refused surgery experienced neurological deterioration. The final prognosis for these patients was very poor. Low-back pain was improved by restabilization. Conservative treatment is useful for patients who have low-back pain with or without pain in the lower extremities. Surgical intervention is indicated for patients with neurological symptoms including intermittent claudication or vesicorectal disorder, provided that a good functional outcome can be achieved.

  4. Modeling the Soft Geometry of Biological Membranes

    NASA Astrophysics Data System (ADS)

    Daly, K.

    This dissertation presents work done applying the techniques of physics to biological systems. The difference in length scales between the thickness of the phospholipid bilayer and the overall size of a biological cell allows the bilayer to be modeled elastically as a thin sheet. The Helfrich free energy is extended and applied to models representing various biological systems, in order to find quasi-equilibrium states as well as transitions between states. Morphologies are approximated as axially symmetric. Stable morphologies are determined analytically and through the use of computer simulation. The simple morphologies examined analytically give a model for the pearling transition seen in growing biological cells. An analytic model of cellular bulging in gram-negative bacteria predicts a critical pore radius for bulging of 20 nanometers. This model is extended to the membrane dynamics of human red blood cells, predicting three morphologic phases which are seen in vivo. A computer simulation was developed to study more complex morphologies with models representing different bilayer compositions. Single- and multi-component bilayer models reproduce morphologies previously predicted by Seifert. A mean field model representing the intrinsic curvature of proteins coupling to membrane curvature is used to explore the stability of the particular morphology of rod outer segment cells. The process of pore formation and expansion in cell-cell fusion is not well understood. Simulation of the pore created in cell-cell fusion led to the finding of a minimal pore radius required for pore expansion, suggesting pores formed in nature are formed with a minimum size.
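
    For reference, the Helfrich bending energy that such membrane models extend is conventionally written as follows (a standard form from the membrane-elasticity literature, not a formula quoted from this dissertation):

```latex
F = \int \left[ \frac{\kappa}{2}\,\left( 2H - c_0 \right)^2 + \bar{\kappa}\, K \right] \, dA
```

    where $H$ is the mean curvature, $K$ the Gaussian curvature, $c_0$ the spontaneous curvature, and $\kappa$, $\bar{\kappa}$ the bending and saddle-splay moduli; the integral runs over the membrane surface.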

  5. 46 CFR 108.437 - Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Pipe sizes and discharge rates for enclosed ventilation... Systems Fixed Carbon Dioxide Fire Extinguishing Systems § 108.437 Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment. (a) The minimum pipe size for the initial...

  6. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
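
    The region-growing half of the approach can be illustrated with a minimal intensity-tolerance version. The actual paper couples region growing with a random walker and user steering; this sketch assumes a single seed and a fixed tolerance.

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol=0.1):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity is within `tol` of the seed value."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(image[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# A bright 10x10 square (the "organ") on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
mask = region_grow(img, (10, 10), tol=0.1)
print(mask.sum())  # 100 pixels recovered
```

    In low-contrast regions like cystic pancreas boundaries, a fixed tolerance fails easily, which is why the paper falls back on the probabilistic random-walker formulation there.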

  7. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Semantic segmentation is well suited to this task: FCN classifies individual pixels, achieving segmentation at the level of image semantics. Unlike classical convolutional neural networks (CNN), FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network and scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4 to 1.6 m, with a distance error of less than 10 mm.
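
    The key property FCNs exploit, that convolutional layers (unlike fully connected ones) accept inputs of arbitrary size, can be demonstrated with a bare numpy convolution. This is a toy single-filter sketch, not the paper's network.

```python
import numpy as np

def conv2d(x, kernel):
    """'Valid' 2D cross-correlation of image `x` with a single kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((3, 3)) / 9.0  # one fixed 3x3 filter

# The same convolutional "layer" handles two different input sizes; a
# fully connected layer, whose weight matrix is tied to one input size,
# could not. That is the point of FCN-style dense prediction.
for size in (16, 32):
    out = conv2d(np.random.rand(size, size), kernel)
    print(size, out.shape)
```

    Output spatial extent simply tracks input extent (minus the kernel margin), so one trained network produces a per-pixel label map for any image size.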

  8. Laser beam micro-milling of nickel alloy: dimensional variations and RSM optimization of laser parameters

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Alahmari, Abdulrahman M.; Darwish, Saied; Naveed, Madiha

    2016-12-01

    Micro-channels are considered as the integral part of several engineering devices such as micro-channel heat exchangers, micro-coolers, micro-pulsating heat pipes and micro-channels used in gas turbine blades for aerospace applications. In such applications, a fluid flow is required to pass through certain micro-passages such as micro-grooves and micro-channels. The fluid flow characteristics (flow rate, turbulence, pressure drop and fluid dynamics) are mainly established based on the size and accuracy of micro-passages. Variations (oversizing and undersizing) in micro-passage's geometry directly affect the fluid flow characteristics. In this study, the micro-channels of several sizes are fabricated in well-known aerospace nickel alloy (Inconel 718) through laser beam micro-milling. The variations in geometrical characteristics of different-sized micro-channels are studied under the influences of different parameters of Nd:YAG laser. In order to have a minimum variation in the machined geometries of each size of micro-channel, the multi-objective optimization of laser parameters has been carried out utilizing the response surface methodology approach. The objective was set to achieve the targeted top widths and depths of micro-channels with minimum degree of taperness associated with the micro-channel's sidewalls. The optimized sets of laser parameters proposed for each size of micro-channel can be used to fabricate the micro-channels in Inconel 718 with minimum amount of geometrical variations.

  9. Extinction-effective population index: incorporating life-history variations in population viability analysis.

    PubMed

    Fujiwara, Masami

    2007-09-01

    Viability status of populations is a commonly used measure for decision-making in the management of populations. One of the challenges faced by managers is the need to consistently allocate management effort among populations. This allocation should in part be based on comparison of extinction risks among populations. Unfortunately, common criteria that use minimum viable population size or count-based population viability analysis (PVA) often do not provide results that are comparable among populations, primarily because they lack consistency in determining population size measures and threshold levels of population size (e.g., minimum viable population size and quasi-extinction threshold). Here I introduce a new index called the "extinction-effective population index," which accounts for differential effects of demographic stochasticity among organisms with different life-history strategies and among individuals in different life stages. This index is expected to become a new way of determining minimum viable population size criteria and also complement the count-based PVA. The index accounts for the difference in life-history strategies of organisms, which are modeled using matrix population models. The extinction-effective population index, sensitivity, and elasticity are demonstrated in three species of Pacific salmonids. The interpretation of the index is also provided by comparing them with existing demographic indices. Finally, a measure of life-history-specific effect of demographic stochasticity is derived.
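
    The matrix population models underlying the index project stage abundances forward with a transition matrix whose dominant eigenvalue gives the asymptotic growth rate. A minimal sketch with made-up salmon-like vital rates (not values from the paper):

```python
import numpy as np

# A stage-structured (Leslie-type) projection matrix: row 0 holds
# per-stage fecundities, the subdiagonal holds survival rates.
# All numbers are illustrative assumptions.
A = np.array([
    [0.0, 1.0, 5.0],   # fecundity of stages 2 and 3
    [0.3, 0.0, 0.0],   # survival stage 1 -> 2
    [0.0, 0.5, 0.0],   # survival stage 2 -> 3
])

eigvals = np.linalg.eigvals(A)
lam = eigvals[np.argmax(np.abs(eigvals))].real  # dominant eigenvalue
print(f"asymptotic growth rate lambda = {lam:.3f}")
```

    A dominant eigenvalue above 1 indicates asymptotic growth; the extinction-effective population index additionally weights stages by their differential exposure to demographic stochasticity, which this sketch does not attempt.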

  10. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
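
    A minimal EM iteration for the two-component univariate case can be sketched as follows. Unit variances are assumed for brevity, the sample size of 160 echoes the study's fixed design, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two-component univariate Gaussian mixture, n = 160.
data = np.concatenate([rng.normal(0.0, 1.0, 80), rng.normal(4.0, 1.0, 80)])

# EM estimating only the two means and the mixing proportion.
mu = np.array([data.min(), data.max()])  # crude initialization
pi = 0.5
for _ in range(50):
    # E-step: responsibility of component 1 for each point
    d0 = np.exp(-0.5 * (data - mu[0]) ** 2)
    d1 = np.exp(-0.5 * (data - mu[1]) ** 2)
    r1 = pi * d1 / ((1 - pi) * d0 + pi * d1)
    # M-step: responsibility-weighted updates
    mu = np.array([np.sum((1 - r1) * data) / np.sum(1 - r1),
                   np.sum(r1 * data) / np.sum(r1)])
    pi = r1.mean()

print(f"means: {mu[0]:.2f}, {mu[1]:.2f}; proportion: {pi:.2f}")
```

    The study's manipulations (mean separation, variance ratio, proportion, non-normality) all stress exactly this estimation step, which is why EM, Newton, and minimum-distance estimators can diverge in quality.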

  11. AN EXPERIMENTAL ASSESSMENT OF MINIMUM MAPPING UNIT SIZE

    EPA Science Inventory

    Land-cover (LC) maps derived from remotely sensed data are often presented using a minimum mapping unit (MMU). The choice of a MMU that is appropriate for the projected use of a classification is important. The objective of this experiment was to determine the optimal MMU of a L...

  12. Minimum-Impact Camping in the Front Woods.

    ERIC Educational Resources Information Center

    Schatz, Curt

    1994-01-01

    Minimum-impact camping techniques that can be applied to resident camp programs include controlling group size and behavior, designing camp sites, moving groups frequently, proper use of fires, proper disposal of food and human wastes, use of biodegradable soaps, and encouraging staff and camper awareness of impacts on the environment. (LP)

  13. Effect of supersaturated oxygen delivery on infarct size after percutaneous coronary intervention in acute myocardial infarction.

    PubMed

    Stone, Gregg W; Martin, Jack L; de Boer, Menko-Jan; Margheri, Massimo; Bramucci, Ezio; Blankenship, James C; Metzger, D Christopher; Gibbons, Raymond J; Lindsay, Barbara S; Weiner, Bonnie H; Lansky, Alexandra J; Krucoff, Mitchell W; Fahy, Martin; Boscardin, W John

    2009-10-01

    Myocardial salvage is often suboptimal after percutaneous coronary intervention in ST-segment elevation myocardial infarction. Posthoc subgroup analysis from a previous trial (AMIHOT I) suggested that intracoronary delivery of supersaturated oxygen (SSO(2)) may reduce infarct size in patients with large ST-segment elevation myocardial infarction treated early. A prospective, multicenter trial was performed in which 301 patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset were randomized to a 90-minute intracoronary SSO(2) infusion in the left anterior descending artery infarct territory (n=222) or control (n=79). The primary efficacy measure was infarct size in the intention-to-treat population (powered for superiority), and the primary safety measure was composite major adverse cardiovascular events at 30 days in the intention-to-treat and per-protocol populations (powered for noninferiority), with Bayesian hierarchical modeling used to allow partial pooling of evidence from AMIHOT I. Among 281 randomized patients with tc-99m-sestamibi single-photon emission computed tomography data in AMIHOT II, median (interquartile range) infarct size was 26.5% (8.5%, 44%) with control compared with 20% (6%, 37%) after SSO(2). The pooled adjusted infarct size was 25% (7%, 42%) with control compared with 18.5% (3.5%, 34.5%) after SSO(2) (P(Wilcoxon)=0.02; Bayesian posterior probability of superiority, 96.9%). The Bayesian pooled 30-day mean (+/-SE) rates of major adverse cardiovascular events were 5.0+/-1.4% for control and 5.9+/-1.4% for SSO(2) by intention-to-treat, and 5.1+/-1.5% for control and 4.7+/-1.5% for SSO(2) by per-protocol analysis (posterior probability of noninferiority, 99.5% and 99.9%, respectively). Among patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset, infusion of SSO(2) into the left anterior descending artery infarct territory results in a significant reduction in infarct size with noninferior rates of major adverse cardiovascular events at 30 days. Clinical Trial Registration: clinicaltrials.gov Identifier: NCT00175058.

  14. Could the peristaltic transition zone be caused by non-uniform esophageal muscle fiber architecture? A simulation study.

    PubMed

    Kou, W; Pandolfino, J E; Kahrilas, P J; Patankar, N A

    2017-06-01

    Based on a fully coupled computational model of esophageal transport, we analyzed how varied esophageal muscle fiber architecture and/or dual contraction waves (CWs) affect bolus transport. Specifically, we studied the luminal pressure profile in those cases to better understand possible origins of the peristaltic transition zone. Two groups of studies were conducted using a computational model. The first studied esophageal transport with circumferential-longitudinal fiber architecture, helical fiber architecture and various combinations of the two. In the second group, cases with dual CWs and varied muscle fiber architecture were simulated. Overall transport characteristics were examined and the space-time profiles of luminal pressure were plotted and compared. Helical muscle fiber architecture featured reduced circumferential wall stress, greater esophageal distensibility, and greater axial shortening. Non-uniform fiber architecture featured a peristaltic pressure trough between two high-pressure segments. The distal pressure segment showed greater amplitude than the proximal segment, consistent with experimental data. Dual CWs also featured a pressure trough between two high-pressure segments. However, the minimum pressure in the region of overlap was much lower, and the amplitudes of the two high-pressure segments were similar. The efficacy of esophageal transport is greatly affected by muscle fiber architecture. The peristaltic transition zone may be attributable to non-uniform architecture of muscle fibers along the length of the esophagus and/or dual CWs. The difference in amplitude between the proximal and distal pressure segments may be attributable to non-uniform muscle fiber architecture. © 2017 John Wiley & Sons Ltd.

  15. Competition between protein folding and aggregation: A three-dimensional lattice-model simulation

    NASA Astrophysics Data System (ADS)

    Bratko, D.; Blanch, H. W.

    2001-01-01

    Aggregation of protein molecules resulting in the loss of biological activity and the formation of insoluble deposits represents a serious problem for the biotechnology and pharmaceutical industries and in medicine. Considerable experimental and theoretical efforts are being made in order to improve our understanding of, and ability to control, the process. In the present work, we describe a Monte Carlo study of a multichain system of coarse-grained model proteins akin to lattice models developed for simulations of protein folding. The model is designed to examine the competition between intramolecular interactions leading to the native protein structure, and intermolecular association, resulting in the formation of aggregates of misfolded chains. Interactions between the segments are described by a variation of the Go potential [N. Go and H. Abe, Biopolymers 20, 1013 (1981)] that extends the recognition between attracting types of segments to pairs on distinct chains. For the particular model we adopt, the global free energy minimum of a pair of protein molecules corresponds to a dimer of native proteins. When three or more molecules interact, clusters of misfolded chains can be more stable than aggregates of native folds. A considerable fraction of native structure, however, is preserved in these cases. Rates of conformational changes rapidly decrease with the size of the protein cluster. Within the timescale accessible to computer simulations, the folding-aggregation balance is strongly affected by kinetic considerations. Both the native form and aggregates can persist in metastable states, even if conditions such as temperature or concentration favor a transition to an alternative form. Refolding yield can be affected by the presence of an additional polymer species mimicking the function of a molecular chaperone.

  16. Design and Analysis of Mirror Modules for IXO and Beyond

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Powell, Cory; Saha, Timo T.; Zhang, William W.

    2011-01-01

    Advancements in X-ray astronomy demand thin, light, and closely packed optics, which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The functionality requirements of such a mirror module are well understood. A baseline modular concept for the proposed International X-Ray Observatory (IXO) Flight Mirror Assembly (FMA) consisting of 14,000 glass mirror segments divided into 60 modules was developed and extensively analyzed. Through this development, our understanding of module loads, mirror stress, thermal performance, and gravity distortion has greatly progressed. The latest progress in each of these areas is discussed herein. Gravity distortion during horizontal X-ray testing and on-orbit thermal performance have proved especially difficult design challenges. In light of these challenges, fundamental trades in modular X-ray mirror design have been performed. Future directions in modular X-ray mirror design are explored, including the development of a 1.8 m diameter FMA utilizing smaller mirror modules. The effect of module size on mirror stress, module self-weight distortion, thermal control, and range of segment sizes required is explored, with advantages demonstrated from smaller module size in most cases.

  17. Influence of riparian and watershed alterations on sandbars in a Great Plains river

    USGS Publications Warehouse

    Fischer, Jeffrey M.; Paukert, Craig P.; Daniels, M.L.

    2014-01-01

    Anthropogenic alterations have caused sandbar habitats in rivers and the biota dependent on them to decline. Restoring large river sandbars may be needed as these habitats are important components of river ecosystems and provide essential habitat to terrestrial and aquatic organisms. We quantified factors within the riparian zone of the Kansas River, USA, and within its tributaries that influenced sandbar size and density using aerial photographs and land use/land cover (LULC) data. We developed, a priori, 16 linear regression models focused on LULC at the local, adjacent upstream river bend, and the segment (18–44 km upstream) scales and used an information theoretic approach to determine what alterations best predicted the size and density of sandbars. Variation in sandbar density was best explained by the LULC within contributing tributaries at the segment scale, which indicated reduced sandbar density with increased forest cover within tributary watersheds. Similarly, LULC within contributing tributary watersheds at the segment scale best explained variation in sandbar size. These models indicated that sandbar size increased with agriculture and forest and decreased with urban cover within tributary watersheds. Our findings suggest that sediment supply and delivery from upstream tributary watersheds may be influential on sandbars within the Kansas River and that preserving natural grassland and reducing woody encroachment within tributary watersheds in Great Plains rivers may help improve sediment delivery to help restore natural river function.
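
    The model-comparison step, ranking a priori linear regression models by an information criterion, can be sketched with AIC on synthetic data. The predictor names and effect sizes here are illustrative assumptions, not the study's estimates.

```python
import numpy as np

def aic_linear(y, X):
    """AIC for an OLS fit with Gaussian errors (up to an additive constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = np.mean(resid ** 2)
    return n * np.log(sigma2) + 2 * (k + 1)  # +1 for the error variance

rng = np.random.default_rng(0)
n = 100
forest = rng.random(n)   # e.g. fraction forest cover in the watershed
urban = rng.random(n)    # an unrelated predictor
# Synthetic response: sandbar density declines with forest cover.
density = 2.0 - 1.5 * forest + rng.normal(0.0, 0.2, n)

X1 = np.column_stack([np.ones(n), forest])
X2 = np.column_stack([np.ones(n), urban])
print("forest model AIC:", round(aic_linear(density, X1), 1))
print("urban model AIC: ", round(aic_linear(density, X2), 1))
```

    The model with the lower AIC is preferred; in the information-theoretic approach the 16 candidate models would be ranked this way and weighted by their AIC differences.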

  18. On the Relationship Between Spotless Days and the Sunspot Cycle: A Supplement

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    This study provides supplemental material to an earlier study concerning the relationship between spotless days and the sunspot cycle. Our previous study, Technical Publication (TP)-2005-213608, determined the timing and size of sunspot minimum and maximum for the new sunspot cycle, relative to the occurrence of the first spotless day during the declining phase of the old sunspot cycle and the last spotless day during the rising portion of the new cycle. Because the number of spotless days (NSD) rapidly increases as the cycle nears sunspot minimum and rapidly decreases thereafter, the size and timing of sunspot minimum and maximum might be more accurately determined using a higher threshold for comparison, rather than using the first and last spotless day occurrences. It is this aspect that is investigated more thoroughly in this TP.

  19. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
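
    The minimum-spanning-forest step can be illustrated on a toy 1-D signal: marker-rooted trees absorb unlabeled pixels through the cheapest gradient edges, and two different marker trees are never merged. This Kruskal-style sketch omits the paper's marker-selection machinery.

```python
# Marker-driven minimum spanning forest on a tiny 1-D "image".
pixels = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]  # intensities
markers = {0: "A", 5: "B"}                  # two labeled seed pixels

# Edge weight = intensity difference between neighboring pixels.
edges = sorted((abs(pixels[i] - pixels[i + 1]), i, i + 1)
               for i in range(len(pixels) - 1))

label = {i: markers.get(i) for i in range(len(pixels))}
parent = list(range(len(pixels)))

def find(x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for w, a, b in edges:               # process edges by increasing weight
    ra, rb = find(a), find(b)
    if ra == rb:
        continue
    la, lb = label[ra], label[rb]
    if la and lb:                   # never merge two marker trees
        continue
    parent[rb] = ra
    label[ra] = la or lb            # propagate whichever label exists

segmentation = [label[find(i)] for i in range(len(pixels))]
print(segmentation)  # ['A', 'A', 'A', 'B', 'B', 'B']
```

    The large intensity jump between pixels 2 and 3 is the most expensive edge, so the two marker trees meet there and the boundary falls on the strongest gradient, which is the behavior the spectral-spatial classifier relies on.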

  20. Quantitative image analysis for evaluating the coating thickness and pore distribution in coated small particles.

    PubMed

    Laksmana, F L; Van Vliet, L J; Hartman Kok, P J A; Vromans, H; Frijlink, H W; Van der Voort Maarschalk, K

    2009-04-01

    This study aims to develop a characterization method for coating structure based on image analysis, which is particularly promising for the rational design of coated particles in the pharmaceutical industry. The method applies the MATLAB image processing toolbox to images of coated particles taken with Confocal Laser Scanning Microscopy (CLSM). The coating thicknesses have been determined along the particle perimeter, from which a statistical analysis could be performed to obtain relevant thickness properties, e.g. the minimum coating thickness and the span of the thickness distribution. The characterization of the pore structure involved a proper segmentation of pores from the coating and a granulometry operation. The presented method facilitates the quantification of the porosity, thickness, and pore size distribution of a coating. These parameters are considered the important coating properties, which are critical to coating functionality. Additionally, the effect of coating process variations on coating quality can straightforwardly be assessed. Enabling a good characterization of coating qualities, the presented method can be used as a fast and effective tool to predict coating functionality. This approach also enables the influence of different process conditions on coating properties to be effectively monitored, which ultimately leads to process tailoring.
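
    Once thickness has been sampled around the perimeter, the reported statistics reduce to simple operations. A sketch on synthetic data; the span definition used here, (d90 - d10) / d50, is a common convention and an assumption on my part, not necessarily the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(2)
# Thickness (in microns) sampled at 360 angles around the particle
# perimeter; synthetic, with a sinusoidal non-uniformity plus noise.
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
thickness = 12.0 + 2.0 * np.sin(angles) + rng.normal(0.0, 0.3, 360)

stats = {
    "mean": thickness.mean(),
    "minimum": thickness.min(),
    # Relative span of the distribution: (p90 - p10) / median.
    "span": (np.percentile(thickness, 90) - np.percentile(thickness, 10))
            / np.median(thickness),
}
for name, value in stats.items():
    print(f"{name}: {value:.2f}")
```

    The minimum thickness matters most for functionality (e.g. controlled release), since the thinnest point dominates failure, which is why the paper reports it alongside the distribution's span.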

  1. Collector Size or Range Independence of SNR in Fixed-Focus Remote Raman Spectrometry.

    PubMed

    Hirschfeld, T

    1974-07-01

    When sensitivity allows, remote Raman spectrometers can be operated at a fixed focus with purely electronic (easily multiplexable) range gating. To keep the background small, the system etendue must be minimized. For a maximum range larger than the hyperfocal one, this is done by focusing the system at roughly twice the minimum range at which etendue matching is still required. Under these conditions the etendue varies as the fourth power of the collector diameter, causing the background shot noise to vary as its square. As the signal also varies with the same power, and background noise is usually limiting in this type of instrument, the SNR becomes independent of the collector size. Below this minimum etendue-matched range, the transmission at the limiting aperture grows with the square of the range, canceling the inverse square loss of signal with range. The SNR is thus range independent below the minimum etendue-matched range and collector size independent above it, with the location of the transition being determined by the system etendue and collector diameter. The range of validity of these outrageous statements is discussed.
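
    The scaling argument can be checked numerically (arbitrary units; the exponents are the ones stated in the abstract: signal and shot noise both grow as the square of the collector diameter above the etendue-matched range).

```python
import math

def snr(diameter):
    """Collector-diameter scaling of SNR above the etendue-matched range."""
    signal = diameter ** 2               # collected signal ~ D^2
    background = diameter ** 4           # etendue-limited background ~ D^4
    shot_noise = math.sqrt(background)   # shot noise ~ sqrt(background) = D^2
    return signal / shot_noise

print([snr(d) for d in (0.1, 0.5, 1.0, 2.0)])  # all equal: SNR is D-independent
```

    Since both numerator and denominator scale as the square of the diameter, the ratio is constant, which is the collector-size independence the abstract asserts.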

  2. In Search of Conversational Grain Size: Modelling Semantic Structure Using Moving Stanza Windows

    ERIC Educational Resources Information Center

    Siebert-Evenstone, Amanda L.; Irgens, Golnaz Arastoopour; Collier, Wesley; Swiecki, Zachari; Ruis, Andrew R.; Shaffer, David Williamson

    2017-01-01

    Analyses of learning based on student discourse need to account not only for the content of the utterances but also for the ways in which students make connections across turns of talk. This requires segmentation of discourse data to define when connections are likely to be meaningful. In this paper, we present an approach to segmenting data for…

  3. The Axolotl Fibula as a Model for the Induction of Regeneration across Large Segment Defects in Long Bones of the Extremities

    PubMed Central

    Chen, Xiaoping; Song, Fengyu; Jhamb, Deepali; Li, Jiliang; Bottino, Marco C.; Palakal, Mathew J.; Stocum, David L.

    2015-01-01

    We tested the ability of the axolotl (Ambystoma mexicanum) fibula to regenerate across segment defects of different size in the absence of intervention or after implant of a unique 8-braid pig small intestine submucosa (SIS) scaffold, with or without incorporated growth factor combinations or tissue protein extract. Fractures and defects of 10% and 20% of the total limb length regenerated well without any intervention, but 40% and 50% defects failed to regenerate after either simple removal of bone or implanting SIS scaffold alone. By contrast, scaffold soaked in the growth factor combination BMP-4/HGF or in protein extract of intact limb tissue promoted partial or extensive induction of cartilage and bone across 50% segment defects in 30%-33% of cases. These results show that BMP-4/HGF and intact tissue protein extract can promote the events required to induce cartilage and bone formation across a segment defect larger than critical size and that the long bones of axolotl limbs are an inexpensive model to screen soluble factors and natural and synthetic scaffolds for their efficacy in stimulating this process. PMID:26098852

  4. Comparison of using single- or multi-polarimetric TerraSAR-X images for segmentation and classification of man-made maritime objects

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Saur, Günter

    2011-11-01

    Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance especially in situations, where AIS (Automatic Identification System) data is not available. Therefore, maritime objects have to be detected and optional information such as size, orientation, or object/ship class is desired. In recent research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and classification for single-polarimetric (HH) TerraSAR-X StripMap images to finally assign detection hypotheses to class "clutter", "non-ship", "unstructured ship", or "ship structure 1" (bulk carrier appearance) respectively "ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain and are now able to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the possibility of better noise suppression using the different polarizations, we slightly improve both the segmentation and the classification process. In several experiments we demonstrate the potential benefit for segmentation and classification. Precision of size and orientation estimation as well as correct classification rates are calculated individually for single- and quad-polarization and compared to each other.

  5. Improved Time-Lapsed Angular Scattering Microscopy of Single Cells

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.

    By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions will become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well-fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared to fixed cells. 
With our improved and better-understood approach to single cell angular scattering, we are now capable of reliably detecting changes in organelle size predictions due to biological causes above our measurement error of 20 nm, which enables us to apply our system to future studies of the investigation of various single cell biological processes.

  6. Kinematic control of walking.

    PubMed

    Lacquaniti, F; Ivanenko, Y P; Zago, M

    2002-10-01

    The planar law of inter-segmental co-ordination we described may emerge from the coupling of neural oscillators between each other and with limb mechanical oscillators. Muscle contraction intervenes at variable times to re-excite the intrinsic oscillations of the system when energy is lost. The hypothesis that a law of coordinative control results from a minimal active tuning of the passive inertial and viscoelastic coupling among limb segments is congruent with the idea that movement has evolved according to minimum energy criteria (1, 8). It is known that multi-segment motion in mammalian locomotion is controlled by a network of coupled oscillators (CPGs, see 18, 33, 37). Flexible combination of unit oscillators gives rise to different forms of locomotion. Inter-oscillator coupling can be modified by changing the synaptic strength (or polarity) of the relative spinal connections. As a result, unit oscillators can be coupled in phase, out of phase, or with a variable phase, giving rise to different behaviors, such as speed increments or reversal of gait direction (from forward to backward). Supra-spinal centers may drive or modulate functional sets of coordinating interneurons to generate different walking modes (or gaits). Although it is often assumed that CPGs control patterns of muscle activity, an equally plausible hypothesis is that they control patterns of limb segment motion instead (22). According to this kinematic view, each unit oscillator would directly control a limb segment, alternately generating forward and backward oscillations of the segment. Inter-segmental coordination would be achieved by coupling unit oscillators with a variable phase. Inter-segmental kinematic phase plays the role of the global control variable previously postulated for the network of central oscillators. In fact, inter-segmental phase shifts systematically with increasing speed both in man (4) and cat (38). 
Because this phase-shift is correlated with the net mechanical power output over a gait cycle (3, 4), phase control could be used for limiting the overall energy expenditure with increasing speed (22). Adaptation to different walking conditions, such as changes in body posture, body weight unloading and backward walk, also involves inter-segmental phase tuning, as does the maturation of limb kinematics in toddlers.
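    The inter-oscillator coupling with a variable locked phase described above can be illustrated with a toy model (not the authors' formulation): two Kuramoto-style phase oscillators whose frequency mismatch and coupling strength K set the steady phase difference via sin(Δ) = (ω1 − ω2)/2K. A minimal sketch with hypothetical parameter values:

```python
import math

def couple(omega1, omega2, K, steps=20000, dt=0.001):
    """Integrate two mutually coupled phase oscillators (Kuramoto pair)
    with forward Euler and return the locked phase difference th1 - th2,
    wrapped to (-pi, pi]."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        d = th2 - th1
        th1 += (omega1 + K * math.sin(d)) * dt
        th2 += (omega2 - K * math.sin(d)) * dt
    return (th1 - th2 + math.pi) % (2 * math.pi) - math.pi
```

Increasing the frequency mismatch shifts the locked phase difference, loosely analogous to the systematic inter-segmental phase shift with increasing walking speed.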

  7. Chain and microphase-separated structures of ultrathin polyurethane films

    NASA Astrophysics Data System (ADS)

    Kojio, Ken; Uchiba, Yusuke; Yamamoto, Yasunori; Motokucho, Suguru; Furukawa, Mutsuhisa

    2009-08-01

    We present measurements showing how the chain and microphase-separated structures of ultrathin polyurethane (PU) films are controlled by film thickness. The film thickness is varied via the solution concentration used for spin coating. The systems are PUs prepared from commercial raw materials. Fourier-transform infrared spectroscopic measurement revealed that the degree of hydrogen bonding among hard segment chains decreased and increased with decreasing film thickness for strong and weak microphase separation systems, respectively. The microphase-separated structure, which is formed from hard segment domains and a surrounding soft segment matrix, was observed by atomic force microscopy. The size of hard segment domains decreased with decreasing film thickness, and both systems showed evidence of specific orientation of the hard segment chains. These results are attributed to the decreasing space available for the formation of the microphase-separated structure.

  8. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are (1) to not exclude, but rather incorporate, partially identified sample segments into the estimation procedure, (2) to try to avoid having a large percentage (say 80%) of only partially identified segments in the sample, and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.

  9. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
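    The Dice and Jaccard metrics used above to score segmentation quality can be computed directly from binary masks; a minimal sketch assuming flat 0/1 mask lists (illustrative, not taken from the authors' code):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union
```

Parameter tuning then amounts to maximizing these scores against reference masks over the parameter search space.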

  10. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  11. Within-brain classification for brain tumor segmentation.

    PubMed

    Havaei, Mohammad; Larochelle, Hugo; Poulin, Philippe; Jodoin, Pierre-Marc

    2016-05-01

    In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem of interactive brain tumor segmentation as a machine learning problem. This method has an advantage over typical machine learning methods for this task where generalization is made across brains. The problem with these methods is that they need to deal with intensity bias correction and other MRI-specific noise. In this paper, we avoid these issues by approaching the problem as one of within brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on some minimum user interaction. We investigate how adding spatial feature coordinates (i.e., i, j, k) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN and random forests. This would only be possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain. As a result of these experiments, we obtain an interactive method whose results reported on the MICCAI-BRATS 2013 dataset are the second most accurate compared to published methods, while using significantly less memory and processing power than most state-of-the-art methods.
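    The idea of augmenting intensity features with spatial coordinates (i, j, k) before classification can be sketched with a plain kNN classifier; the helper names and the coordinate weighting below are illustrative assumptions, not taken from the paper:

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote over feature vectors."""
    order = sorted(range(len(train_X)),
                   key=lambda n: math.dist(train_X[n], query))
    votes = [train_y[n] for n in order[:k]]
    return max(set(votes), key=votes.count)

def with_coords(intensity, i, j, k, w=1.0):
    """Augment an intensity feature with scaled voxel coordinates (i, j, k),
    so spatially nearby voxels become nearby in feature space."""
    return [intensity, w * i, w * j, w * k]
```

Within a single brain, the spatial coordinates make the classifier favour labels from nearby voxels, which is only meaningful because training and testing stay inside that brain.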

  12. Evaluating unsupervised methods to size and classify suspended particles using digital in-line holography

    USGS Publications Warehouse

    Davies, Emlyn J.; Buscombe, Daniel D.; Graham, George W.; Nimmo-Smith, W. Alex M.

    2015-01-01

    Substantial information can be gained from digital in-line holography of marine particles, eliminating depth-of-field and focusing errors associated with standard lens-based imaging methods. However, for the technique to reach its full potential in oceanographic research, fully unsupervised (automated) methods are required for focusing, segmentation, sizing and classification of particles. These computational challenges are the subject of this paper, in which we draw upon data collected using a variety of holographic systems developed at Plymouth University, UK, from a significant range of particle types, sizes and shapes. A new method for noise reduction in reconstructed planes is found to be successful in aiding particle segmentation and sizing. The performance of an automated routine for deriving particle characteristics (and subsequent size distributions) is evaluated against equivalent size metrics obtained by a trained operative measuring grain axes on screen. The unsupervised method is found to be reliable, despite some errors resulting from over-segmentation of particles. A simple unsupervised particle classification system is developed, and is capable of successfully differentiating sand grains, bubbles and diatoms from within the surf-zone. Avoiding miscounting bubbles and biological particles as sand grains enables more accurate estimates of sand concentrations, and is especially important in deployments of particle monitoring instrumentation in aerated water. Perhaps the greatest potential for further development in the computational aspects of particle holography is in the area of unsupervised particle classification. The simple method proposed here provides a foundation upon which further development could lead to reliable identification of more complex particle populations, such as those containing phytoplankton, zooplankton, flocculated cohesive sediments and oil droplets.
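    One common way to turn segmented particles into a size distribution is to convert each region's pixel area to an equivalent circular diameter and histogram the results; a hedged sketch of that step (the paper's actual sizing routine is not specified at this level of detail):

```python
import math

def equivalent_diameter(pixel_count, pixel_size_um):
    """Equivalent circular diameter (in micrometres) of a segmented
    particle, computed from its area in pixels."""
    area = pixel_count * pixel_size_um ** 2
    return 2 * math.sqrt(area / math.pi)

def size_distribution(pixel_counts, pixel_size_um, bin_edges):
    """Histogram equivalent diameters into the given bin edges."""
    diams = [equivalent_diameter(c, pixel_size_um) for c in pixel_counts]
    counts = [0] * (len(bin_edges) - 1)
    for d in diams:
        for b in range(len(counts)):
            if bin_edges[b] <= d < bin_edges[b + 1]:
                counts[b] += 1
                break
    return counts
```

Over-segmentation, as noted above, splits one particle into several regions and therefore biases such a distribution toward smaller diameters.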

  13. Compaction of quasi-one-dimensional elastoplastic materials.

    PubMed

    Shaebani, M Reza; Najafi, Javad; Farnudi, Ali; Bonn, Daniel; Habibi, Mehdi

    2017-06-06

    Insight into crumpling or compaction of one-dimensional objects is important for understanding biopolymer packaging and designing innovative technological devices. By compacting various types of wires in rigid confinements and characterizing the morphology of the resulting crumpled structures, here, we report how friction, plasticity and torsion enhance disorder, leading to a transition from coiled to folded morphologies. In the latter case, where folding dominates the crumpling process, we find that reducing the relative wire thickness counter-intuitively causes the maximum packing density to decrease. The segment size distribution gradually becomes more asymmetric during compaction, reflecting an increase of spatial correlations. We introduce a self-avoiding random walk model and verify that the cumulative injected wire length follows a universal dependence on segment size, allowing for the prediction of the efficiency of compaction as a function of material properties, container size and injection force.
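    The self-avoiding random walk underlying the authors' model can be illustrated with a minimal 2D lattice version (the real model's parameters and observables differ; this only shows the self-avoidance mechanism):

```python
import random

def self_avoiding_walk(steps, seed=0):
    """Grow a 2D lattice self-avoiding walk, stopping early if trapped.
    Returns the list of visited sites, starting at the origin."""
    rng = random.Random(seed)
    path = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(steps):
        x, y = path[-1]
        moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        free = [m for m in moves if m not in visited]
        if not free:          # walk is trapped by its own trace
            break
        nxt = rng.choice(free)
        path.append(nxt)
        visited.add(nxt)
    return path
```

Segment statistics (e.g. straight-run lengths between turns) extracted from many such walks are the kind of quantity the authors' model relates to the injected wire length.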

  14. Minimum-Time and Vibration Avoidance Attitude Maneuver for Spacecraft with Torque and Momentum Limit Constraints in Redundant Reaction Wheel Configuration

    NASA Technical Reports Server (NTRS)

    Ha, Kong Q.; Femiano, Michael D.; Mosier, Gary E.

    2004-01-01

    In this paper, we present an optimal open-loop slew trajectory algorithm developed at GSFC for the so-called "Yardstick design" of the James Webb Space Telescope (JWST). JWST is an orbiting infrared observatory featuring a lightweight, segmented primary mirror approximately 6 meters in diameter and a sunshield approximately the size of a tennis court. This large, flexible structure will have a significant number of lightly damped, dominant flexible modes. With very stringent requirements on pointing accuracy and image quality, it is important that slewing be done within the required time constraint and with minimal induced vibration in order to maximize observing efficiency. With reaction wheels as control actuators, initial wheel speeds as well as individual wheel torque and momentum limits become dominant constraints in slew performance. These constraints must be taken into account when performing slews to ensure that unexpected reaction wheel saturation does not occur, since such saturation leads to control failure in accurately tracking commanded motion and produces high frequency torque components capable of exciting structural modes. A minimum-time constraint is also included and coupled with reaction wheel limit constraints in the optimization to minimize both the effect of the control torque on the flexible body motion and the maneuver time. The optimization is on slew command parameters, such as maximum slew velocity and acceleration, for a given redundant reaction wheel configuration and is based on the dynamic interaction between the spacecraft and reaction wheel motion. Analytical development of the slew algorithm to generate desired slew position, rate, and acceleration profiles to command a feedback/feedforward control system is described. High-fidelity simulation and experimental results are presented to show that the developed slew law achieves the objectives.
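    Rate- and acceleration-limited rest-to-rest slews like those above are bounded below by the classical bang-coast-bang kinematic profile; a small sketch of that bound (not the paper's optimization, which also accounts for wheel momentum and flexibility):

```python
import math

def slew_time(angle, v_max, a_max):
    """Minimum time for a rest-to-rest slew through `angle` (rad) under
    a rate limit v_max (rad/s) and acceleration limit a_max (rad/s^2)."""
    if angle >= v_max ** 2 / a_max:
        # Trapezoidal profile: accelerate, coast at v_max, decelerate.
        return angle / v_max + v_max / a_max
    # Triangular profile: v_max is never reached.
    return 2 * math.sqrt(angle / a_max)
```

In the actual problem, v_max and a_max themselves become decision variables constrained by per-wheel torque and momentum limits.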

  15. Functional significance of the taper of vertebrate cone photoreceptors

    PubMed Central

    Hárosi, Ferenc I.

    2012-01-01

    Vertebrate photoreceptors are commonly distinguished based on the shape of their outer segments: those of cones taper, whereas the ones from rods do not. The functional advantages of cone taper, a common occurrence in vertebrate retinas, remain elusive. In this study, we investigate this topic using theoretical analyses aimed at revealing structure–function relationships in photoreceptors. Geometrical optics combined with spectrophotometric and morphological data are used to support the analyses and to test predictions. Three functions are considered for correlations between taper and functionality. The first function proposes that outer segment taper serves to compensate for self-screening of the visual pigment contained within. The second function links outer segment taper to compensation for a signal-to-noise ratio decline along the longitudinal dimension. Both functions are supported by the data: real cones taper more than required for these compensatory roles. The third function relates outer segment taper to the optical properties of the inner compartment whereby the primary determinant is the inner segment’s ability to concentrate light via its ellipsoid. In support of this idea, the rod/cone ratios of primarily diurnal animals are predicted based on a principle of equal light flux gathering between photoreceptors. In addition, ellipsoid concentration factor, a measure of ellipsoid ability to concentrate light onto the outer segment, correlates positively with outer segment taper expressed as a ratio of characteristic lengths, where critical taper is the yardstick. Depending on a light-funneling property and the presence of focusing organelles such as oil droplets, cone outer segments can be reduced in size to various degrees. We conclude that outer segment taper is but one component of a miniaturization process that reduces metabolic costs while improving signal detection. 
Compromise solutions in the various retinas and retinal regions occur between ellipsoid size and acuity, on the one hand, and faster response time and reduced light sensitivity, on the other. PMID:22250013

  16. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. 
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
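    The two-stage selection logic generalizes to any pair of cheap and expensive relevance scores; an illustrative sketch with hypothetical score functions (the real metrics come from registration quality, not shown here):

```python
def two_stage_select(atlases, cheap_score, exact_score,
                     augmented_size, fusion_size):
    """Stage 1: rank all atlases by a low-cost relevance score and keep an
    augmented subset large enough that the truly relevant atlases survive.
    Stage 2: re-rank only that subset with the expensive score and keep
    the final fusion set."""
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:augmented_size]
    return sorted(stage1, key=exact_score, reverse=True)[:fusion_size]
```

The expensive score is evaluated only `augmented_size` times instead of once per atlas, which is where the computational saving comes from; the inference model in the paper chooses `augmented_size` to bound the risk of stage 1 discarding a good atlas.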

  17. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the corpus callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNN) in particular have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because atlases carry powerful information about the target structure, consisting of MR images and corresponding manual segmentations of that structure. We incorporated prior information, such as the location and intensity distribution of the target structure (i.e., the CC), derived from multi-atlas images into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.

  18. Origin of amphibian and avian chromosomes by fission, fusion, and retention of ancestral chromosomes

    PubMed Central

    Voss, Stephen R.; Kump, D. Kevin; Putta, Srikrishna; Pauly, Nathan; Reynolds, Anna; Henry, Rema J.; Basa, Saritha; Walker, John A.; Smith, Jeramiah J.

    2011-01-01

    Amphibian genomes differ greatly in DNA content and chromosome size, morphology, and number. Investigations of this diversity are needed to identify mechanisms that have shaped the evolution of vertebrate genomes. We used comparative mapping to investigate the organization of genes in the Mexican axolotl (Ambystoma mexicanum), a species that presents relatively few chromosomes (n = 14) and a gigantic genome (>20 pg/N). We show extensive conservation of synteny between Ambystoma, chicken, and human, and a positive correlation between the length of conserved segments and genome size. Ambystoma segments are estimated to be four to 51 times longer than homologous human and chicken segments. Strikingly, genes demarking the structures of 28 chicken chromosomes are ordered among linkage groups defining the Ambystoma genome, and we show that these same chromosomal segments are also conserved in a distantly related anuran amphibian (Xenopus tropicalis). Using linkage relationships from the amphibian maps, we predict that three chicken chromosomes originated by fusion, nine to 14 originated by fission, and 12–17 evolved directly from ancestral tetrapod chromosomes. We further show that some ancestral segments were fused prior to the divergence of salamanders and anurans, while others fused independently and randomly as chromosome numbers were reduced in lineages leading to Ambystoma and Xenopus. The maintenance of gene order relationships between chromosomal segments that have greatly expanded and contracted in salamander and chicken genomes, respectively, suggests selection to maintain synteny relationships and/or extremely low rates of chromosomal rearrangement. Overall, the results demonstrate the value of data from diverse, amphibian genomes in studies of vertebrate genome evolution. PMID:21482624

  19. 50 CFR 622.436 - Size limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... intact. (a) Yellowtail snapper. The minimum size limit for yellowtail snapper is 12 inches (30.5 cm), TL... inches (20.3 cm), fork length. [78 FR 22952, Apr. 17, 2013, as amended at 78 FR 45896, July 30, 2013] ...

  20. 50 CFR 622.436 - Size limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... intact. (a) Yellowtail snapper. The minimum size limit for yellowtail snapper is 12 inches (30.5 cm), TL... inches (20.3 cm), fork length. [78 FR 22952, Apr. 17, 2013, as amended at 78 FR 45896, July 30, 2013] ...

  1. SU-C-BRA-07: Virtual Bronchoscopy-Guided IMRT Planning for Mapping and Avoiding Radiation Injury to the Airway Tree in Lung SAbR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawant, A; Modiri, A; Bland, R

    Purpose: Post-treatment radiation injury to central and peripheral airways is a potentially important, yet under-investigated determinant of toxicity in lung stereotactic ablative radiotherapy (SAbR). We integrate virtual bronchoscopy technology into the radiotherapy planning process to spatially map and quantify the radiosensitivity of bronchial segments, and propose novel IMRT planning that limits airway dose through non-isotropic intermediate- and low-dose spillage. Methods: Pre- and ∼8.5 months post-SAbR diagnostic-quality CT scans were retrospectively collected from six NSCLC patients (50–60Gy in 3–5 fractions). From each scan, ∼5 branching levels of the bronchial tree were segmented using LungPoint, a virtual bronchoscopic navigation system. The pre-SAbR CT and the segmented bronchial tree were imported into the Eclipse treatment planning system and deformably registered to the planning CT. The five-fraction equivalent dose from the clinically-delivered plan was calculated for each segment using the Universal Survival Curve model. The pre- and post-SAbR CTs were used to evaluate radiation-induced segmental collapse. Two of six patients exhibited significant segmental collapse with associated atelectasis and fibrosis, and were re-planned using IMRT. Results: Multivariate stepwise logistic regression over six patients (81 segments) showed that D0.01cc (minimum point dose within the 0.01cc receiving highest dose) was a significant independent factor associated with collapse (odds-ratio=1.17, p=0.010). The D0.01cc threshold for collapse was 57Gy, above which, collapse rate was 45%. In the two patients exhibiting segmental collapse, 22 out of 32 segments showed D0.01cc >57Gy. IMRT re-planning reduced D0.01cc below 57Gy in 15 of the 22 segments (68%) while simultaneously achieving the original clinical plan objectives for PTV coverage and OAR-sparing. 
Conclusion: Our results indicate that the administration of lung SAbR can result in significant injury to bronchial segments, potentially impairing post-SAbR lung function. To our knowledge, this is the first investigation of functional avoidance based on mapping and minimizing dose to individual bronchial segments. The presenting author receives research funding from Varian Medical Systems, Elekta, and VisionRT.

  2. The effect of particle size on the morphology and thermodynamics of diblock copolymer/tethered-particle membranes.

    PubMed

    Zhang, Bo; Edwards, Brian J

    2015-06-07

    A combination of self-consistent field theory and density functional theory was used to examine the effect of particle size on the stable, 3-dimensional equilibrium morphologies formed by diblock copolymers with a tethered nanoparticle attached either between the two blocks or at the end of one of the blocks. Particle size was varied between one and four tenths of the radius of gyration of the diblock polymer chain for neutral particles as well as those either favoring or disfavoring segments of the copolymer blocks. Phase diagrams were constructed and analyzed in terms of thermodynamic diagrams to understand the physics associated with the molecular-level self-assembly processes. Typical morphologies were observed, such as lamellar, spheroidal, cylindrical, gyroidal, and perforated lamellar, with the primary concentration region of the tethered particles being influenced heavily by particle size and tethering location, strength of the particle-segment energetic interactions, chain length, and copolymer radius of gyration. The effect of the simulation box size on the observed morphology and system thermodynamics was also investigated, indicating possible effects of confinement upon the system self-assembly processes.

  3. Segmenting overlapping nano-objects in atomic force microscopy image

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, techniques for nanoparticles have rapidly been developed for various fields, such as materials science, medicine, and biology. In particular, methods of image processing have widely been used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting the high curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with the method of density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
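    The DBSCAN step used to detect overlapping regions clusters points that are densely packed and marks isolated points as noise; a minimal 2D implementation for illustration (the `eps` and `min_pts` values in the test are hypothetical, and the paper applies the method to contour/region data rather than raw points):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points; returns one label per point,
    with -1 meaning noise."""
    def neighbours(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:   # only core points expand the cluster
                queue.extend(nb)
    return labels
```

This naive version is O(n²) in the neighbourhood queries; production implementations use a spatial index.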

  4. Seulimeum segment characteristic indicated by 2-D resistivity imaging method

    NASA Astrophysics Data System (ADS)

    Syukri, M.; Saad, R.

    2017-06-01

    The study was conducted in Aceh (Indonesia), within the Krueng Raya and Ie Seu Um vicinity, which share the same geological setting (Lam Teuba volcanics), to characterize the Seulimeum Segment using the 2-D resistivity imaging method. The 2-D resistivity survey applied a pole-dipole array with minimum electrode spacings of 2 and 5 m for the Ie Seu Um study area and 10 m for the Krueng Raya area. Resistivity values from the Ie Seu Um study area were correlated and validated with existing outcrops and hot springs, and these values were then used to identify the overburden, saturated area, and bedrock of the Krueng Raya area. In the Krueng Raya area, the resistivity of the overburden was identified as <30 Ohm.m, bedrock as >30 Ohm.m, and the saturated zone as <9 Ohm.m. The imaging results were used to identify the Seulimeum segment system, whose depth increases from the southern part (20-50 m) to the northern part (50-200 m) approaching the Andaman Sea and breaks into two sections, producing a horst and graben system, which indicates that it was produced by plate movement.

  5. Man-made objects cuing in satellite imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2009-01-01

    We present a multi-scale framework for cuing man-made structures in satellite image regions. The approach is based on a hierarchical image segmentation followed by structural analysis. A hierarchical segmentation produces an image pyramid that contains a stack of irregular image partitions, represented as polygonized pixel patches, of successively reduced levels of detail (LODs). We start from the over-segmented image represented by polygons attributed with spectral and texture information. The image is represented as a proximity graph with vertices corresponding to the polygons and edges reflecting polygon relations. This is followed by iterative graph contraction based on Boruvka's Minimum Spanning Tree (MST) construction algorithm. The graph contractions merge the patches based on their pairwise spectral and texture differences. Concurrently with the construction of the irregular image pyramid, structural analysis is done on the agglomerated patches. Man-made object cuing is based on the analysis of shape properties of the constructed patches and their spatial relations. The presented framework can be used as a pre-scanning tool for wide-area monitoring to quickly guide further analysis to regions of interest.
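    The Boruvka-style contraction can be sketched as follows -- a minimal, purely illustrative version in which each component merges along its cheapest outgoing edge whenever the dissimilarity (here a difference of area-weighted mean intensities, standing in for the spectral/texture distance) stays below a threshold; none of the names come from the paper:

```python
def merge_regions(values, edges, max_diff):
    """Borůvka-style region merging: repeatedly contract each component's
    cheapest outgoing edge if its dissimilarity is below max_diff.
    `values` holds one mean intensity per initial region (graph vertex);
    `edges` lists adjacent region pairs. Returns a root label per region."""
    parent = list(range(len(values)))
    size = [1] * len(values)
    mean = list(values)

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    changed = True
    while changed:
        changed = False
        # one Borůvka pass: record the cheapest outgoing edge per component
        best = {}
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            d = abs(mean[ru] - mean[rv])
            for r in (ru, rv):
                if r not in best or d < best[r][0]:
                    best[r] = (d, ru, rv)
        for d, ru, rv in best.values():
            ru, rv = find(ru), find(rv)
            if ru == rv or d > max_diff:
                continue
            # merge rv into ru, updating the area-weighted mean
            tot = size[ru] + size[rv]
            mean[ru] = (mean[ru] * size[ru] + mean[rv] * size[rv]) / tot
            size[ru] = tot
            parent[rv] = ru
            changed = True
    return [find(i) for i in range(len(values))]

# four patches on a chain: two dark (≈1) and two bright (≈5)
labels = merge_regions([1.0, 1.1, 5.0, 5.2], [(0, 1), (1, 2), (2, 3)], 0.5)
```

    The chain collapses into two components, the dark pair and the bright pair, because the 1-2 edge exceeds the dissimilarity threshold.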

  6. G-NEST: A gene neighborhood scoring tool to identify co-conserved, co-expressed genes

    USDA-ARS's Scientific Manuscript database

    In previous studies, gene neighborhoods--spatial clusters of co-expressed genes in the genome--have been defined using arbitrary rules such as requiring adjacency, a minimum number of genes, a fixed window size, or a minimum expression level. In the current study, we developed a Gene Neighborhood Sc...

  7. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    PubMed

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented to investigate whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used to predict TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. The best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved with an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.

  8. Characterization of silicon-gate CMOS/SOS integrated circuits processed with ion implantation

    NASA Technical Reports Server (NTRS)

    Woo, D. S.

    1982-01-01

    The procedure used to generate MEBES masks and produce test wafers from the 10X Mann 1600 Pattern Generator Tape using existing CAD utility programs and the MEBES machine in the RCA Solid State Technology Center are described. The test vehicle used is the MSFC-designed SC102 Solar House Timing Circuit. When transforming the Mann 1600 tapes into MEBES tapes, extreme care is required in order to obtain accurate minimum linewidths when working with two different coding systems because the minimum grid sizes may be different for the two systems. The minimum grid sizes are 0.025 mil for MSFC Mann 1600 and 0.02 mil for MEBES. Some snapping to the next grid is therefore inevitable, and the results of this snapping effect are significant when submicron lines are present. However, no problem was noticed in the SC102 circuit because its minimum linewidth is 0.3 mil (7.6 microns). MEBES masks were fabricated and wafers were processed using the silicon-gate CMOS/SOS and aluminum-gate COS/MOS processing.
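    The snapping effect described above is simple quantization; a short sketch (the helper name is ours) shows why the worst-case placement error is half the destination grid, i.e. 0.01 mil -- negligible against the SC102's 0.3 mil minimum linewidth but significant for submicron lines:

```python
def snap(x_mil, grid_mil):
    """Snap a coordinate (in mils) to the nearest point of a writing grid."""
    return round(x_mil / grid_mil) * grid_mil

# a feature edge placed on the 0.025 mil Mann 1600 grid...
x = 3 * 0.025            # 0.075 mil
# ...snaps to the nearest 0.02 mil MEBES grid point
snapped = snap(x, 0.02)  # 0.08 mil, a placement error of 0.005 mil
```

    The snap error is bounded by half the MEBES grid (0.01 mil, about 0.25 microns), which is why it only matters when submicron lines are present.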

  9. [C57BL/6 mice open field behaviour qualitatively depends on arena size].

    PubMed

    Lebedev, I V; Pleskacheva, M G; Anokhin, K V

    2012-01-01

    Open field behavior is well known to depend on the physical characteristics of the apparatus. However, many such effects are poorly described, especially with modern methods of behavioral registration and analysis. Previous experimental results on the effect of arena size on behavior are few and contradictory. We compared the behavioral scores of four groups of C57BL/6 mice in round open field arenas of four different sizes (diameter 35, 75, 150 and 220 cm). The behavior was registered and analyzed using Noldus EthoVision, WinTrack and SegmentAnalyzer software. A significant effect of arena size was found. Traveled distance and velocity increased, but not in proportion to the increase in arena size. Moreover, a significant effect on segment characteristics of the trajectory was revealed. Detailed behavior analysis revealed drastic differences in trajectory structure and number of rears between the smaller (35 and 75 cm) and bigger (150 and 220 cm) arenas. We conclude that the character of exploration in smaller and bigger arenas depends on the relative size of the central open zone of the arena. Apparently, its extension increases the motivational heterogeneity of the space, requiring a different exploration strategy than in smaller arenas.

  10. 42 CFR 84.205 - Facepiece test; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... respirator will be fitted to the faces of persons having varying facial shapes and sizes. (b) Where the applicant specifies a facepiece size or sizes for the respirator together with the approximate measurement..., pumping with a tire pump into a 28-liter (1 cubic-foot) container. (4) Each wearer shall not detect the...

  11. Minimum financial outlays for purchasing alcohol brands in the U.S.

    PubMed

    Albers, Alison Burke; DeJong, William; Naimi, Timothy S; Siegel, Michael; Shoaff, Jessica Ruhlman; Jernigan, David H

    2013-01-01

    Low alcohol prices are a potent risk factor for excessive drinking, underage drinking, and adverse alcohol-attributable outcomes. Presently, there is little reported information on alcohol prices in the U.S., in particular as it relates to the costs of potentially beneficial amounts of alcohol. To determine the minimum financial outlay necessary to purchase individual brands of alcohol using online alcohol price data from January through March 2012. The smallest container size and the minimum price at which that size beverage could be purchased in the U.S. in 2012 were determined for 898 brands of alcohol, across 17 different alcoholic beverage types. The analyses were conducted in March 2012. The majority of alcoholic beverage categories contain brands that can be purchased in the U.S. for very low minimum financial outlays. In the U.S., a wide variety of alcohol brands, across many types of alcohol, are available at very low prices. Given that both alcohol use and abuse are responsive to price, particularly among adolescents, the prevalence of low alcohol prices is concerning. Surveillance of alcohol prices and minimum pricing policies should be considered in the U.S. as part of a public health strategy to reduce excessive alcohol consumption and related harms. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  12. Minimum Financial Outlays for Purchasing Alcohol Brands in the U.S

    PubMed Central

    Albers, Alison Burke; DeJong, William; Naimi, Timothy S.; Siegel, Michael; Shoaff, Jessica Ruhlman; Jernigan, David H.

    2012-01-01

    Background Low alcohol prices are a potent risk factor for excessive drinking, underage drinking, and adverse alcohol-attributable outcomes. Presently, there is little reported information on alcohol prices in the U.S., in particular as it relates to the costs of potentially beneficial amounts of alcohol. Purpose To determine the minimum financial outlay necessary to purchase individual brands of alcohol using online alcohol price data from January through March 2012. Methods The smallest container size and the minimum price at which that size beverage could be purchased in the U.S. in 2012 were determined for 898 brands of alcohol, across 17 different alcoholic beverage types. The analyses were conducted in March 2012. Results The majority of alcoholic beverage categories contain brands that can be purchased in the U.S. for very low minimum financial outlays. Conclusions In the U.S., a wide variety of alcohol brands, across many types of alcohol, are available at very low prices. Given that both alcohol use and abuse are responsive to price, particularly among adolescents, the prevalence of low alcohol prices is concerning. Surveillance of alcohol prices and minimum pricing policies should be considered in the U.S. as part of a public health strategy to reduce excessive alcohol consumption and related harms. PMID:23253652

  13. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole-body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool designed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, variance, Gaussian, and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm-segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.
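    As a rough sketch of the approach (not the authors' TWS/FIJI pipeline), a Random Forest can be trained on per-voxel window statistics; here only mean and variance features over a synthetic two-class image, using scikit-learn:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.ensemble import RandomForestClassifier

def local_stats(img, radius):
    """Mean and variance over a (2r+1)^2 window -- two of the TWS-style
    feature filters named in the abstract (minimum/maximum omitted here)."""
    pad = np.pad(img, radius, mode='edge')
    win = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
    flat = win.reshape(img.shape[0], img.shape[1], -1)
    return flat.mean(-1), flat.var(-1)

def features(img, radii=(1, 2, 4)):
    """Per-pixel feature vectors: raw intensity plus mean/variance
    at several window radii (loosely mirroring the 2^n radii)."""
    feats = [img]
    for r in radii:
        m, v = local_stats(img, r)
        feats += [m, v]
    return np.stack(feats, -1).reshape(-1, 1 + 2 * len(radii))

rng = np.random.default_rng(0)
# toy "CT slice": a bright square (bone-like) on a noisy dark background
img = rng.normal(0.0, 0.05, (32, 32))
img[8:24, 8:24] += 1.0
labels = np.zeros((32, 32), int)
labels[8:24, 8:24] = 1

X = features(img)
y = labels.ravel()
# train on roughly half the pixels, predict all (the paper used 200 trees)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
train = rng.random(len(y)) < 0.5
clf.fit(X[train], y[train])
pred = clf.predict(X).reshape(img.shape)

# Dice similarity coefficient against the known labels
dsc = 2.0 * np.logical_and(pred == 1, labels == 1).sum() \
      / ((pred == 1).sum() + (labels == 1).sum())
```

    On this easy synthetic case the DSC is near 1; the study's 0.85-0.86 medians come from far harder, real CT anatomy.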

  14. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

    We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale data sets.

  15. Segmental Refinement: A Multigrid Technique for Data Locality

    DOE PAGES

    Adams, Mark F.; Brown, Jed; Knepley, Matt; ...

    2016-08-04

    In this paper, we investigate a domain-decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  16. Lung tumor segmentation in PET images using graph cuts.

    PubMed

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures than is possible with visual assessment alone, and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor, and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy c-means, region-based active contour, and tumor-customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
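    The monotonic-downhill idea can be illustrated in one dimension: starting from the SUV peak, the region grows only while uptake keeps decreasing, so it cannot leak into a neighboring hot structure. This toy sketch is ours, not the paper's graph-cut energy:

```python
import numpy as np

def downhill_region(suv, seed):
    """Grow a 1-D tumour region outward from the SUV peak, stopping as
    soon as uptake starts rising again -- a toy version of the
    'monotonic downhill' feature that prevents leakage into adjacent
    high-uptake tissue."""
    left = right = seed
    while left > 0 and suv[left - 1] <= suv[left]:
        left -= 1
    while right < len(suv) - 1 and suv[right + 1] <= suv[right]:
        right += 1
    return left, right

# tumour peak at index 4; a second hot structure starts at index 8
suv = np.array([1.0, 1.2, 2.5, 6.0, 8.0, 6.5, 3.0, 1.1, 4.0, 5.0])
seed = int(np.argmax(suv))
lo, hi = downhill_region(suv, seed)  # stops before the second hot spot
```

    The grown region covers indices 0-7 and excludes the rising uptake at indices 8-9, which a plain threshold on SUV alone would have absorbed.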

  17. The segment as the minimal planning unit in speech production and reading aloud: evidence and implications.

    PubMed

    Kawamoto, Alan H; Liu, Qiang; Kello, Christopher T

    2015-01-01

    Speech production and reading aloud studies have much in common, especially the last stages involved in producing a response. We focus on the minimal planning unit (MPU) in articulation. Although most researchers now assume that the MPU is the syllable, we argue that it is at least as small as the segment based on negative response latencies (i.e., response initiation before presentation of the complete target) and longer initial segment durations in a reading aloud task where the initial segment is primed. We also discuss why such evidence was not found in earlier studies. Next, we rebut arguments that the segment cannot be the MPU by appealing to flexible planning scope whereby planning units of different sizes can be used due to individual differences, as well as stimulus and experimental design differences. We also discuss why negative response latencies do not arise in some situations and why anticipatory coarticulation does not preclude the segment MPU. Finally, we argue that the segment MPU is also important because it provides an alternative explanation of results implicated in the serial vs. parallel processing debate.

  18. A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.

    PubMed

    Pandis, Petros; Bull, Anthony Mj

    2017-11-01

    Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
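    A slice-stacking model gives a flavor of how such parameters follow from a surface scan; this sketch (circular slices and uniform density are simplifying assumptions of ours, not of the scanner) recovers the analytic values for a uniform cylinder:

```python
import numpy as np

def segment_parameters(radii_m, dz_m, density_kg_m3=1000.0):
    """Volume, mass and moment of inertia about the longitudinal axis of a
    body segment modelled as a stack of circular slices (one radius per
    slice, each of thickness dz) -- the kind of cross-sections a surface
    scan of a limb yields."""
    r = np.asarray(radii_m, dtype=float)
    areas = np.pi * r**2
    volume = float(np.sum(areas) * dz_m)
    dm = density_kg_m3 * areas * dz_m         # mass of each thin disc
    mass = float(np.sum(dm))
    inertia = float(np.sum(0.5 * dm * r**2))  # dI = 1/2 dm r^2 per disc
    return volume, mass, inertia

# sanity check against a uniform cylinder (r = 5 cm, length = 30 cm)
vol, mass, inertia = segment_parameters([0.05] * 30, 0.01)
```

    For a real limb the scanner supplies a varying radius (or arbitrary cross-sectional area) per slice, and the same sums apply.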

  19. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. This paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation, which must cope with some challenging characteristics, such as small size, a wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the additional inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation results for the hippocampus.
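    A minimal sketch of the multiple-initialisation idea: the initial level-set function is the pointwise minimum of signed distances to several seed balls, so the zero level set starts as their union rather than a single seed (the seed positions and radius below are arbitrary, not from the paper):

```python
import numpy as np

def multi_seed_levelset(shape, seeds, radius):
    """Initial level-set function for multiple seed regions: phi < 0 inside
    any seed ball (e.g. one each in the hippocampal head, body and tail),
    phi > 0 outside. Taking the minimum over seeds makes the zero level
    set the union of the seed surfaces."""
    grid = np.indices(shape).astype(float)
    phi = np.full(shape, np.inf)
    for centre in seeds:
        # signed distance to this seed's spherical surface
        d = np.sqrt(sum((g - c)**2 for g, c in zip(grid, centre))) - radius
        phi = np.minimum(phi, d)
    return phi

# three seeds spaced along the long axis of a hippocampus-like volume
phi = multi_seed_levelset((40, 20, 20),
                          seeds=[(8, 10, 10), (20, 10, 10), (32, 10, 10)],
                          radius=3.0)
```

    The level-set evolution then starts from all three regions at once, which is the property the multiple-initialisation strategy relies on.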

  20. Categorization of extremely brief auditory stimuli: domain-specific or domain-general processes?

    PubMed

    Bigand, Emmanuel; Delbé, Charles; Gérard, Yannick; Tillmann, Barbara

    2011-01-01

    The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings were reported. 1) All stimuli were categorized above chance level with 50-ms segments. 2) When a peak-level normalization was applied, music and voices started to be accurately categorized with 20-ms segments. When the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were better recognized than music and environmental sounds. 3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelope within the stimulus set used. These last two findings challenge the interpretation of the voice superiority effect reported in previously published studies and suggest a more parsimonious interpretation in terms of an emerging property of auditory categorization processes.
