Sample records for segment weight optimization

  1. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied image enhancement techniques with various parameter settings to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. After shape features were computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble of weak segmentors. For our purpose, optimal segmentors are those in the ensemble which contribute the most to the overall classification, rather than the ones that produce the highest-precision segmentations. To measure the segmentors' contributions, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
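    The weight-averaging step described above can be sketched as follows. This is a minimal illustration, not the paper's code: the feature names, coefficient values, and feature-to-segmentor mapping are invented for the example.

```python
# Rank "weak segmentors" by the average absolute weight that a trained
# logistic-regression model assigns to their features.

def average_feature_weight(coefs, feature_segmentor):
    """coefs: {feature: weight}; feature_segmentor: {feature: segmentor id}."""
    totals, counts = {}, {}
    for feat, w in coefs.items():
        seg = feature_segmentor[feat]
        totals[seg] = totals.get(seg, 0.0) + abs(w)
        counts[seg] = counts.get(seg, 0) + 1
    return {seg: totals[seg] / counts[seg] for seg in totals}

# Hypothetical coefficients from a fitted logistic-regression model.
coefs = {"compactness_a": 1.2, "spiculation_a": -0.8,
         "compactness_b": 0.3, "spiculation_b": 0.1}
feature_segmentor = {"compactness_a": "A", "spiculation_a": "A",
                     "compactness_b": "B", "spiculation_b": "B"}
ranking = average_feature_weight(coefs, feature_segmentor)
# Segmentor "A" contributes more to the classifier than "B".
```
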

  2. Optimal graph search segmentation using arc-weighted graph for simultaneous surface detection of bladder and prostate.

    PubMed

    Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan

    2009-01-01

    We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. A soft smoothness a priori shape-compliance term is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be achieved simultaneously by solving a maximum flow problem on an arc-weighted graph representation. Representing the segmentation problem with an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, which admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of bladder and prostate. The results were quite encouraging in spite of the low saliency of the bladder and prostate in CT images.

  3. Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-03-01

    During the last couple of decades, the development of computerized image segmentation shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem, whose solution is a vector of weighting factors measuring how reliable the detected landmarks are. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into a landmark-based game-theoretic segmentation framework and validated against the problem of lung field segmentation from chest radiographs.
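    A minimal sketch of the reinforcement idea, under two stated assumptions: the reliability weights here come from a simple inverse-distance-to-mean-shape rule (a stand-in for the paper's nonlinear optimization), and the landmark coordinates are toy values.

```python
# Fold newly detected landmarks back into a training set with
# per-landmark reliability weights.

def reliability_weights(detected, mean_shape):
    """Weight each detected landmark by closeness to the mean shape."""
    weights = []
    for (x, y), (mx, my) in zip(detected, mean_shape):
        d = ((x - mx) ** 2 + (y - my) ** 2) ** 0.5
        weights.append(1.0 / (1.0 + d))
    return weights

def reinforce(reference, detected, weights):
    """Weighted blend of reference and detected landmark positions."""
    return [((1 - w) * rx + w * dx, (1 - w) * ry + w * dy)
            for (rx, ry), (dx, dy), w in zip(reference, detected, weights)]

mean_shape = [(0.0, 0.0), (10.0, 0.0)]
reference = [(0.0, 0.0), (10.0, 0.0)]
detected = [(0.0, 0.0), (13.0, 4.0)]   # second landmark is far off -> low weight
w = reliability_weights(detected, mean_shape)
blended = reinforce(reference, detected, w)
```

Unreliable detections are thus pulled toward the reference landmarks instead of being adopted outright.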

  4. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction was developed; the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
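    The scale-selection step can be illustrated with a toy weighted mean-variance score: each candidate scale is scored by the area-weighted variance of its segment means, and the scale with the best separation wins. The scoring formula and data are illustrative assumptions; the paper's improved method differs in detail.

```python
# Pick an "optimal scale" by an area-weighted between-segment variance:
# higher variance between segment means = better-separated objects.

def weighted_mean_variance(segments):
    """segments: lists of pixel values; area-weighted variance of segment means."""
    total = sum(len(s) for s in segments)
    overall = sum(sum(s) for s in segments) / total
    return sum(len(s) / total * (sum(s) / len(s) - overall) ** 2
               for s in segments)

# Candidate scales -> toy segmentations of the same 6 pixels.
candidates = {
    10: [[1, 1, 9], [9, 1, 9]],   # fine scale, mixed segments
    20: [[1, 1, 1], [9, 9, 9]],   # segments match the two objects
}
best_scale = max(candidates, key=lambda s: weighted_mean_variance(candidates[s]))
```
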

  5. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction was developed; the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  6. Leaf position optimization for step-and-shoot IMRT.

    PubMed

    De Gersem, W; Claus, F; De Wagter, C; Van Duyse, B; De Neve, W

    2001-12-01

    To describe the theoretical basis, the algorithm, and implementation of a tool that optimizes segment shapes and weights for step-and-shoot intensity-modulated radiation therapy delivered by multileaf collimators. The tool, called SOWAT (Segment Outline and Weight Adapting Tool) is applied to a set of segments, segment weights, and corresponding dose distribution, computed by an external dose computation engine. SOWAT evaluates the effects of changing the position of each collimating leaf of each segment on an objective function, as follows. Changing a leaf position causes a change in the segment-specific dose matrix, which is calculated by a fast dose computation algorithm. A weighted sum of all segment-specific dose matrices provides the dose distribution and allows computation of the value of the objective function. Only leaf position changes that comply with the multileaf collimator constraints are evaluated. Leaf position changes that tend to decrease the value of the objective function are retained. After several possible positions have been evaluated for all collimating leaves of all segments, an external dose engine recomputes the dose distribution, based on the adapted leaf positions and weights. The plan is evaluated. If the plan is accepted, a segment sequencer is used to make the prescription files for the treatment machine. Otherwise, the user can restart SOWAT using the new set of segments, segment weights, and corresponding dose distribution. The implementation was illustrated using two example cases. The first example is a T1N0M0 supraglottic cancer case that was distributed as a multicenter planning exercise by investigators from Rotterdam, The Netherlands. The exercise involved a two-phase plan. Phase 1 involved the delivery of 46 Gy to a concave-shaped planning target volume (PTV) consisting of the primary tumor volume and the elective lymph nodal regions II-IV on both sides of the neck. 
Phase 2 involved a boost of 24 Gy to the primary tumor region only. SOWAT was applied to the Phase 1 plan. Parotid sparing was a planning goal. The second implementation example is an ethmoid sinus cancer case, planned with the intent of bilateral visus sparing. The median PTV prescription dose was 70 Gy with a maximum dose constraint to the optic pathway structures of 60 Gy. The initial set of segments, segment weights, and corresponding dose distribution were obtained, respectively, by an anatomy-based segmentation tool, a segment weight optimization tool, and a differential scatter-air ratio dose computation algorithm as external dose engine. For the supraglottic case, this resulted in a plan that proved to be comparable to the plans obtained at the other institutes by forward or inverse planning techniques. After using SOWAT, the minimum PTV dose and PTV dose homogeneity increased; the maximum dose to the spinal cord decreased from 38 Gy to 32 Gy. The left parotid mean dose decreased from 22 Gy to 19 Gy and the right parotid mean dose from 20 to 18 Gy. For the ethmoid sinus case, the target homogeneity increased by leaf position optimization, together with a better sparing of the optical tracts. By using SOWAT, the plans improved with respect to all plan evaluation end points. Compliance with the multileaf collimator constraints is guaranteed. The treatment delivery time remains almost unchanged, because no additional segments are created.
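    The core SOWAT evaluation loop lends itself to a compact sketch: because the dose is a weighted sum of per-segment dose matrices, a candidate leaf move only requires updating one segment's matrix before re-evaluating the objective. The 1-D "dose matrices" and quadratic objective below are toy stand-ins, not the clinical dose model.

```python
# Greedy evaluation of a leaf-position change: recompute one segment's
# dose contribution, re-evaluate the objective, keep only improving moves.

def total_dose(segment_doses, weights):
    n = len(segment_doses[0])
    return [sum(w * d[i] for d, w in zip(segment_doses, weights))
            for i in range(n)]

def objective(dose, prescription):
    return sum((d - p) ** 2 for d, p in zip(dose, prescription))

prescription = [2.0, 2.0, 0.0]
weights = [1.0, 1.0]
segment_doses = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]   # current leaf settings
candidate = [1.0, 1.0, 0.0]                          # segment 2 after a leaf move

before = objective(total_dose(segment_doses, weights), prescription)
trial = objective(total_dose([segment_doses[0], candidate], weights), prescription)
if trial < before:                                   # retain improving leaf moves
    segment_doses[1] = candidate
```
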

  7. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding have become an NP-hard problem at the same time. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agents by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, and that they are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
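    Kapur's entropy, the fitness that MDGWO maximizes, can be computed directly; for a tiny histogram the best single threshold can even be found by brute force (the metaheuristic matters once several thresholds make exhaustive search infeasible). The histogram below is invented for illustration.

```python
import math

def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the classes cut out by `thresholds`."""
    total = sum(hist)
    bounds = [0] + list(thresholds) + [len(hist)]
    score = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        p_class = sum(hist[lo:hi]) / total
        if p_class == 0:
            continue
        for h in hist[lo:hi]:
            if h:
                p = h / total / p_class   # within-class probability
                score -= p * math.log(p)
    return score

hist = [10, 10, 0, 0, 10, 10]   # two well-separated intensity modes
best_t = max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, (t,)))
# The best threshold falls in the empty gap between the two modes.
```
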

  8. Direct aperture optimization using an inverse form of back-projection.

    PubMed

    Zhu, Xiaofeng; Cullip, Timothy; Tracton, Gregg; Tang, Xiaoli; Lian, Jun; Dooley, John; Chang, Sha X

    2014-03-06

    Direct aperture optimization (DAO) has been used to produce high dosimetric quality intensity-modulated radiotherapy (IMRT) treatment plans with fast treatment delivery by directly modeling the multileaf collimator segment shapes and weights. To improve plan quality and reduce treatment time for our in-house treatment planning system, we implemented a new DAO approach without using a global objective function (GFO). An index concept is introduced as an inverse form of back-projection used in the CT multiplicative algebraic reconstruction technique (MART). The index, introduced for IMRT optimization in this work, is analogous to the multiplicand in MART. The index is defined as the ratio of the optima over the current. It is assigned to each voxel and beamlet to optimize the fluence map. The indices for beamlets and segments are used to optimize multileaf collimator (MLC) segment shapes and segment weights, respectively. Preliminary data show that without sacrificing dosimetric quality, the implementation of the DAO reduced average IMRT treatment time from 13 min to 8 min for the prostate, and from 15 min to 9 min for the head and neck using our in-house treatment planning system PlanUNC. The DAO approach has also shown promise in optimizing rotational IMRT with burst mode in a head and neck test case.
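    The index concept can be sketched as a MART-style multiplicative update: each beamlet is scaled by the ratio of the desired dose to the currently delivered dose in the voxels it irradiates. The two-voxel, two-beamlet geometry below is an illustrative assumption, not the paper's clinical setup.

```python
# MART-like multiplicative fluence update driven by the per-voxel
# "index" = optimum / current.

def mart_update(fluence, contrib, desired, iters=50):
    """contrib[v][b]: dose to voxel v per unit fluence of beamlet b."""
    for _ in range(iters):
        current = [sum(c * f for c, f in zip(row, fluence)) for row in contrib]
        idx = [d / c for d, c in zip(desired, current)]   # index per voxel
        for b in range(len(fluence)):
            # Scale each beamlet by a contribution-weighted mean of the
            # indices of the voxels it irradiates.
            num = sum(contrib[v][b] * idx[v] for v in range(len(idx)))
            den = sum(contrib[v][b] for v in range(len(idx)))
            fluence[b] *= num / den
    return fluence

contrib = [[1.0, 0.2],    # voxel 0: mostly beamlet 0
           [0.2, 1.0]]    # voxel 1: mostly beamlet 1
desired = [1.0, 0.5]
fluence = mart_update([1.0, 1.0], contrib, desired)
current = [sum(c * f for c, f in zip(row, fluence)) for row in contrib]
# `current` converges toward `desired` as the indices approach 1.
```
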

  9. Discriminative spatial-frequency-temporal feature extraction and classification of motor imagery EEG: A sparse regression and Weighted Naïve Bayesian Classifier-based approach.

    PubMed

    Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang

    2017-02-15

    Common spatial pattern (CSP) is the most widely used method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies than several competing methods in the literature and is a promising candidate for future BCI systems.
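    The final classification stage can be sketched as a Gaussian naive Bayes whose per-feature log-likelihoods are scaled by weights; here the weights stand in for the sparse-regression coefficients, and the data are toy values rather than EEG features.

```python
import math

def fit(X, y):
    """Per-class, per-feature mean and variance for Gaussian naive Bayes."""
    stats = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        stats[c] = []
        for j in range(len(X[0])):
            vals = [r[j] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-6
            stats[c].append((mu, var))
    return stats

def predict(stats, x, w):
    def score(c):
        s = 0.0
        for j, (mu, var) in enumerate(stats[c]):
            ll = -0.5 * math.log(2 * math.pi * var) - (x[j] - mu) ** 2 / (2 * var)
            s += w[j] * ll          # feature weight scales its log-likelihood
        return s
    return max(stats, key=score)

X = [[0.0, 5.0], [0.2, 5.1], [1.0, 0.0], [1.2, 0.1]]
y = [0, 0, 1, 1]
stats = fit(X, y)
label = predict(stats, [0.1, 4.9], [1.0, 1.0])
```
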

  10. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; the parameter λ adjusts the weight of the pixel's local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experiments show that the novel fuzzy C-means approach performs efficiently, in both segmentation quality and computation time, when segmenting images corrupted by different types of noise.
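    A minimal 1-D sketch of fuzzy C-means with a λ-weighted local term: the distance of a pixel to a cluster center mixes the pixel's own value with the mean of its neighbors, weighted by `lam`. This is a simplified reading of the formulation above (two clusters assumed), not the paper's exact model.

```python
def fcm(pixels, m=2.0, lam=0.5, iters=30, eps=1e-12):
    """Two-cluster fuzzy C-means on a 1-D signal with a local-mean term."""
    n, c = len(pixels), 2
    # Local information: mean of each pixel's 1-D neighborhood.
    local = [sum(pixels[max(0, i - 1):i + 2]) / len(pixels[max(0, i - 1):i + 2])
             for i in range(n)]
    centers = [min(pixels), max(pixels)]
    for _ in range(iters):
        # Lambda-mixed squared distances to each center.
        dist = [[(p - v) ** 2 + lam * (l - v) ** 2 + eps for v in centers]
                for p, l in zip(pixels, local)]
        # Standard FCM membership update.
        u = [[1.0 / sum((d[k] / d[j]) ** (1.0 / (m - 1.0)) for j in range(c))
              for k in range(c)] for d in dist]
        # Standard FCM center update.
        centers = [sum(u[i][k] ** m * pixels[i] for i in range(n)) /
                   sum(u[i][k] ** m for i in range(n)) for k in range(c)]
    return centers, u

pixels = [0.0, 0.1, 0.0, 5.0, 5.1, 5.0]
centers, u = fcm(pixels)
```
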

  11. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of a high-resolution image, such as a T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of the low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of the low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumors using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained in comparison with conventional multi-channel segmentation algorithms.

  12. Weights and topology: a study of the effects of graph construction on 3D image segmentation.

    PubMed

    Grady, Leo; Jolly, Marie-Pierre

    2008-01-01

    Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
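    The kind of weighting function studied can be illustrated with the common Gaussian choice, w_ij = exp(-beta * (I_i - I_j)^2), over a 4-connected grid; the paper compares several such functions and connectivities, so this is one representative instance with a made-up image.

```python
import math

def edge_weights(image, beta=0.1):
    """4-connected Gaussian intensity weights for a 2-D grid of intensities."""
    h, w = len(image), len(image[0])
    weights = {}
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):        # right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    diff = image[y][x] - image[ny][nx]
                    weights[((y, x), (ny, nx))] = math.exp(-beta * diff * diff)
    return weights

image = [[0, 0, 9],
         [0, 0, 9]]
w = edge_weights(image)
# The edge crossing the 0|9 boundary gets a much smaller weight than an
# edge inside the flat region, so a min-cut prefers to cut there.
```
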

  13. In-plane structuring of proton exchange membrane fuel cell cathodes: Effect of ionomer equivalent weight structuring on performance and current density distribution

    NASA Astrophysics Data System (ADS)

    Herden, Susanne; Riewald, Felix; Hirschfeld, Julian A.; Perchthaler, Markus

    2017-07-01

    Within the active area of a fuel cell, inhomogeneous operating conditions occur; state-of-the-art electrodes, however, are homogeneous over the complete active area. This study uses current density distribution measurements to analyze which ionomer equivalent weight (EW) yields the highest local current densities. With this information, a segmented cathode electrode is manufactured by decal transfer. The segmented electrode shows better performance, especially at high current densities, compared to homogeneous electrodes. Furthermore, this segmented catalyst coated membrane (CCM) performs optimally in wet as well as dry conditions; both operating conditions arise in automotive fuel cell applications. Thus, cathode electrodes with an optimized ionomer EW distribution might have a significant impact on future automotive fuel cell development.

  14. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem of traditional particle swarm optimization in the process of estimating a magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
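    The adaptive-inertia idea can be sketched in a few lines of particle swarm optimization: when the swarm's spread signals premature convergence, the inertia weight is raised to restore exploration. The spread-based indicator and all constants are illustrative stand-ins for the paper's indicator, and the test function is a toy 1-D quadratic rather than a bias-field fit.

```python
import random

def pso(f, lo, hi, n=30, iters=200, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]
    gbest = min(x, key=f)
    vmax = 0.2 * (hi - lo)
    for _ in range(iters):
        # Premature-convergence indicator: spread of the swarm.
        spread = max(x) - min(x)
        w = 0.9 if spread < 0.05 * (hi - lo) else 0.4   # adapt inertia weight
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            v[i] = w * v[i] + 2.0 * r1 * (pbest[i] - x[i]) + 2.0 * r2 * (gbest - x[i])
            v[i] = max(-vmax, min(vmax, v[i]))          # clamp velocity
            x[i] = max(lo, min(hi, x[i] + v[i]))        # clamp position
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda t: (t - 3.0) ** 2, 0.0, 10.0)
```
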

  15. Seamline Determination Based on PKGC Segmentation for Remote Sensing Image Mosaicking

    PubMed Central

    Dong, Qiang; Liu, Jinghong

    2017-01-01

    This paper presents a novel method of seamline determination for remote sensing image mosaicking. A two-level optimization strategy is applied to determine the seamline. Object-level optimization is executed first: background regions (BRs) and obvious regions (ORs) are extracted based on the results of parametric kernel graph cuts (PKGC) segmentation, and the global cost map, which consists of color difference, a multi-scale morphological gradient (MSMG) constraint, and texture difference, is weighted by the BRs. Finally, the seamline is determined in the weighted cost map from the start point to the end point; Dijkstra's shortest path algorithm is adopted for pixel-level optimization to determine the position of the seamline. Meanwhile, a new seamline optimization strategy is proposed for image mosaicking with multi-image overlapping regions. The experimental results show better performance than the conventional method based on mean-shift segmentation: seamlines based on the proposed method bypass obvious objects and take less execution time. This new method is efficient and superior for seamline determination in remote sensing image mosaicking. PMID:28749446
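    The pixel-level step can be sketched directly: Dijkstra's shortest path over a cost map from a start pixel to an end pixel. The tiny cost grid is invented; its low-cost cells play the role of background regions the seamline should follow.

```python
import heapq

def seamline(cost, start, end):
    """Dijkstra's shortest path over a 2-D cost grid (4-connected)."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == end:
            break
        if d > dist[(y, x)]:
            continue                      # stale queue entry
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    path, node = [], end                  # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

cost = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path = seamline(cost, (0, 0), (0, 2))
# The path detours around the high-cost column instead of crossing it.
```
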

  16. Acoustic-noise-optimized diffusion-weighted imaging.

    PubMed

    Ott, Martin; Blaimer, Martin; Grodzki, David M; Breuer, Felix A; Roesch, Julie; Dörfler, Arnd; Heismann, Björn; Jakob, Peter M

    2015-12-01

    This work was aimed at reducing acoustic noise in diffusion-weighted MR imaging (DWI) that might reach acoustic noise levels of over 100 dB(A) in clinical practice. A diffusion-weighted readout-segmented echo-planar imaging (EPI) sequence was optimized for acoustic noise by utilizing small readout segment widths to obtain low gradient slew rates and amplitudes instead of faster k-space coverage. In addition, all other gradients were optimized for low slew rates. Volunteer and patient imaging experiments were conducted to demonstrate the feasibility of the method. Acoustic noise measurements were performed and analyzed for four different DWI measurement protocols at 1.5T and 3T. An acoustic noise reduction of up to 20 dB(A) was achieved, which corresponds to a fourfold reduction in acoustic perception. The image quality was preserved at the level of a standard single-shot (ss)-EPI sequence, with a 27-54% increase in scan time. The diffusion-weighted imaging technique proposed in this study allowed a substantial reduction in the level of acoustic noise compared to standard single-shot diffusion-weighted EPI. This is expected to afford considerably more patient comfort, but a larger study would be necessary to fully characterize the subjective changes in patient experience.

  17. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. Firstly, we use a bee colony algorithm to optimize the network parameters of the wavelet neural network (the network structure, initial weights, threshold values, and so on), so that training quickly converges to high precision and avoids falling into relative extrema; then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve an automatic and accurate segmentation effect. Medical image segmentation experiments show that the proposed algorithm can reduce sample training time effectively and improve convergence precision, and the segmentation effect is more accurate and effective than that of a traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained according to the error backward propagation algorithm).

  18. Irradiation of the prostate and pelvic lymph nodes with an adaptive algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, A. B.; Chen, J.; Nguyen, T. B.

    2012-02-15

    Purpose: The simultaneous treatment of pelvic lymph nodes and the prostate in radiotherapy for prostate cancer is complicated by the independent motion of these two target volumes. In this work, the authors study a method to adapt intensity modulated radiation therapy (IMRT) treatment plans so as to compensate for this motion by adaptively morphing the multileaf collimator apertures and adjusting the segment weights. Methods: The study used CT images, tumor volumes, and normal tissue contours from patients treated in our institution. An IMRT treatment plan was then created using direct aperture optimization to deliver 45 Gy to the pelvic lymph nodes and 50 Gy to the prostate and seminal vesicles. The prostate target volume was then shifted in either the anterior-posterior direction or in the superior-inferior direction. The treatment plan was adapted by adjusting the aperture shapes with or without re-optimizing the segment weighting. The dose to the target volumes was then determined for the adapted plan. Results: Without compensation for prostate motion, 1 cm shifts of the prostate resulted in an average decrease of 14% in D-95%. If the isocenter is simply shifted to match the prostate motion, the prostate receives the correct dose but the pelvic lymph nodes are underdosed by 14% ± 6%. The use of adaptive morphing (with or without segment weight optimization) reduces the average change in D-95% to less than 5% for both the pelvic lymph nodes and the prostate. Conclusions: Adaptive morphing with and without segment weight optimization can be used to compensate for the independent motion of the prostate and lymph nodes when combined with daily imaging or other methods to track the prostate motion. This method allows the delivery of the correct dose to both the prostate and lymph nodes with only small changes to the dose delivered to the target volumes.

  19. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30 degrees or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - common aircraft sizing variables - along with a set of "morphing limit" variables that describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes the fuel consumed during its mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.

  20. Analysis of Inertia Variables in the Fuzzy C-Means and Improved Cat Swarm Optimization Algorithm (FCM-ISO) in Search Segmentation

    NASA Astrophysics Data System (ADS)

    Saragih, Jepronel; Salim Sitompul, Opim; Situmorang, Zakaria

    2017-12-01

    One of the techniques known in Data Mining is clustering. The image segmentation process does not always represent the actual image, because the combined algorithms have not yet obtained optimal cluster centers. This research searches for the smallest error in the results of a Fuzzy C-Means process optimized with the Cat Swarm Optimization algorithm, which has been developed by adding an inertia weight to the Tracing Mode process. With this parameter, the most optimal cluster centers, those closest to the data, can be determined and used to form the clusters. The inertia weights examined in this research are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9. The results for the different values of the inertia variable (W) are then compared and the smallest is taken. From this weighting analysis, the inertia variable that produces the smallest cost function can be identified.

  1. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
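
    The core idea of this record, simulated annealing over aperture shapes and weights simultaneously to match a target intensity map, can be sketched in one dimension. Everything below is a toy stand-in, not the CIMO implementation: the 1-D "aperture" model, the target profile, and the cooling schedule are all assumptions.

```python
import math
import random

def delivered(segments, n):
    """Fluence delivered by weighted 1-D 'apertures' (start, end, weight)."""
    f = [0.0] * n
    for a, b, w in segments:
        for i in range(a, b):
            f[i] += w
    return f

def cost(segments, target):
    """Squared difference between delivered and target intensity maps."""
    d = delivered(segments, len(target))
    return sum((x - y) ** 2 for x, y in zip(d, target))

def anneal(target, n_segments=3, iters=4000, seed=1):
    """Simulated annealing over aperture edges AND weights at once."""
    rng = random.Random(seed)
    n = len(target)
    segs = [[0, n, 1.0] for _ in range(n_segments)]
    c = cost(segs, target)
    best, best_c = [list(s) for s in segs], c
    for it in range(iters):
        temp = max(1e-9, 1.0 - it / iters)
        trial = [list(s) for s in segs]
        s = rng.choice(trial)
        k = rng.randrange(3)
        if k < 2:                      # move an aperture edge by one bixel
            s[k] = min(n, max(0, s[k] + rng.choice([-1, 1])))
            if s[0] > s[1]:
                s[0], s[1] = s[1], s[0]
        else:                          # perturb the segment weight
            s[2] = max(0.0, s[2] + rng.gauss(0.0, 0.1))
        c2 = cost(trial, target)
        if c2 < c or rng.random() < math.exp((c - c2) / temp):
            segs, c = trial, c2
            if c < best_c:
                best, best_c = [list(s) for s in segs], c
    return best, best_c

target = [0, 1, 3, 3, 2, 2, 1, 0]      # toy 1-D intensity profile
segs, final_cost = anneal(target)
```

    Note that, as in the abstract, no discretization of the intensity map into levels is needed: the continuous weights are part of the annealing state.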

  2. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    PubMed

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.

  3. Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution

    PubMed Central

    Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang

    2015-01-01

    Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
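
    A single-layer baseline of the patch-based label fusion this record builds on can be sketched as Gaussian-weighted voting. This is the conventional starting point the paper improves upon, not the dictionary-evolution method itself; the patch values, labels, and `sigma` below are illustrative assumptions.

```python
import math

def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    """Gaussian-weighted patch voting: atlas patches closer to the target
    patch in intensity receive larger label-fusion weights."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        dist2 = sum((a - b) ** 2 for a, b in zip(patch, target_patch))
        w = math.exp(-dist2 / (2.0 * sigma ** 2))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# one target patch, three atlas patches with known labels
target = [1.0, 1.0, 0.0]
atlas_patches = [[1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]
atlas_labels = ["hippocampus", "background", "hippocampus"]
label = fuse_label(target, atlas_patches, atlas_labels)
```

    The paper's contribution is precisely that these image-domain weights are not optimal in the label domain; its layered dictionaries gradually transform them before the vote.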

  4. Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang

    2015-10-01

    Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary.

  5. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
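
    The adaptive voxel-wise weighting described in this record can be sketched as follows. The margin-between-top-two-classes stand-in for the modality discriminatory power (MDP) coefficient, and the toy probability values, are assumptions for illustration, not the paper's exact definition.

```python
def discriminatory_power(probs):
    """Margin between the two most probable classes: a simple stand-in
    for the paper's modality discriminatory power (MDP) coefficient."""
    s = sorted(probs, reverse=True)
    return s[0] - s[1]

def fuse(pet_probs, ct_probs):
    """Voxel-wise fusion: the modality with the larger margin gets the
    larger weight. Inputs are per-voxel class-probability lists."""
    fused = []
    for p, c in zip(pet_probs, ct_probs):
        wp, wc = discriminatory_power(p), discriminatory_power(c)
        total = (wp + wc) or 1.0            # avoid division by zero
        a = wp / total
        fused.append([a * pi + (1 - a) * ci for pi, ci in zip(p, c)])
    return fused

# one voxel where PET is confident, one where CT is confident
pet = [[0.8, 0.1, 0.1], [0.34, 0.33, 0.33]]
ct  = [[0.4, 0.3, 0.3], [0.1, 0.8, 0.1]]
out = fuse(pet, ct)
```

    In each voxel the more discriminative modality dominates the fused class probabilities, which is the behavior the abstract describes.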

  6. Resonant Raman scattering of controlled molecular weight polyacetylene

    NASA Astrophysics Data System (ADS)

    Schen, M. A.; Chien, J. C. W.; Perrin, E.; Lefrant, S.; Mulazzi, E.

    1988-12-01

    Polyacetylene, (CH)x, films of 500, 5300, 10 500, and 100 000 Daltons number average molecular weight (Mn) were synthesized using the titanium tetra-n-butoxide/triethyl aluminum catalyst/cocatalyst system and examined using resonant Raman scattering (RRS) techniques. Before isomerization, trans segments are found to exist mainly as short, isolated sequences independent of Mn. After thermal isomerization, theoretical analysis of the RRS spectra using the Brivio-Mulazzi model indicates that the ratio of long trans conjugated segments (N≥30) to short trans conjugated segments (N≤30), denoted G, is significantly larger for the 100 000 Dalton polymer in comparison to polymer of 10 500 Mn and below. For samples below 10 500 Daltons, no clear relationship between actual polymer molecular weight and G is observed. Optimization of the isomerization conditions for the 100 000 Dalton polymer results in trans-(CH)x with G=0.80. These results suggest that only when very long molecular chains are obtained can samples composed principally of long conjugated segments be obtained. It is proposed that defects which arise during and after the polymerization limit the content of long segments. Ambient, short term oxidation of the 100 000 Mn polymer shows a decrease in G from 0.80 to 0.70. Low level chain oxidation or doping is shown to preferentially occur within long conjugated segments.

  7. Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.

    1996-01-01

    A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.

  8. Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.

    PubMed

    Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T

    2013-01-01

    Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.

  9. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, the motion data is first segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: first, we automate the computation of the optimal weights that are assigned to each motion feature used in our metric; second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.

  10. Surface-region context in optimal multi-object graph-based segmentation: robust delineation of pulmonary tumors.

    PubMed

    Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong

    2011-01-01

    Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem, in which target objects of arbitrary shape mutually interact with terrain-like surfaces, which widely exists in the medical imaging field. The approach incorporates context information used during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in a low-order polynomial time. The performance of the method was evaluated in robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 +/- 0.10) was improved to 0.84 +/- 0.05 when employing our new method for pulmonary tumor segmentation.
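
    The graph construction and single max-flow solve at the heart of this record can be illustrated on a tiny 1-D "image". This is a generic min-cut segmentation sketch (Edmonds-Karp max-flow on terminal and pairwise arcs), not the paper's multi-object arc-weighted model; the unary/pairwise capacities below are invented for illustration.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow. cap: dict-of-dicts of residual capacities.
    Returns (flow value, set of nodes on the source side of the min cut)."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                   # no path left: cut found
            return flow, set(parent)
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:          # push flow, update residuals
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0) + bottleneck
            v = u
        flow += bottleneck

# a 1x4 "image": terminal arcs encode unary costs, neighbor arcs smoothness
pixels = [0, 1, 2, 3]
cap = defaultdict(dict)
obj = [9, 7, 2, 1]                            # source arcs (object likelihood)
bkg = [1, 2, 8, 9]                            # sink arcs (background likelihood)
for p in pixels:
    cap["s"][p] = obj[p]
    cap[p]["t"] = bkg[p]
for p, q in [(0, 1), (1, 2), (2, 3)]:
    cap[p][q] = cap[q][p] = 3                 # pairwise smoothness arcs

value, source_side = max_flow(cap, "s", "t")
labels = [1 if p in source_side else 0 for p in pixels]
```

    The globally optimal labeling falls out of a single max-flow computation, mirroring the "single maximum flow problem in low-order polynomial time" claim; the paper's contribution is the weighted inter-graph arcs that couple multiple such sub-graphs.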

  11. Comparison of unsupervised classification methods for brain tumor segmentation using multi-parametric MRI.

    PubMed

    Sauwen, N; Acou, M; Van Cauter, S; Sima, D M; Veraart, J; Maes, F; Himmelreich, U; Achten, E; Van Huffel, S

    2016-01-01

    Tumor segmentation is a particularly challenging task in high-grade gliomas (HGGs), as they are among the most heterogeneous tumors in oncology. An accurate delineation of the lesion and its main subcomponents contributes to optimal treatment planning, prognosis and follow-up. Conventional MRI (cMRI) is the imaging modality of choice for manual segmentation, and is also considered in the vast majority of automated segmentation studies. Advanced MRI modalities such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI) and magnetic resonance spectroscopic imaging (MRSI) have already shown their added value in tumor tissue characterization, hence there have been recent suggestions of combining different MRI modalities into a multi-parametric MRI (MP-MRI) approach for brain tumor segmentation. In this paper, we compare the performance of several unsupervised classification methods for HGG segmentation based on MP-MRI data including cMRI, DWI, MRSI and PWI. Two independent MP-MRI datasets with a different acquisition protocol were available from different hospitals. We demonstrate that a hierarchical non-negative matrix factorization variant which was previously introduced for MP-MRI tumor segmentation gives the best performance in terms of mean Dice-scores for the pathologic tissue classes on both datasets.

  12. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations, in terms of spatial distribution and number, is named sampling design, and it was historically addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
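
    The modularity index that underlies this record's segmentation metric can be computed for a small weighted graph as follows. This sketch uses the classical Newman-Girvan modularity Q = Σ_c (e_c − a_c²), not the paper's WDN-oriented or sampling-oriented variants, and the toy "two triangles joined by one pipe" network is invented for illustration.

```python
def modularity(edges, part):
    """Newman-Girvan modularity Q = sum_c (e_c - a_c^2) for an undirected,
    weighted graph. edges: (u, v, w) tuples; part: node -> module id."""
    m = float(sum(w for _, _, w in edges))
    inside, degree = {}, {}
    for u, v, w in edges:
        degree[u] = degree.get(u, 0.0) + w
        degree[v] = degree.get(v, 0.0) + w
        if part[u] == part[v]:
            inside[part[u]] = inside.get(part[u], 0.0) + w
    q = 0.0
    for c in set(part.values()):
        e_c = inside.get(c, 0.0) / m                  # edge weight inside c
        a_c = sum(d for n, d in degree.items()        # endpoint fraction in c
                  if part[n] == c) / (2.0 * m)
        q += e_c - a_c * a_c
    return q

# two triangles joined by a single pipe: the natural two-module split
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 1),
         (3, 4, 1), (4, 5, 1), (3, 5, 1), (2, 3, 1)]
two_modules = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, two_modules)
```

    A multiobjective optimizer of the kind described would trade a modularity score like this against the meter cost of each candidate segmentation.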

  13. Ionomer equivalent weight structuring in the cathode catalyst layer of automotive fuel cells: Effect on performance, current density distribution and electrochemical impedance spectra

    NASA Astrophysics Data System (ADS)

    Herden, Susanne; Hirschfeld, Julian A.; Lohri, Cyrill; Perchthaler, Markus; Haase, Stefan

    2017-10-01

    To improve the performance of proton exchange membrane fuel cells, membrane electrode assemblies (MEAs) with segmented cathode electrodes have been manufactured. Electrodes with a higher and lower ionomer equivalent weight (EW) were used and analyzed using current density and temperature distribution, polarization curve, temperature sweep and electrochemical impedance spectroscopy measurements. These were performed using automotive metallic bipolar plates and operating conditions. Measurement data were used to manufacture an optimized segmented cathode electrode. We were able to show that our results are transferable from a small scale hardware to automotive application and that an ionomer EW segmentation of the cathode leads to performance improvement in a broad spectrum of operating conditions. Furthermore, we confirmed our results by using in-situ electrochemical impedance spectroscopy.

  14. [A graph cuts-based interactive method for segmentation of magnetic resonance images of meningioma].

    PubMed

    Li, Shuan-qiang; Feng, Qian-jin; Chen, Wu-fan; Lin, Ya-zhong

    2011-06-01

    For accurate segmentation of magnetic resonance (MR) images of meningioma, we propose a novel interactive segmentation method based on graph cuts. High-dimensional image features were extracted, and for each pixel the probabilities of its belonging to the tumor or the background region were estimated by exploiting a weighted K-nearest neighbor classifier. Based on these probabilities, a new energy function was proposed. Finally, a graph-cut optimization framework was used to minimize the energy function. The proposed method was evaluated by application to the segmentation of MR images of meningioma, and the results showed that the method significantly improved the segmentation accuracy compared with the gray-level-information-based graph cut method.

  15. A Super-Voxel-Based Riemannian Graph for Multi-Scale Segmentation of LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Li, Minglei

    2018-04-01

    Automatically segmenting LiDAR points into respective independent partitions has become a topic of great importance in photogrammetry, remote sensing and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by an octree-based over-segmentation with different depths. The over-segmentation produces many super voxels which restrict the structure of the scene and will be used as nodes of the graph. The Kruskal coordinates are used to compute edge weights that are proportional to the geodesic distance between nodes. Then we compute the edge-weight matrix in which the elements reflect the sectional curvatures associated with the geodesic paths between super voxel nodes on the scene surface. The final segmentation results are generated by clustering similar super voxels and cutting off the weak edges in the graph. The performance of this method was evaluated on LiDAR point clouds for both indoor and outdoor scenes. Additionally, extensive comparisons to state-of-the-art techniques show that our algorithm outperforms them on many metrics.
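
    The final step this record describes, cutting weak edges and clustering the surviving super voxels, reduces to connected components on a thresholded graph. The sketch below assumes scalar similarity weights and a hand-picked threshold; the paper's geodesic/curvature weights are not reproduced here.

```python
def segment_supervoxels(nodes, edges, threshold):
    """Cut edges whose weight (similarity) falls below `threshold`, then
    return the connected components of what remains as segments."""
    parent = {n: n for n in nodes}
    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, w in edges:
        if w >= threshold:                # keep only strong (similar) edges
            parent[find(u)] = find(v)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return sorted(groups.values())

# five supervoxels in a chain; the 2-3 edge is weak and gets cut
nodes = [0, 1, 2, 3, 4]
edges = [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.2), (3, 4, 0.85)]
segments = segment_supervoxels(nodes, edges, threshold=0.5)
```

    Union-find keeps this near-linear in the number of edges, which matters when the octree over-segmentation produces many thousands of super voxels.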

  16. Optimal Design of Grid-Stiffened Panels and Shells With Variable Curvature

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Jaunky, Navin

    2001-01-01

    A design strategy for optimal design of composite grid-stiffened structures with variable curvature subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. Stiffening configuration is herein defined as a design variable that indicates the combination of axial, transverse and diagonal stiffeners in the stiffened panel. The design optimization process is adapted to identify the lightest-weight stiffening configuration and stiffener spacing for grid-stiffened composite panels given the overall panel dimensions, in-plane design loads, material properties, and boundary conditions of the grid-stiffened panel or shell.

  17. MRI segmentation using dialectical optimization.

    PubMed

    dos Santos, Wellington P; de Assis, Francisco M; de Souza, Ricardo E

    2009-01-01

    Biology, psychology and the social sciences are intrinsically connected to the very roots of the development of algorithms and methods in computational intelligence, as is easily seen in approaches like genetic algorithms, evolutionary programming and particle swarm optimization. In this work we propose a new optimization method based on dialectics, using fuzzy membership functions to model the influence of interactions between integrating poles on the status of each pole. Poles are the basic units composing dialectical systems. In order to validate our proposal we designed a segmentation method based on the optimization of k-means using dialectics for the segmentation of MR images. As a case study we used 181 synthetic multispectral MR images, composed of proton density, T(1)- and T(2)-weighted synthetic brain images of 181 slices with 1 mm slice thickness and a resolution of 1 mm(3), for a normal brain and a noiseless MR tomographic system without field inhomogeneities, amounting to a total of 543 images, generated by the simulator BrainWeb [2]. Our principal aim here is to compare our proposal to k-means, fuzzy c-means, and Kohonen's self-organizing maps with respect to the quantization error; we show that our method can improve on the results obtained using k-means.
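
    The k-means baseline and the quantization-error metric used for comparison in this record can be sketched as follows. This is plain Lloyd's k-means on invented 1-D data, a baseline only; the dialectical optimizer itself is not reproduced here.

```python
import random

def kmeans(data, k, iters=50, seed=0):
    """Plain Lloyd's k-means on 1-D data (the baseline the record's
    dialectical optimizer is compared against)."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda i: (x - centers[i]) ** 2)
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def quantization_error(data, centers):
    """Mean distance from each sample to its nearest cluster center,
    the comparison metric named in the abstract."""
    return sum(min(abs(x - c) for c in centers) for x in data) / len(data)

data = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]      # two well-separated 1-D clusters
centers = kmeans(data, 2)
qe = quantization_error(data, centers)
```

    Any competing optimizer, dialectical or otherwise, is judged by whether it drives this quantization error lower than the baseline does.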

  18. TU-AB-303-12: Towards Inter and Intra Fraction Plan Adaptation for the MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontaxis, C; Bol, G; Lagendijk, J

    Purpose: To develop a new sequencer for IMRT that during treatment can account for anatomy changes provided by online and real-time MRI. This sequencer employs a novel inter and intra fraction scheme that converges to the prescribed dose without a final segment weight optimization (SWO) and enables immediate optimization and delivery of radiation adapted to the deformed anatomy. Methods: The sequencer is initially supplied with a voxel-based dose prescription and during the optimization iteratively generates segments that provide this prescribed dose. Every iteration selects the best segment for the current anatomy state, calculates the dose it will deliver, warps it back to the reference prescription grid and subtracts it from the remaining prescribed dose. This process continues until a certain percentage of dose or a number of segments has been delivered. The anatomy changes that occur during treatment require that convergence is achieved without a final SWO. This is resolved by adding the difference between the prescribed and delivered dose up to this fraction to the prescription of the subsequent fraction. This process is repeated for all fractions of the treatment. Results: Two breast cases were selected to stress test the pipeline by producing artificial inter and intra fraction anatomy deformations using a combination of incrementally applied rigid transformations. The dose convergence of the adaptive scheme over the entire treatment, relative to the prescribed dose, was on average 8.6% higher than the static plans delivered to the respective deformed anatomies and only 1.6% less than the static segment weighted plans on the static anatomy. Conclusion: This new adaptive sequencing strategy enables dose convergence without the need of SWO while adapting the plan to intermediate anatomies, which is a prerequisite for online plan adaptation. We are now testing our pipeline on prostate cases using clinical anatomy deformation data from our department.
This work is financially supported by Elekta AB, Stockholm, Sweden.

  19. Efficient 3D multi-region prostate MRI segmentation using dual optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    Efficient and accurate extraction of the prostate, in particular its clinically meaningful sub-regions from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locating the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes the prior knowledge of the spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulted challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great advantages in numerics and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, by inter- and intra-operator variability, demonstrate the promising performance of the proposed approach.

  20. OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE

    NASA Technical Reports Server (NTRS)

    Lee, H.

    1994-01-01

    For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. 
If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt & Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
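
The short-range iteration described above, varying the optimum cruise cost until the computed range converges to the specified range, can be sketched with a toy model. The `cruise_range` function below is a hypothetical monotone stand-in for the program's full energy-state trajectory synthesis, and bisection stands in for whatever root-finding scheme the FORTRAN code actually uses.

```python
def cruise_range(cruise_cost):
    """Hypothetical stand-in for trajectory synthesis: the range obtained
    for a given optimum cruise cost, monotonically decreasing in cost."""
    return 5000.0 / cruise_cost

def solve_for_range(target_range, lo=0.1, hi=100.0, tol=1e-9):
    """Iteratively vary the optimum cruise cost (here by bisection) until
    the synthesized range converges to the specified range."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cruise_range(mid) > target_range:
            lo = mid       # range too long: a higher cruise cost shortens it
        else:
            hi = mid
    return 0.5 * (lo + hi)

cruise_cost = solve_for_range(2500.0)   # for this toy model, 5000/2500 = 2.0
```

Once the cruise cost matching the specified range is found, it uniquely determines the cruise energy, altitude, and Mach number, and thereby the climb and descent profiles, as the abstract explains.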

  1. Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-04-01

    We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2 weighted magnetic resonance (MR) images, enforcing inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility, independent of observers.

  2. Accounting for the Confound of Meninges in Segmenting Entorhinal and Perirhinal Cortices in T1-Weighted MRI.

    PubMed

    Xie, Long; Wisse, Laura E M; Das, Sandhitsu R; Wang, Hongzhi; Wolk, David A; Manjón, Jose V; Yushkevich, Paul A

    2016-10-01

    Quantification of medial temporal lobe (MTL) cortices, including the entorhinal cortex (ERC) and perirhinal cortex (PRC), from in vivo MRI is desirable for studying the human memory system as well as for early diagnosis and monitoring of Alzheimer's disease. However, the ERC and PRC are commonly over-segmented in T1-weighted (T1w) MRI because the adjacent meninges have similar intensity to gray matter in T1 contrast. This introduces errors in the quantification and could potentially confound imaging studies of ERC/PRC. In this paper, we propose to segment MTL cortices along with the adjacent meninges in T1w MRI using an established multi-atlas segmentation framework together with a super-resolution technique. Experimental results comparing the proposed pipeline with existing pipelines support the notion that a large portion of the meninges is segmented as gray matter by existing algorithms but not by our algorithm. Cross-validation experiments demonstrate promising segmentation accuracy. Further, agreement between the volume and thickness measures from the proposed pipeline and those from the manual segmentations increases dramatically as a result of accounting for the confound of meninges. Evaluated in the context of group discrimination between patients with amnestic mild cognitive impairment and normal controls, the proposed pipeline generates more biologically plausible results and improves the statistical power in discriminating groups in absolute terms compared to other techniques using T1w MRI. Although the performance of the proposed pipeline is inferior to that using T2-weighted MRI, which is optimized for imaging MTL sub-structures, the proposed pipeline could still provide important utility in analyzing many existing large datasets that only have T1w MRI available.

  3. Comparison of atlas-based techniques for whole-body bone segmentation.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2017-02-01

    We evaluate the accuracy of whole-body bone extraction from whole-body MR images using a number of atlas-based segmentation methods. The motivation behind this work is to find the most promising approach for the purpose of MRI-guided derivation of PET attenuation maps in whole-body PET/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean square distance (MSD) as image similarity measures for calculating the weighting factors, along with other atlas-dependent algorithms, such as (v) shape-based averaging (SBA) and (vi) Hofmann's pseudo-CT generation method. The performance of the different segmentation techniques was evaluated in terms of bone extraction accuracy from whole-body MRI using standard metrics, such as the Dice similarity coefficient (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice criterion, global weighting atlas fusion methods provided moderate improvement of whole-body bone segmentation (DSC = 0.65 ± 0.05) compared to non-weighted IA (DSC = 0.60 ± 0.02). The local weighted atlas fusion approach using the MSD similarity measure outperformed the other strategies by achieving a DSC of 0.81 ± 0.03, while using the NCC and NMI measures resulted in a DSC of 0.78 ± 0.05 and 0.75 ± 0.04, respectively. Despite very long computation times, the extracted bone obtained from both the SBA (DSC = 0.56 ± 0.05) and Hofmann (DSC = 0.60 ± 0.02) methods exhibited no improvement compared to non-weighted IA. Finding the optimum parameters for the atlas fusion approach, such as the weighting factors and the image similarity patch size, has a great impact on the performance of atlas-based segmentation approaches. The voxel-wise atlas fusion approach exhibited excellent performance in terms of cancelling out non-systematic registration errors, leading to accurate and reliable segmentation results. Denoising and normalization of MR images together with optimization of the involved parameters play a key role in improving bone extraction accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
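    The voxel-wise (local) weighting idea can be sketched as: each atlas votes on a voxel's label with a weight derived from the local patch similarity between the atlas and the target, here using MSD turned into a weight via an exponential. The `beta` parameter, the exponential weighting form, and the toy patches are illustrative assumptions, not the paper's actual configuration.

```python
import math

def local_msd_weight(patch_target, patch_atlas, beta=1.0):
    # Convert a mean-squared-distance patch similarity into a fusion
    # weight: smaller MSD -> larger weight (exponential form assumed).
    msd = sum((t - a) ** 2 for t, a in zip(patch_target, patch_atlas)) / len(patch_target)
    return math.exp(-beta * msd)

def fuse_voxel(target_patch, atlas_patches, atlas_labels):
    # Voxel-wise weighted vote over atlas labels (0 = background, 1 = bone).
    weights = [local_msd_weight(target_patch, p) for p in atlas_patches]
    score = sum(w * l for w, l in zip(weights, atlas_labels)) / sum(weights)
    return 1 if score >= 0.5 else 0

# Toy example: one atlas patch matches the target well, one does not.
vote = fuse_voxel([0.9, 0.8, 0.85],
                  [[0.9, 0.8, 0.85], [0.1, 0.1, 0.1]],
                  [1, 0])
```

    Global weighting uses one weight per atlas for the whole volume; the voxel-wise variant simply recomputes the weights at every voxel, which is what lets it cancel out non-systematic registration errors.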

  4. A segmentation method for lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise

    PubMed Central

    Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian

    2017-01-01

    The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. An adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodule positioning and to obtain the starting block for subsequent clustering. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, optimized by clustering only the lung nodules and using an adaptive threshold, is used to obtain the lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
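    The DBSCAN step — grouping superpixel blocks whose centroids lie within a distance threshold of each other and discarding isolated blocks as noise — can be illustrated with a minimal pure-Python DBSCAN on 2-D centroid coordinates. The points below are toy data; the paper's adaptive threshold fitting and superpixel features are not reproduced here.

```python
import math

def dbscan(points, eps, min_pts):
    # Minimal DBSCAN: label each point with a cluster id, or -1 for noise.
    # The neighborhood of a point includes the point itself.
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        labels[i] = cluster           # new core point -> new cluster
        seeds = list(nb)
        while seeds:                  # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise promoted to border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:  # j is core: keep expanding
                seeds.extend(nb_j)
        cluster += 1
    return labels

# Two tight groups of centroids plus one isolated outlier.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=2)
```

    The density-based formulation is what lets the method keep irregular, connected nodule regions together while rejecting stray superpixels as noise.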

  5. CERES: A new cerebellum lobule segmentation method.

    PubMed

    Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V

    2017-02-15

    The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum's role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard-resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with recent related state-of-the-art methods, showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.

  6. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm²). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation based on DWI (P = 0.060–0.671) resulted in better accuracy than mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation based on DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  7. 3D MR ventricle segmentation in pre-term infants with post-hemorrhagic ventricle dilation

    NASA Astrophysics Data System (ADS)

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; Chen, Yimin; de Ribaupierre, Sandrine; Chiu, Bernard; Fenster, Aaron

    2015-03-01

    Intraventricular hemorrhage (IVH), or bleeding within the brain, is a common condition among pre-term infants that occurs in very low birth weight preterm neonates. The prognosis is further worsened by the development of progressive ventricular dilatation, i.e., post-hemorrhagic ventricle dilation (PHVD), which occurs in 10-30% of IVH patients. In practice, accurately predicting PHVD and determining whether a specific patient with ventricular dilatation requires intervention depend on the ability to measure ventricular volume accurately. While monitoring of PHVD in infants is typically done by repeated US and not MRI, once the patient has been treated, the follow-up over the lifetime of the patient is done by MRI. While manual segmentation is still seen as a gold standard, it is extremely time consuming, and therefore not feasible in a clinical context, and it also has a large inter- and intra-observer variability. This paper proposes a segmentation algorithm to extract the cerebral ventricles from 3D T1-weighted MR images of pre-term infants with PHVD. The proposed segmentation algorithm makes use of a convex optimization technique combined with learned priors of image intensities and a label probabilistic map, which is built from a multi-atlas registration scheme. Leave-one-out cross validation using 7 PHVD patient T1-weighted MR images showed that the proposed method yielded a mean DSC of 89.7% +/- 4.2%, a MAD of 2.6 +/- 1.1 mm, a MAXD of 17.8 +/- 6.2 mm, and a VD of 11.6% +/- 5.9%, suggesting good agreement with manual segmentations.

  8. SU-E-T-250: New IMRT Sequencing Strategy: Towards Intra-Fraction Plan Adaptation for the MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontaxis, C; Bol, G; Lagendijk, J

    2014-06-01

    Purpose: To develop a new sequencer for IMRT planning that makes the inclusion of external factors during treatment possible and, by doing so, accounts for intra-fraction anatomy changes. Given a real-time imaging modality that provides the updated patient anatomy during delivery, this sequencer is able to take these changes into account during the calculation of subsequent segments. Methods: Pencil beams are generated for each beam angle of the treatment and a fluence optimization is performed. The pencil beams, together with the patient anatomy and the above optimal fluence, form the input of our algorithm. During each iteration the following steps are performed: A fluence optimization is done and each beam's fluence is then split into discrete intensity levels. Deliverable segments are calculated for each of these. Each segment's area multiplied by its intensity describes its efficiency. The most efficient segment among all beams is then chosen to deliver a part of the calculated fluence and the dose that will be delivered by this segment is calculated. This delivered dose is then subtracted from the remaining dose. This loop is repeated until 90% of the dose has been delivered, and a final segment weight optimization is performed to reach full convergence. Results: This algorithm was tested in several prostate cases, yielding results that meet all clinical constraints. Quality assurance was performed on Delta4 and film phantoms for one of these prostate cases and received clinical acceptance after passing both gamma analyses with the 3%/3mm criteria. Conclusion: A new sequencing algorithm was developed to facilitate the needs of intensity modulated treatment. The first results on static anatomy confirm that it can calculate clinical plans equivalent to those of commercially available planning systems. We are now working towards 100% dose convergence, which will allow us to handle anatomy deformations. This work is financially supported by Elekta AB, Stockholm, Sweden.
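    The sequencing loop described in the Methods (discretize the fluence into intensity levels, find deliverable segments, score each by area × intensity, deliver the most efficient one, subtract, repeat until ~90% is delivered) can be caricatured in one dimension. This is a toy sketch under strong simplifying assumptions — a single beam, a 1-D leaf row where a segment is a contiguous run of open positions, and fluence in arbitrary integer units — not the authors' sequencer:

```python
def greedy_sequencing(target, levels=(1, 2, 4), stop_fraction=0.9):
    # Greedy 1-D sequencer: repeatedly deliver the segment (contiguous run
    # of open leaf positions at one intensity level) with the highest
    # efficiency = intensity x area, until stop_fraction of the dose is in.
    remaining = list(target)
    total = sum(target)
    delivered = 0.0
    segments = []
    while delivered < stop_fraction * total:
        best = None
        for level in levels:
            # Longest contiguous run where the remaining fluence admits this level.
            run, best_run = [], []
            for i, r in enumerate(remaining):
                if r >= level:
                    run.append(i)
                    if len(run) > len(best_run):
                        best_run = list(run)
                else:
                    run = []
            if best_run:
                eff = level * len(best_run)   # efficiency = intensity x area
                if best is None or eff > best[0]:
                    best = (eff, level, best_run)
        if best is None:
            break
        _, level, idxs = best
        for i in idxs:
            remaining[i] -= level             # subtract the delivered dose
        delivered += level * len(idxs)
        segments.append((level, idxs[0], idxs[-1]))
    return segments, delivered, total

segs, delivered, total = greedy_sequencing([2, 4, 4, 2])
```

    The real algorithm works on 2-D apertures across all beam angles and ends with a segment weight optimization to close the remaining 10% gap; the greedy pick-subtract-repeat structure is the same.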

  9. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  10. Segmentation of thalamus from MR images via task-driven dictionary learning

    NASA Astrophysics Data System (ADS)

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D.; Prince, Jerry L.

    2016-03-01

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  11. Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.

    PubMed

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L

    2016-02-27

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  12. Automated pixel-wise brain tissue segmentation of diffusion-weighted images via machine learning.

    PubMed

    Ciritsis, Alexander; Boss, Andreas; Rossi, Cristina

    2018-04-26

    The diffusion-weighted (DW) MR signal sampled over a wide range of b-values potentially allows for tissue differentiation in terms of cellularity, microstructure, perfusion, and T2 relaxivity. This study aimed to implement a machine learning algorithm for automatic brain tissue segmentation from DW-MRI datasets, and to determine the optimal sub-set of features for accurate segmentation. DWI was performed at 3 T in eight healthy volunteers using 15 b-values and 20 diffusion-encoding directions. The pixel-wise signal attenuation, as well as the trace and fractional anisotropy (FA) of the diffusion tensor, were used as features to train a support vector machine classifier for gray matter, white matter, and cerebrospinal fluid classes. The datasets of two volunteers were used for validation. For each subject, tissue classification was also performed on 3D T1-weighted datasets with a probabilistic framework. Confusion matrices were generated for quantitative assessment of image classification accuracy in comparison with the reference method. DWI-based tissue segmentation resulted in an accuracy of 82.1% on the validation dataset and of 82.2% on the training dataset, excluding relevant model over-fitting. A mean Dice coefficient (DSC) of 0.79 ± 0.08 was found. About 50% of the classification performance was attributable to five features (i.e. the signal measured at b-values of 5/10/500/1200 s/mm² and the FA). This reduced set of features led to almost identical performances for the validation (82.2%) and the training (81.4%) datasets (DSC = 0.79 ± 0.08). Machine learning techniques applied to DWI data allow for accurate brain tissue segmentation based on both morphological and functional information. Copyright © 2018 John Wiley & Sons, Ltd.
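    The confusion-matrix assessment against the T1-weighted reference reduces to counting reference/predicted class pairs and reading the accuracy off the diagonal. A minimal sketch with hypothetical per-pixel labels — the class abbreviations and counts are invented for illustration:

```python
def confusion_matrix(truth, pred, classes=("GM", "WM", "CSF")):
    # Rows index the reference class, columns the predicted class.
    idx = {c: k for k, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(truth, pred):
        m[idx[t]][idx[p]] += 1
    return m

def accuracy(m):
    # Overall accuracy: fraction of pixels on the diagonal.
    return sum(m[i][i] for i in range(len(m))) / sum(map(sum, m))

truth = ["GM", "GM", "WM", "WM", "CSF", "CSF"]   # reference labels (toy)
pred  = ["GM", "WM", "WM", "WM", "CSF", "GM"]    # classifier output (toy)
acc = accuracy(confusion_matrix(truth, pred))
```

    The off-diagonal cells show which tissue pairs the classifier confuses, which is why the matrix is more informative than the single accuracy number.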

  13. Applying a new unequally weighted feature fusion method to improve CAD performance of classifying breast lesions

    NASA Astrophysics Data System (ADS)

    Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images. Amongst them, 151 were malignant and 151 were benign. The study consists of the following 3 image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of mass regions were initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier combined with a bat optimization algorithm was used to optimally fuse the selected image features based on a predefined assessment performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset, the AUC was 0.75 ± 0.04, which was significantly higher than using a single best feature (AUC = 0.69 ± 0.05) or the classifier with equally weighted features (AUC = 0.73 ± 0.05). This study demonstrated that, compared to the conventional equal-weighted approach, an unequal-weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
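    At its core, the unequal weighting amounts to replacing a uniform average of per-feature scores with a learned weighted average. A toy sketch — the feature values and weights below are invented for illustration; in the paper they come from the GLM classifier and the bat optimization step:

```python
def fuse(features, weights=None):
    # Weighted linear fusion of normalized feature scores;
    # falls back to equal weights when none are supplied.
    if weights is None:
        weights = [1.0] * len(features)
    s = sum(weights)
    return sum(w * f for w, f in zip(weights, features)) / s

features = [0.9, 0.4, 0.7]                 # hypothetical per-feature scores
equal = fuse(features)                     # equal-weighted baseline
unequal = fuse(features, [3.0, 0.5, 1.5])  # up-weight the discriminative feature
```

    Up-weighting the more discriminative features shifts the fused score toward them, which is exactly the effect the AUC comparison in the abstract measures.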

  14. Pareto-front shape in multiobservable quantum control

    NASA Astrophysics Data System (ADS)

    Sun, Qiuyang; Wu, Re-Bing; Rabitz, Herschel

    2017-03-01

    Many scenarios in the sciences and engineering require simultaneous optimization of multiple objective functions, which are usually conflicting or competing. In such problems the Pareto front, where none of the individual objectives can be further improved without degrading some others, shows the tradeoff relations between the competing objectives. This paper analyzes the Pareto-front shape for the problem of quantum multiobservable control, i.e., optimizing the expectation values of multiple observables in the same quantum system. Analytic and numerical results demonstrate that with two commuting observables the Pareto front is a convex polygon consisting of flat segments only, while with noncommuting observables the Pareto front includes convexly curved segments. We also assess the capability of a weighted-sum method to continuously capture the points along the Pareto front. Illustrative examples with realistic physical conditions are presented, including NMR control experiments on a 1H-13C two-spin system with two commuting or noncommuting observables.
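    The weighted-sum assessment mentioned above scalarizes the two expectation values as w·J1 + (1−w)·J2 and sweeps w over [0, 1]. A minimal sketch with made-up objective pairs, showing the known property that a dominated point is never captured while points on the convex part of the front are:

```python
def weighted_sum_front(candidates, n_weights=101):
    # Sweep w in [0, 1]; for each w keep the candidate maximizing
    # w*J1 + (1-w)*J2.  Only points on the convex part of the Pareto
    # front can be captured by this scalarization.
    captured = set()
    for k in range(n_weights):
        w = k / (n_weights - 1)
        captured.add(max(candidates, key=lambda c: w * c[0] + (1 - w) * c[1]))
    return captured

# Hypothetical (J1, J2) expectation-value pairs for several control fields:
# two extreme points, one convex-front point, one dominated point.
pts = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.6), (0.4, 0.4)]
front = weighted_sum_front(pts)
```

    For a front with concave segments (as the paper shows can arise with noncommuting observables), points in the concave region are likewise missed by any weight choice, which is the limitation of the weighted-sum method the abstract alludes to.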

  15. TARPARE: a method for selecting target audiences for public health interventions.

    PubMed

    Donovan, R J; Egger, G; Francas, M

    1999-06-01

    This paper presents a model to assist the health promotion practitioner in systematically comparing and selecting appropriate target groups when a number of segments are competing for attention and resources. TARPARE assesses previously identified segments on the following criteria: T: the Total number of persons in the segment; AR: the proportion of At Risk persons in the segment; P: the Persuasibility of the target audience; A: the Accessibility of the target audience; R: the Resources required to meet the needs of the target audience; and E: Equity, social justice considerations. The assessment can be applied qualitatively, or scores can be assigned to each segment. Two examples are presented. TARPARE is a useful and flexible model for understanding the various segments in a population of interest and for assessing the potential viability of interventions directed at each segment. The model is particularly useful when there is a need to prioritise segments in terms of available budgets. The model provides a disciplined approach to target selection and forces consideration of what weights should be applied to the different criteria, and how these might vary for different issues or for different objectives. TARPARE also assesses segments in terms of an overall likelihood of optimal impact for each segment. Targeting high-scoring segments is likely to lead to greater program success than targeting low-scoring segments.
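    When scores are assigned quantitatively, TARPARE reduces to a weighted sum over the six criteria. A toy sketch — the segment names, ratings, and criterion weights are invented; choosing those weights for a given issue is precisely the judgment the model forces the practitioner to make explicit:

```python
def tarpare_score(segment, weights):
    # Weighted sum of the six TARPARE criteria (each rated here on 1-10).
    return sum(weights[c] * segment[c] for c in weights)

# Hypothetical criterion weights: this campaign prioritizes size and risk.
weights = {"T": 2.0, "AR": 2.0, "P": 1.5, "A": 1.0, "R": 1.0, "E": 0.5}

# Hypothetical ratings for two candidate target segments.
segments = {
    "young_males":  {"T": 7, "AR": 9, "P": 4, "A": 5, "R": 4, "E": 6},
    "older_adults": {"T": 8, "AR": 6, "P": 7, "A": 7, "R": 6, "E": 5},
}

ranked = sorted(segments,
                key=lambda s: tarpare_score(segments[s], weights),
                reverse=True)
```

    Note how the ranking can flip under a different weight vector — a segment with many at-risk members may still lose to a smaller but more persuasible, more accessible one.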

  16. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van

    2017-05-04

    Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task and clinical practice would benefit from (semi-)automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice-scores of 65%, 74%, and 80% are found for active tumor, the tumor core and the whole tumor region. Mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm are found for active tumor, the tumor core and the whole tumor region. Lower Dice-scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice-scores and Hausdorff distances, segmentation results are competitive with the state of the art in the literature. Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.

  17. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches to image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more, the more strongly the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Weighted graph cuts without eigenvectors a multilevel approach.

    PubMed

    Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian

    2007-11-01

    A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
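    The key computational trick behind weighted kernel k-means is that the squared distance ||phi(x_i) − m_c||² to a cluster mean in feature space expands into three terms of the kernel matrix alone, so no explicit feature coordinates are ever needed. A minimal batch-update sketch on a toy linear kernel — with a linear kernel and unit weights this reduces to ordinary k-means; the paper's multilevel coarsening/refinement is not reproduced:

```python
def weighted_kernel_kmeans(K, weights, labels, n_iter=10):
    # Weighted kernel k-means: reassign each point to the cluster c
    # minimizing K_ii - (2/s_c) sum_j w_j K_ij + (1/s_c^2) sum_{j,l} w_j w_l K_jl,
    # where the sums run over members j, l of c and s_c = sum of their weights.
    n = len(K)
    for _ in range(n_iter):
        clusters = sorted(set(labels))
        new = []
        for i in range(n):
            best, best_d = labels[i], float("inf")
            for c in clusters:
                idx = [j for j in range(n) if labels[j] == c]
                s = sum(weights[j] for j in idx)
                if s == 0:
                    continue          # skip emptied clusters
                term2 = 2 * sum(weights[j] * K[i][j] for j in idx) / s
                term3 = sum(weights[j] * weights[l] * K[j][l]
                            for j in idx for l in idx) / (s * s)
                d = K[i][i] - term2 + term3
                if d < best_d:
                    best, best_d = c, d
            new.append(best)
        if new == labels:             # converged
            break
        labels = new
    return labels

# Toy 1-D data: two well-separated groups, linear kernel K_ij = x_i * x_j.
pts = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
K = [[a * b for b in pts] for a in pts]
w = [1.0] * len(pts)
out = weighted_kernel_kmeans(K, w, [0, 1, 0, 1, 0, 1])
```

    The paper's equivalence result means that choosing the point weights and kernel appropriately makes this same loop optimize graph objectives such as normalized cut, with no eigenvector computation.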

  19. Quantitative learning strategies based on word networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng

    2018-02-01

    Learning English requires considerable effort, but the way that vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, learning process optimization will have a significant impact on English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate strategies for English learning. In this paper, quantitative English learning strategies based on word networks and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for unlearned words and the dynamic updating of the network are studied and analyzed. The results suggest that the quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and the segmented strategies outperform the other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively, balancing efficiency and learning costs based on the word network.

  20. Image segmentation with a novel regularized composite shape prior based on surrogate study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    Purpose: Incorporating training data into image segmentation is an effective way to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to the multi-atlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves segmentation performance superior to typical benchmark schemes.

  1. Segmentation of dermoscopy images using wavelet networks.

    PubMed

    Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeed; Gheissari, Niloofar; Mokhtari, Mojgan; Kolahdouzan, Farzaneh

    2013-04-01

    This paper introduces a new approach for the segmentation of skin lesions in dermoscopic images based on a wavelet network (WN). The WN presented here belongs to the family of fixed-grid WNs, which are formed without training. After the wavelet lattice is formed, the shift and scale parameters of the wavelets are determined in two screening stages that select the effective wavelets; the orthogonal least squares algorithm is then used to calculate the network weights and optimize the network structure. The two screening stages increase the globality of the wavelet lattice and provide a better estimate of the function, especially at larger scales. The R, G, and B values of a dermoscopy image serve as the network inputs for forming the network structure. The image is then segmented and the exact boundary of the skin lesion is determined accordingly. The segmentation algorithm was applied to 30 dermoscopic images and evaluated with 11 different metrics, using the segmentation produced by a skilled pathologist as the ground truth. Experimental results show that our method is more effective than several modern techniques that have been used successfully in many medical imaging problems.
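    The network weights mentioned above come from orthogonal least squares; as a simpler stand-in, the linear output weights for a fixed set of wavelet responses can be fitted by ordinary least squares (`Phi` is an illustrative name; the paper's OLS additionally ranks and prunes wavelets):

```python
import numpy as np

def fit_output_weights(Phi, y):
    """Least-squares fit of the linear output weights of a fixed-grid
    wavelet network: Phi (n_samples, n_wavelets) holds the responses of
    the selected wavelets at the training samples, y holds the targets.
    (Plain least squares here; orthogonal least squares also orders
    wavelets by error reduction to optimize the network structure.)"""
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```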

  2. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    PubMed

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). 
The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
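    The conversion of summed scores to percent myocardium abnormal is commonly done by normalizing against the maximum possible score (4 per segment on a 0-4 scale); a sketch under that assumed convention:

```python
def percent_myocardium_abnormal(summed_score, n_segments, max_per_segment=4):
    """Convert a summed perfusion score to percent myocardium abnormal by
    normalizing against the maximum possible score. Each segment is
    assumed to be scored 0-4, the usual SPECT convention."""
    return 100.0 * summed_score / (n_segments * max_per_segment)
```

    On the 17-segment model, a summed stress score of 4 (the smallest score greater than 3) gives about 5.9%, consistent with the 5% cutoff reported above.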

  3. Evaluation of multimodal segmentation based on 3D T1-, T2- and FLAIR-weighted images - the difficulty of choosing.

    PubMed

    Lindig, Tobias; Kotikalapudi, Raviteja; Schweikardt, Daniel; Martin, Pascal; Bender, Friedemann; Klose, Uwe; Ernemann, Ulrike; Focke, Niels K; Bender, Benjamin

    2018-04-15

    Voxel-based morphometry is still mainly based on T1-weighted MRI scans. Misclassification of vessels and dura mater as gray matter has been previously reported. The goal of the present work was to evaluate the effect of the multimodal segmentation methods available in SPM12 and their influence on the identification of age-related atrophy and on lesion detection in epilepsy patients. 3D T1-, T2- and FLAIR-images of 77 healthy adults (mean age 35.8 years, 19-66 years, 45 females), 7 patients with malformation of cortical development (MCD) (mean age 28.1 years, 19-40 years, 3 females), and 5 patients with left hippocampal sclerosis (LHS) (mean age 49.0 years, 25-67 years, 3 females) from a 3T scanner were evaluated. Segmentations based on T1-only, T1+T2, T1+FLAIR, T2+FLAIR, and T1+T2+FLAIR were compared in the healthy subjects. Clinical VBM results based on the different segmentation approaches were compared for MCD and for LHS. T1-only segmentation overestimated total intracranial volume by about 80 ml compared to the other segmentation methods. This was due to misclassification of dura mater and vessels as GM and CSF. Significant differences were found for several anatomical regions: the occipital lobe, the basal ganglia/thalamus, the pre- and postcentral gyrus, the cerebellum, and the brainstem. None of the segmentation methods yielded completely satisfying results for the basal ganglia/thalamus and the brainstem. The best correlation with age was found for the multimodal T1+T2+FLAIR segmentation. The highest T-scores for identification of LHS were found for T1+T2 segmentation, while the highest T-scores for MCD depended on the lesion and its anatomical location. Multimodal segmentation is superior to T1-only segmentation and reduces the misclassification of dura mater and vessels as GM and CSF. Depending on the anatomical region and the pathology of interest (atrophy, lesion detection, etc.), different combinations of T1, T2 and FLAIR yield optimal results.
Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) than traditional TEGs composed of a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques together with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, the height of segmentation, the hot-side temperature, and the load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
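    The Taguchi analysis above rests on per-run signal-to-noise (S/N) ratios averaged by factor level; a minimal sketch for a larger-the-better response (the orthogonal array and responses below are toy values, not the study's data):

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10( mean(1 / y_i^2) )."""
    return -10.0 * math.log10(sum(1.0 / v**2 for v in values) / len(values))

def best_levels(design, responses):
    """For each factor of an orthogonal-array design, average the per-run
    S/N ratios by factor level and return the level with the highest mean.

    design    : list of runs, each a tuple of factor levels
    responses : list of response replicates for each run
    """
    n_factors = len(design[0])
    best = []
    for f in range(n_factors):
        by_level = {}
        for run, resp in zip(design, responses):
            by_level.setdefault(run[f], []).append(sn_larger_is_better(resp))
        best.append(max(by_level, key=lambda lv: sum(by_level[lv]) / len(by_level[lv])))
    return best
```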

  5. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when the two local regularities L-MSDA and L-MSDSM are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  6. Finite-element design and optimization of a three-dimensional tetrahedral porous titanium scaffold for the reconstruction of mandibular defects.

    PubMed

    Luo, Danmei; Rong, Qiguo; Chen, Quan

    2017-09-01

    Reconstruction of segmental defects in the mandible remains a challenge for maxillofacial surgery. The use of porous scaffolds is a potential method for repairing these defects, and additive manufacturing techniques now provide a solution for fabricating porous scaffolds with specific geometrical shapes and complex structures. The goal of this study was to design and optimize a three-dimensional tetrahedral titanium scaffold for the reconstruction of mandibular defects. With a fixed strut diameter of 0.45 mm and a mean cell size of 2.2 mm, a tetrahedral structural porous scaffold was designed for a simulated anatomical defect derived from computed tomography (CT) data of a human mandible. An optimization method based on the concept of uniform stress was applied to the initial scaffold to realize a minimal-weight design. Geometric and mechanical comparisons between the initial and optimized scaffolds show that the optimized scaffold exhibits a larger porosity, 81.90%, as well as a more homogeneous stress distribution. These results demonstrate that tetrahedral structural titanium scaffolds are feasible structures for repairing mandibular defects, and that the proposed optimization scheme can produce superior scaffolds for mandibular reconstruction with better stability, higher porosity, and less weight. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

    Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameter and tortuosity, can improve clinical diagnosis and the evaluation of retinopathy. In contrast to available methods, we propose a data-driven approach in which the system learns a set of optimal discriminative convolution kernels (linear learners). The set is built progressively using an AdaBoost sample-weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture changes in vessel appearance at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to assign each image pixel to one of the two classes of interest (vessel/background). We tested the approach on fundus images from the DRIVE dataset, where the segmentation yields an accuracy of 0.94.

  8. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, it can be impossible to obtain satisfying segmentation results for the whole scene with a single segmentation parameter. Nonetheless, the city can be subdivided into smaller local zones, each relatively homogeneous in its urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared with a global approach. Ouagadougou, located in sub-Saharan Africa, is used as the case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically in both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.
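    A sketch of the USPO selection logic described above, in the style of the Espindola objective used by tools such as GRASS GIS's i.segment.uspo: intra-object homogeneity is proxied by weighted variance and inter-object heterogeneity by spatial autocorrelation, both rescaled so lower raw values score higher (the min-max rescaling is one common choice; the candidate values below are illustrative):

```python
def uspo_select(candidates):
    """Pick the segmentation parameter with the best combined score.

    candidates maps parameter -> (weighted intra-object variance,
    global Moran's I of object means). Each measure is min-max rescaled
    so that lower raw values score higher, and the two normalized scores
    are summed.
    """
    wv = {p: v for p, (v, _) in candidates.items()}
    mi = {p: m for p, (_, m) in candidates.items()}

    def norm(d):
        lo, hi = min(d.values()), max(d.values())
        span = (hi - lo) or 1.0
        return {p: (hi - x) / span for p, x in d.items()}

    nwv, nmi = norm(wv), norm(mi)
    return max(candidates, key=lambda p: nwv[p] + nmi[p])
```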

  9. Estimating the weight of crown segments for old-growth Douglas-fir and western hemlock.

    Treesearch

    J.A. Kendall Snell; Timothy A. Max

    1985-01-01

    The purpose of this study was to develop and validate estimators to predict total crown weight and weight of any segment of crown for old-growth felled and bucked Douglas-fir and western hemlock trees. Equations were developed for predicting weight of continuous live crown, total live crown, dead crown, any segment of live crown, and individual branches for old-growth...

  10. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size and the characteristics of the study area, is crucial for classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
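    The LV-RoC graphs mentioned above plot local variance and its rate of change across scale levels; the rate-of-change series used by the ESP tool is typically the percentage change between consecutive levels, whose peaks suggest candidate (fine/moderate/coarse) scales:

```python
def rate_of_change(lv):
    """Rate of change of local variance across increasing scale levels,
    in the form used by the ESP tool:
    RoC_l = 100 * (LV_l - LV_{l-1}) / LV_{l-1}."""
    return [100.0 * (lv[i] - lv[i - 1]) / lv[i - 1] for i in range(1, len(lv))]
```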

  11. Dual optimization based prostate zonal segmentation in 3D MR images.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-05-01

    Efficient and accurate segmentation of the prostate and two of its clinically meaningful sub-regions: the central gland (CG) and peripheral zone (PZ), from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, a novel multi-region segmentation approach is proposed to simultaneously segment the prostate and its two major sub-regions from only a single 3D T2-weighted (T2w) MR image, which makes use of the prior spatial region consistency and incorporates a customized prostate appearance model into the segmentation task. The formulated challenging combinatorial optimization problem is solved by means of convex relaxation, for which a novel spatially continuous max-flow model is introduced as the dual optimization formulation to the studied convex relaxed optimization problem with region consistency constraints. The proposed continuous max-flow model derives an efficient duality-based algorithm that enjoys numerical advantages and can be easily implemented on GPUs. The proposed approach was validated using 18 3D prostate T2w MR images with a body-coil and 25 images with an endo-rectal coil. Experimental results demonstrate that the proposed method is capable of efficiently and accurately extracting both the prostate zones: CG and PZ, and the whole prostate gland from the input 3D prostate MR images, with a mean Dice similarity coefficient (DSC) of 89.3±3.2% for the whole gland (WG), 82.2±3.0% for the CG, and 69.1±6.9% for the PZ in 3D body-coil MR images; 89.2±3.3% for the WG, 83.0±2.4% for the CG, and 70.0±6.5% for the PZ in 3D endo-rectal coil MR images. In addition, the experiments of intra- and inter-observer variability introduced by user initialization indicate a good reproducibility of the proposed approach in terms of volume difference (VD) and coefficient-of-variation (CV) of DSC. Copyright © 2014 Elsevier B.V. All rights reserved.
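    The Dice similarity coefficient (DSC) used for evaluation above is computed from the overlap of two binary segmentations; for masks represented as sets of voxel indices:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks, given here
    as iterables of voxel indices: DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))
```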

  12. Fast approximation for joint optimization of segmentation, shape, and location priors, and its application in gallbladder segmentation.

    PubMed

    Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu

    2017-05-01

    This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.

  13. A Method for Optimizing Non-Axisymmetric Liners for Multimodal Sound Sources

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Jones, M. G.; Parrott, T. L.; Sobieski, J.

    2002-01-01

    Central processor unit times and memory requirements for a commonly used solver are compared to that of a state-of-the-art, parallel, sparse solver. The sparse solver is then used in conjunction with three constrained optimization methodologies to assess the relative merits of non-axisymmetric versus axisymmetric liner concepts for improving liner acoustic suppression. This assessment is performed with a multimodal noise source (with equal mode amplitudes and phases) in a finite-length rectangular duct without flow. The sparse solver is found to reduce memory requirements by a factor of five and central processing time by a factor of eleven when compared with the commonly used solver. Results show that the optimum impedance of the uniform liner is dominated by the least attenuated mode, whose attenuation is maximized by the Cremer optimum impedance. An optimized, four-segmented liner with impedance segments in a checkerboard arrangement is found to be inferior to an optimized spanwise segmented liner. This optimized spanwise segmented liner is shown to attenuate substantially more sound than the optimized uniform liner and tends to be more effective at the higher frequencies. The most important result of this study is the discovery that when optimized, a spanwise segmented liner with two segments gives attenuations equal to or substantially greater than an optimized axially segmented liner with the same number of segments.

  14. Ares I-X Test Flight Reference Trajectory Development

    NASA Technical Reports Server (NTRS)

    Starr, Brett R.; Gumbert, Clyde R.; Tartabini, Paul V.

    2011-01-01

    Ares I-X was the first test flight of NASA's Constellation Program's Ares I crew launch vehicle. Ares I is a two-stage-to-orbit launch vehicle that provides crew access to low Earth orbit for NASA's future manned exploration missions. Ares I consists of a first stage, a Shuttle solid rocket motor (SRM) modified to include an additional propellant segment, and a liquid-propellant upper stage powered by a J-2X engine derived from the Apollo-era J-2 and modified to increase its thrust capability. The modified propulsion systems were not available for the first test flight, so the test had to be conducted with an existing Shuttle four-segment reusable solid rocket motor (RSRM) and an inert upper stage. The test flight's primary objective was to demonstrate controllability of an Ares I vehicle during first stage boost and the ability to perform a successful separation. In order to demonstrate controllability, the Ares I-X ascent control algorithms had to maintain stable flight throughout a flight environment equivalent to Ares I. The goal of the test flight reference trajectory development was to design a boost trajectory using the existing RSRM that results in a flight environment equivalent to Ares I. A trajectory similarity metric was defined as the integrated difference between the Ares I and Ares I-X Mach versus dynamic pressure relationships. Optimization analyses were performed that minimized the metric by adjusting the inert upper stage weight and the ascent steering profile. The sensitivity of the optimal upper stage weight and steering profile to launch month was also investigated. A response surface approach was used to verify the optimization results. The analyses successfully defined monthly ascent trajectories that matched the Ares I reference trajectory dynamic pressure versus Mach number relationship to within 10% through Mach 3.5. The upper stage weight required to achieve the match was found to be feasible and varied less than 5% throughout the year.
The paper will discuss the flight test requirements, provide Ares I-X vehicle background, discuss the optimization analyses used to meet the requirements, present analysis results, and compare the reference trajectory to the reconstructed flight trajectory.
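    The trajectory similarity metric described above, the integrated difference between the two Mach versus dynamic pressure relationships, might be evaluated numerically as follows (a sketch assuming both profiles are sampled on a shared Mach grid; not NASA's implementation):

```python
def trajectory_mismatch(mach, q_ref, q_test):
    """Integral over Mach number of the absolute difference between two
    dynamic-pressure profiles, via the trapezoidal rule on a shared,
    increasing Mach grid. Smaller values mean more similar flight
    environments."""
    total = 0.0
    for i in range(1, len(mach)):
        d0 = abs(q_ref[i - 1] - q_test[i - 1])
        d1 = abs(q_ref[i] - q_test[i])
        total += 0.5 * (d0 + d1) * (mach[i] - mach[i - 1])
    return total
```

    An optimizer would then adjust the inert upper stage weight and steering profile to drive this mismatch toward zero.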

  15. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one based on the assumption that the locally weighted combination varies w.r.t. both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in literature and we empirically show that it significantly improves on the performances of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to the existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748

  16. Optimal trajectories for the aeroassisted flight experiment. Part 4: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, A.; Wang, T.; Lee, W. Y.; Wang, H.; Wu, G. D.

    1989-01-01

    The determination of optimal trajectories for the aeroassisted flight experiment (AFE) is discussed. Data, tables, and graphs relative to the following transfers are presented: (IA) indirect ascent to a 178 NM perigee via a 197 NM apogee; and (DA) direct ascent to a 178 NM apogee. For both transfers, two cases are investigated: (1) the bank angle is continuously variable; and (2) the trajectory is divided into segments along which the bank angle is constant. For case (2), the following subcases are studied: two segments, three segments, four segments, and five segments; because the time duration of each segment is optimized, the above subcases involve four, six, eight, and ten parameters, respectively. Presented here are systematic data on a total of ten optimal trajectories (OT), five for Transfer IA and five for Transfer DA. For comparison purposes and only for Transfer IA, a five-segment reference trajectory RT is also considered.

  17. Coronary artery analysis: Computer-assisted selection of best-quality segments in multiple-phase coronary CT angiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-

    Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees.
An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment quality. The performance of our automated method was evaluated by comparing the automatically identified best-quality (AI-BQ) segments to those selected by the observers. Results: For the 20 test cases, 254 groups of corresponding vessel segments were identified after multiple-phase registration and recursive matching. The AI-BQ segments agreed with the radiologist’s top 2 ranked segments in 78.3% of the 254 groups (Cohen’s kappa 0.60), and with the 4 nonradiologist observers in 76.8%, 84.3%, 83.9%, and 85.8% of the 254 groups. In addition, 89.4% of the AI-BQ segments agreed with at least two observers’ top 2 rankings, and 96.5% agreed with at least one observer’s top 2 rankings. In comparison, agreement between the four observers’ top ranked segment and the radiologist’s top 2 ranked segments was 79.9%, 80.7%, 82.3%, and 76.8%, respectively, with kappa values ranging from 0.56 to 0.68. Conclusions: The performance of our automated method for selecting the best-quality coronary segments from a multiple-phase cCTA acquisition was comparable to the selection made by human observers. This study demonstrates the potential usefulness of the automated method in clinical practice, enabling interpreting physicians to fully utilize the best available information in cCTA for diagnosis of coronary disease, without requiring manual search through the multiple phases and minimizing the variability in image phase selection for evaluation of coronary artery segments across the diversity of human readers with variations in expertise.
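    A minimal sketch of the weighted voting ensemble described above: each quality indicator votes for its preferred segment, and the segment with the largest total indicator weight wins (the weights and segment labels below are illustrative; the paper derives its weights from training cases):

```python
def weighted_vote(votes, weights):
    """Combine the votes of several quality-indicator classifiers.

    votes   : the segment (e.g. a phase label) preferred by each indicator
    weights : the corresponding indicator weights
    Returns the segment with the largest total weight.
    """
    tally = {}
    for seg, w in zip(votes, weights):
        tally[seg] = tally.get(seg, 0.0) + w
    return max(tally, key=tally.get)
```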

  18. Effect of Branching on Rod-coil Polyimides as Membrane Materials for Lithium Polymer Batteries

    NASA Technical Reports Server (NTRS)

    Meador, Mary Ann B.; Cubon, Valerie A.; Scheiman, Daniel A.; Bennett, William R.

    2003-01-01

    This paper describes a series of rod-coil block co-polymers that produce easy to fabricate, dimensionally stable films with good ionic conductivity down to room temperature for use as electrolytes for lithium polymer batteries. The polymers consist of short, rigid rod polyimide segments, alternating with flexible, polyalkylene oxide coil segments. The highly incompatible rods and coils should phase separate, especially in the presence of lithium ions. The coil phase would allow for conduction of lithium ions, while the rigid rod phase would provide a high degree of dimensional stability. An optimization study was carried out to study the effect of four variables (degree of branching, formulated molecular weight, polymerization solvent and lithium salt concentration) on ionic conductivity, glass transition temperature and dimensional stability in this system.

  19. Development of optimized segmentation map in dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

    Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but the tube current is still not taken into account. Secondly, the X-ray condition itself is not optimized. The condition can be set empirically, but an empirical setting does not guarantee that the optimal condition is actually used. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors, and the distribution of the attenuation coefficient is modeled by considering the tube current. In Method-2, the optimized condition is determined to minimize segmentation errors over tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 by performing a phantom experiment under a fixed condition, and of Method-2 by performing a phantom experiment under different combinations calculated from the constant total exposure. When Method-1 was combined with Method-2, the segmentation error was reduced from 37.8% to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.
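
    Method-1's central step, choosing a decision boundary in the two-energy attenuation plane so as to minimize segmentation error, can be sketched as follows. The tissue means, the sigma ~ 1/sqrt(mAs) noise model, and the linear boundary family are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-energy attenuation data for two tissues; the means and the
# sigma ~ 1/sqrt(mAs) noise model are invented for illustration.
def simulate(mu_low, mu_high, mAs, n=2000):
    sigma = 50.0 / np.sqrt(mAs)          # more tube current -> less noise
    return np.column_stack([rng.normal(mu_low, sigma, n),
                            rng.normal(mu_high, sigma, n)])

bone   = simulate(1500, 1000, mAs=200)
iodine = simulate(1400, 1100, mAs=200)

# Method-1 idea: choose the linear boundary in the (mu_low, mu_high) plane
# that minimises the segmentation (classification) error.
def error(slope, intercept):
    ok_bone   = bone[:, 1]   <  slope * bone[:, 0]   + intercept
    ok_iodine = iodine[:, 1] >= slope * iodine[:, 0] + intercept
    return 1.0 - (ok_bone.mean() + ok_iodine.mean()) / 2.0

best = min((error(s, b), s, b)
           for s in np.linspace(-2, 2, 41)
           for b in np.linspace(-1000, 3000, 41))
print(f"best boundary: slope {best[1]:.2f}, intercept {best[2]:.0f}, "
      f"error {best[0]:.3f}")
```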

  20. Finite grade pheromone ant colony optimization for image segmentation

    NASA Astrophysics Data System (ADS)

    Yuanjing, F.; Li, Y.; Liangjun, K.

    2008-06-01

    By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating the pheromone is achieved by changing the grades, and the deposited quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left ventricle images shows that this ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone updating strategy gives good runtime performance in the optimization process.
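
    The graded-pheromone idea can be sketched as follows. The grade values, the single-best reinforcement rule, and the roulette-wheel choice are illustrative assumptions rather than the paper's exact scheme; the key property shown is that the update moves grades by one step, independent of the objective value.

```python
import random

random.seed(1)

# Finite-grade pheromone sketch: each candidate edge carries a grade index
# instead of a real-valued pheromone level.
GRADES = [0.1, 0.3, 0.6, 1.0]          # pheromone value per grade (assumed)
heuristic = [0.2, 0.5, 0.9, 0.1]       # toy desirability of four choices

def ant_choose(grades):
    # roulette-wheel selection on pheromone(grade) * heuristic
    w = [GRADES[g] * h for g, h in zip(grades, heuristic)]
    r, acc = random.uniform(0, sum(w)), 0.0
    for i, wi in enumerate(w):
        acc += wi
        if r <= acc:
            return i
    return len(w) - 1

def update(grades, best_choice):
    # iteration-best edge climbs one grade, all others decay one grade;
    # the deposited amount never depends on the objective function
    return [min(g + 1, len(GRADES) - 1) if i == best_choice else max(g - 1, 0)
            for i, g in enumerate(grades)]

grades = [1, 1, 1, 1]
for _ in range(15):
    picks = [ant_choose(grades) for _ in range(5)]
    best = max(picks, key=lambda i: heuristic[i])   # iteration-best ant
    grades = update(grades, best)
print("final grades:", grades)
```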

  1. Study of process parameter on mist lubrication of Titanium (Grade 5) alloy

    NASA Astrophysics Data System (ADS)

    Maity, Kalipada; Pradhan, Swastik

    2017-02-01

    This paper deals with the machinability of Ti-6Al-4V alloy with mist cooling lubrication using carbide inserts. The influence of the process parameters on the cutting forces, evolution of tool wear, surface finish of the workpiece, material removal rate and chip reduction coefficient has been investigated. Weighted principal component analysis coupled with grey relational analysis optimization is applied to identify the optimum setting of the process parameters. The optimal process condition was a cutting speed of 160 m/min, a feed of 0.16 mm/rev and a depth of cut of 1.6 mm. Effects of cutting speed and depth of cut on the type of chip formation were observed. Most of the chips were of the long tubular and long helical types. Images of the segmented chips were analyzed to study the shape and size of the saw tooth profile of serrated chips. It was found that by increasing cutting speed from 95 m/min to 160 m/min, the free surface lamella of the chips increased and the saw tooth segments became more clearly visible.
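
    The grey relational step of such a multi-response optimization can be sketched as follows. The response matrix, the weights (standing in for the weighted-PCA output), and the distinguishing coefficient are all invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy grey relational analysis: rows = experimental runs, cols = responses.
# Columns 0-1 are larger-the-better responses, column 2 is smaller-the-better.
X = np.array([
    [55., 0.8, 220.],
    [70., 0.9, 260.],
    [65., 0.7, 180.],
    [80., 0.6, 300.],
])

def normalize(col, larger_better):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if larger_better else (hi - col) / (hi - lo)

N = np.column_stack([normalize(X[:, 0], True),
                     normalize(X[:, 1], True),
                     normalize(X[:, 2], False)])

delta = 1.0 - N                      # deviation from the ideal sequence (all ones)
zeta = 0.5                           # distinguishing coefficient (conventional)
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

weights = np.array([0.4, 0.3, 0.3])  # e.g. from weighted PCA; assumed here
grade = xi @ weights                 # grey relational grade per run
best_run = int(np.argmax(grade))
print("grey relational grades:", np.round(grade, 3), "best run:", best_run)
```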

  2. Multi-Patches IRIS Based Person Authentication System Using Particle Swarm Optimization and Fuzzy C-Means Clustering

    NASA Astrophysics Data System (ADS)

    Shekar, B. H.; Bhat, S. S.

    2017-05-01

    Locating the boundary parameters of the pupil and iris and segmenting the noise-free iris portion are the most challenging phases of an automated iris recognition system. In this paper, we have presented a person authentication framework which uses particle swarm optimization (PSO) to locate the iris region and the circular Hough transform (CHT) to determine the boundary parameters. To reduce the effect of the noise present in the segmented iris region, we have divided the candidate region into N patches and used Fuzzy c-means clustering (FCM) to classify the patches into best iris regions and not-so-best (noisy) iris regions based on the probability density function of each patch. A weighted mean Hamming distance is adopted to find the dissimilarity score between two candidate irises. We have used Log-Gabor, Riesz and Taylor's series expansion (TSE) filters, and combinations of these three, for iris feature extraction. To justify the feasibility of the proposed method, we experimented on three publicly available data sets: IITD, MMU v-2 and CASIA v-4 distance.
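
    A weighted mean Hamming distance of the kind described can be sketched as below; the per-patch weighting scheme (down-weighting patches flagged noisy) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Weighted mean Hamming distance between two binary iris codes, where each of
# the N patches carries a quality weight (e.g. from an FCM-style quality step).
def weighted_hamming(code_a, code_b, patch_weights, patch_size):
    bits = code_a != code_b                       # per-bit disagreement
    patches = bits.reshape(-1, patch_size)        # N patches per code
    per_patch = patches.mean(axis=1)              # Hamming distance per patch
    w = np.asarray(patch_weights, dtype=float)
    return float((per_patch * w).sum() / w.sum())

a = rng.integers(0, 2, 64)
b = a.copy()
b[:8] ^= 1                                        # corrupt the first patch only
weights = [0.1, 1, 1, 1, 1, 1, 1, 1]              # first patch flagged noisy
print(weighted_hamming(a, b, weights, patch_size=8))
```

    With the noisy patch down-weighted, the corrupted bits barely move the score (0.1/7.1 ≈ 0.014 versus 0.125 unweighted).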

  3. Automated and Semiautomated Segmentation of Rectal Tumor Volumes on Diffusion-Weighted MRI: Can It Replace Manual Volumetry?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heeswijk, Miriam M. van; Department of Surgery, Maastricht University Medical Centre, Maastricht; Lambregts, Doenja M.J., E-mail: d.lambregts@nki.nl

    Purpose: Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Methods and Materials: Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Results: Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively.
    Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. Conclusions: DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer.
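
    The Dice similarity index (DSI) used here to compare segmentations is simple to compute: DSI = 2|A ∩ B| / (|A| + |B|). The toy masks below are invented purely to show the calculation.

```python
import numpy as np

# Dice similarity index between two binary tumour masks.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True                # 36 voxels, "manual" reference mask
auto = np.zeros((10, 10), dtype=bool)
auto[3:8, 3:8] = True                  # 25 voxels, smaller "automated" mask
print(round(dice(manual, auto), 3))    # 2*25 / (36+25) = 50/61
```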

  4. Automated and Semiautomated Segmentation of Rectal Tumor Volumes on Diffusion-Weighted MRI: Can It Replace Manual Volumetry?

    PubMed

    van Heeswijk, Miriam M; Lambregts, Doenja M J; van Griethuysen, Joost J M; Oei, Stanley; Rao, Sheng-Xiang; de Graaff, Carla A M; Vliegen, Roy F A; Beets, Geerard L; Papanikolaou, Nikos; Beets-Tan, Regina G H

    2016-03-15

    Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. 
Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Treatment planning systems for external whole brain radiation therapy: With and without MLC (multi leaf collimator) optimization

    NASA Astrophysics Data System (ADS)

    Budiyono, T.; Budi, W. S.; Hidayanto, E.

    2016-03-01

    Radiation therapy for brain malignancy is done by giving a dose of radiation to the whole volume of the brain (WBRT), followed by a booster at the primary tumor with more advanced techniques. Two external radiation fields are given from the right and left sides. Because of the shape of the head, there will otherwise be an unavoidable hotspot with a radiation dose greater than 107%. This study aims to optimize radiation therapy planning using a field-in-field multi-leaf collimator technique. A study of 15 WBRT samples with CT slices was done by adding segments of radiation in each radiation field and delivering appropriate dose weighting using an Elekta PrecisePLAN 2.15 treatment planning system (TPS). Results showed that this optimization produced more homogeneous radiation across the CTV target volume, a lower dose in healthy tissue, and reduced hotspots in the CTV target volume. Comparison results of the field-in-field multi-segmented MLC technique with the standard conventional technique for WBRT are: higher average minimum dose (77.25% ± 0.47%) vs (60% ± 3.35%); lower average maximum dose (110.27% ± 0.26%) vs (114.53% ± 1.56%); lower hotspot volume (5.71% vs 27.43%); and lower dose on the eye lenses (right eye: 9.52% vs 18.20%; left eye: 8.60% vs 16.53%).

  6. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
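
    The packaging side of the process is, at heart, a constrained packing optimization. A minimal first-fit-decreasing sketch (one common heuristic that could fill this role, not the patent's actual algorithm) with invented item weights and a single capacity constraint:

```python
# First-fit-decreasing bin packing: sort items largest first, place each in
# the first container with room, open a new container otherwise.
def pack(items, capacity):
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])     # no existing container fits; open a new one
    return bins

# Invented segment weights (arbitrary units) and container capacity.
containers = pack([7, 5, 4, 4, 3, 2, 1], capacity=10)
print(len(containers), containers)
```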

  7. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, based on a machine learning method, to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks, and each track is then estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, all tracks' information is fused according to the weights of the different tracks. The experimental results, based on subsets of three public and one private iris image databases, demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image in an iris recognition system, in several ways. (2) The proposed quality evaluation algorithm is self-adaptive and can automatically optimize its parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of an iris recognition system. PMID:24693243
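
    A minimal PSO of the kind used here to learn the weights can be sketched as below. It minimizes a toy quadratic so the particle/velocity mechanics are visible; the hyperparameters are conventional defaults, not the paper's, and the objective stands in for the real weight-fitting criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal particle swarm optimisation: each particle tracks its personal best,
# and the swarm tracks a global best that pulls all velocities.
def pso(f, dim, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy objective: recover a known weight vector (invented for illustration).
target = np.array([0.5, 0.2, 0.3])
best, best_val = pso(lambda p: ((p - target) ** 2).sum(), dim=3)
print(best.round(3), best_val)
```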

  8. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, based on a machine learning method, to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks, and each track is then estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, all tracks' information is fused according to the weights of the different tracks. The experimental results, based on subsets of three public and one private iris image databases, demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image in an iris recognition system, in several ways. (2) The proposed quality evaluation algorithm is self-adaptive and can automatically optimize its parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of an iris recognition system.

  9. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract the target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae of Arimoto entropy dual-threshold selection were calculated recursively, effectively eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved by a chaotic sequence based on the tent map. The fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, noticeably accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method can segment the target more quickly and accurately, with superior segmentation effect. It proves to be a fast and effective method for image segmentation.
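
    The chaotic driver can be sketched as below. The canonical mu = 2 tent map collapses to zero in IEEE floating point, so this sketch uses the rescaled piecewise variant common in chaotic-optimization work; the 0.7 break point and the mapping into a search interval are illustrative assumptions, not the paper's exact settings.

```python
# Tent-map chaotic sequence of the kind used to drive the artificial bee
# colony local search in place of uniform random numbers.
def tent(x0, n):
    # rescaled tent variant: x/0.7 below the break, (1 - x)/0.3 above it;
    # unlike the mu = 2 map it does not collapse to 0 in double precision
    xs, x = [], x0
    for _ in range(n):
        x = x / 0.7 if x < 0.7 else (1.0 - x) / 0.3
        xs.append(x)
    return xs

seq = tent(0.37, 1000)

# map chaotic draws into a threshold search interval [lo, hi], as the
# improved local search phase would do for candidate thresholds
lo, hi = 10.0, 50.0
candidates = [lo + c * (hi - lo) for c in seq[:5]]
print([round(c, 3) for c in candidates])
```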

  10. Simultaneous minimization of leaf travel distance and tongue-and-groove effect for segmental intensity-modulated radiation therapy.

    PubMed

    Dai, Jianrong; Que, William

    2004-12-07

    This paper introduces a method to simultaneously minimize the leaf travel distance and the tongue-and-groove effect for IMRT leaf sequences to be delivered in segmental mode. The basic idea is to add a large enough number of openings through cutting or splitting existing openings for those leaf pairs with openings fewer than the number of segments so that all leaf pairs have the same number of openings. The cutting positions are optimally determined with a simulated annealing technique called adaptive simulated annealing. The optimization goal is set to minimize the weighted summation of the leaf travel distance and tongue-and-groove effect. Its performance was evaluated with 19 beams from three clinical cases; one brain, one head-and-neck and one prostate case. The results show that it can reduce the leaf travel distance and (or) tongue-and-groove effect; the reduction of the leaf travel distance reaches its maximum of about 50% when minimized alone; the reduction of the tongue-and-groove reaches its maximum of about 70% when minimized alone. The maximum reduction in the leaf travel distance translates to a 1 to 2 min reduction in treatment delivery time per fraction, depending on leaf speed. If the method is implemented clinically, it could result in significant savings in treatment delivery time, and also result in significant reduction in the wear-and-tear of MLC mechanics.
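
    The weighted-summation objective and the annealing loop can be sketched as below. For brevity the decision variable is the delivery order of a handful of invented openings for one leaf pair (a toy stand-in for the paper's optimized cut positions), the tongue-and-groove term is a simple non-overlap penalty, and the cooling schedule is plain rather than adaptive.

```python
import math
import random

random.seed(4)

# Invented (left, right) leaf openings, in cm, for one leaf pair.
segments = [(2, 8), (9, 14), (4, 6), (11, 15), (1, 5)]
W_TRAVEL, W_TG = 1.0, 2.0          # weights of the two objective terms

def cost(order):
    travel = tg = 0.0
    for a, b in zip(order, order[1:]):
        (la, ra), (lb, rb) = segments[a], segments[b]
        travel += abs(la - lb) + abs(ra - rb)      # leaf travel between segments
        tg += max(0.0, max(la, lb) - min(ra, rb))  # toy tongue-and-groove proxy
    return W_TRAVEL * travel + W_TG * tg

def anneal(n, iters=3000, t0=10.0):
    cur = list(range(n))
    best, cbest = cur[:], cost(cur)
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9          # linear cooling schedule
        cand = cur[:]
        i, j = random.sample(range(n), 2)
        cand[i], cand[j] = cand[j], cand[i]        # swap two segments
        d = cost(cand) - cost(cur)
        if d < 0 or random.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < cbest:
                best, cbest = cur[:], cost(cur)
    return best, cbest

order, c = anneal(len(segments))
print("order:", order, "weighted cost:", c)
```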

  11. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Keiser, Jr., D.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
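
    Sauvola's rule computes a local threshold T(x, y) = m(x, y) * (1 + k*(s(x, y)/R - 1)) from the local mean m and standard deviation s in a sliding window. Below is a numpy-only sketch on an invented toy micrograph, in Python rather than the study's MATLAB; the window size, k, and R are conventional defaults, not the study's settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Sauvola adaptive threshold: T = m * (1 + k * (s / R - 1)).
def sauvola(img, w=15, k=0.2, R=128.0):
    img = img.astype(float)
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    win = sliding_window_view(padded, (w, w))   # H x W x w x w windows
    m = win.mean(axis=(2, 3))                   # local mean
    s = win.std(axis=(2, 3))                    # local standard deviation
    return img > m * (1 + k * (s / R - 1))      # True = bright matrix

# Toy micrograph: bright matrix, a shading ramp, and two dark circular "voids".
yy, xx = np.mgrid[:64, :64]
img = 180.0 + 0.5 * xx                          # uneven illumination
img[(yy - 20) ** 2 + (xx - 20) ** 2 < 36] = 40.0
img[(yy - 45) ** 2 + (xx - 50) ** 2 < 25] = 40.0

voids = ~sauvola(img)                           # voids fall below the threshold
print("void pixels:", int(voids.sum()))
```

    Because the threshold tracks the local mean, the illumination ramp does not produce spurious voids, which is the point of using a local rather than a global threshold.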

  12. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  13. Inverse-optimized 3D conformal planning: Minimizing complexity while achieving equivalence with beamlet IMRT in multiple clinical sites

    PubMed Central

    Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.

    2012-01-01

    Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1–2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to nonconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. 
    The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT, for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans, for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than those for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than those for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans, for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717

  14. Design and Optimization of the SPOT Primary Mirror Segment

    NASA Technical Reports Server (NTRS)

    Budinoff, Jason G.; Michaels, Gregory J.

    2005-01-01

    The 3 m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86111 point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require segment-to-segment radius of curvature (ROC) variation of approx. 1 micron. Low-cost, replicated segment ROC variations are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment & matching. A mechanical architecture has been designed that allows segment ROC to be adjusted up to 400 microns while introducing a minimum figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments. The back profile of the mirror was developed with shape optimization in MSC.Nastran using optical performance response equations written with SigFit. A candidate back profile was generated which minimized ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit

  15. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  16. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods, in both efficiency and effectiveness.
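
    The fix-one-optimize-the-other baseline that the paper improves on can be sketched in one dimension with intensity-only appearance models and invented data (real systems use color histograms or GMMs over video pixels):

```python
import numpy as np

rng = np.random.default_rng(5)

# Iterative baseline: fix the appearance model (here just fg/bg mean
# intensities) while updating the segmentation, then fix the segmentation
# while updating the model, until neither changes.
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),    # background
                         rng.normal(0.8, 0.05, 200)])   # primary object

fg_mean, bg_mean = 0.9, 0.1                 # crude initialisation
for _ in range(20):
    # segmentation step: assign each pixel to the nearer appearance mean
    seg = np.abs(pixels - fg_mean) < np.abs(pixels - bg_mean)
    # appearance step: re-estimate the means from the current segmentation
    new_fg, new_bg = pixels[seg].mean(), pixels[~seg].mean()
    if np.isclose(new_fg, fg_mean) and np.isclose(new_bg, bg_mean):
        break
    fg_mean, bg_mean = new_fg, new_bg

print(f"fg mean {fg_mean:.2f}, bg mean {bg_mean:.2f}, "
      f"fg pixels {int(seg.sum())}")
```

    The alternation converges here because the toy data is well separated; the paper's criticism is precisely that on real video this loop depends on its initialization and can stall in a local optimum, which motivates solving both subproblems jointly in one graph cut.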

  17. Open-door laminoplasty for cervical myelopathy resulting from adjacent-segment disease in patients with previous anterior cervical decompression and fusion.

    PubMed

    Matsumoto, Morio; Nojiri, Kenya; Chiba, Kazuhiro; Toyama, Yoshiaki; Fukui, Yasuyuki; Kamata, Michihiro

    2006-05-20

    This is a retrospective study of patients with cervical myelopathy resulting from adjacent-segment disease who were treated by open-door expansive laminoplasty. The purpose of this study was to evaluate the effectiveness of laminoplasty for cervical myelopathy resulting from adjacent-segment disease. Adjacent-segment disease is one of the problems associated with anterior cervical decompression and fusion. However, the optimal surgical management strategy is still controversial. Thirty-one patients who underwent open-door expansive laminoplasty for cervical myelopathy resulting from adjacent-segment disease and 31 age- and sex-matched patients with myelopathy who underwent laminoplasty as the initial surgery were enrolled in the study. The pre- and postoperative Japanese Orthopedic Association scores (JOA scores) and the recovery rate were compared between the two groups. The average JOA scores in the patients with adjacent-segment disease and the controls were 9.2 +/- 2.6 and 9.4 +/- 2.3 before the expansive laminoplasty and 11.9 +/- 2.8 and 13.3 +/- 1.7 at the follow-up examination, respectively; the average recovery rates in the two groups were 37.1 +/- 22.4% and 50.0 +/- 21.3%, respectively (P = 0.04). The mean number of segments covered by the high-intensity lesions on the T2-weighted magnetic resonance images was 1.87 and 0.9, respectively (P = 0.001). Moderate neurologic recovery was obtained after open-door laminoplasty in patients with cervical myelopathy resulting from adjacent-segment disease, although the results were not as satisfactory as those in the control group. This may be attributed to the irreversible damage of the spinal cord caused by persistent compression at the adjacent segments.

  18. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    NASA Astrophysics Data System (ADS)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise-dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making. However, low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transitions between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies the individual layer thickness, which shows statistically insignificant differences when compared to a previous study [for overall retina: p = 0.17, for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation within a user-defined region of interest. The efficiency and reliability of this algorithm, even in noisy image conditions, make it clinically applicable.
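The minimal-weight-path idea can be sketched with a column-monotone dynamic program (a simplification of the general graph search, which the gradient-weighted formulation also admits): node weights are low where the vertical gradient is high, so the cheapest left-to-right path traces the layer boundary. The weight `2 - 2g + eps` and the function name below are illustrative, not the paper's exact formulation.

```python
def trace_boundary(grad):
    """Find the minimal-weight left-to-right path over a vertical-gradient
    image `grad` (values in [0, 1]); the path may move at most one row per
    column. Returns the boundary row index for each column."""
    rows, cols = len(grad), len(grad[0])
    # Low weight where the gradient is high, plus a small epsilon
    w = [[2.0 - 2.0 * grad[r][c] + 1e-5 for c in range(cols)] for r in range(rows)]
    cost = [[w[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            prev = min(range(max(0, r - 1), min(rows, r + 2)),
                       key=lambda rp: cost[rp][c - 1])
            cost[r][c] = w[r][c] + cost[prev][c - 1]
            back[r][c] = prev
    # Backtrack from the cheapest node in the last column
    r = min(range(rows), key=lambda rr: cost[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

On a synthetic gradient image with a single high-gradient row, the recovered path follows that row across all columns.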

  19. A Mission-Adaptive Variable Camber Flap Control System to Optimize High Lift and Cruise Lift-to-Drag Ratios of Future N+3 Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Urnes, James, Sr.; Nguyen, Nhan; Ippolito, Corey; Totah, Joseph; Trinh, Khanh; Ting, Eric

    2013-01-01

    Boeing and NASA are conducting a joint study program to design a wing flap system that will provide mission-adaptive lift and drag performance for future transport aircraft having light-weight, flexible wings. This Variable Camber Continuous Trailing Edge Flap (VCCTEF) system offers a lighter-weight lift control system having two performance objectives: (1) an efficient high lift capability for take-off and landing, and (2) reduction in cruise drag through control of the twist shape of the flexible wing. During cruise, this control system will command varying flap settings along the span of the wing in order to establish an optimum wing twist for the current gross weight and cruise flight condition, and will continue to change the wing twist as the aircraft changes gross weight and cruise conditions for each mission segment. The design weight of the flap control system is being minimized through the use of light-weight shape memory alloy (SMA) actuation augmented with electric actuators. The VCCTEF program is developing improved lift and drag performance for flexible-wing transports, with the further benefits of lighter-weight actuation and reduced drag from the variable camber shape of the flap.

  20. Estimating the concentration of gold nanoparticles incorporated on natural rubber membranes using multi-level starlet optimal segmentation

    NASA Astrophysics Data System (ADS)

    de Siqueira, A. F.; Cabrera, F. C.; Pagamisse, A.; Job, A. E.

    2014-12-01

    This study consolidates multi-level starlet segmentation (MLSS) and multi-level starlet optimal segmentation (MLSOS) techniques for photomicrograph segmentation, based on starlet wavelet detail levels to separate areas of interest in an input image. Several segmentation levels can be obtained using MLSS; after that, the Matthews correlation coefficient is used to choose an optimal segmentation level, giving rise to MLSOS. In this paper, MLSOS is employed to estimate the concentration of gold nanoparticles with diameter around 47 nm, reduced on natural rubber membranes. These samples were used for the construction of SERS/SERRS substrates and in the study of the influence of natural rubber membranes with incorporated gold nanoparticles on the physiology of Leishmania braziliensis. Precision, recall, and accuracy are used to evaluate the segmentation performance, and MLSOS presents an accuracy greater than 88% for this application.
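The MLSOS selection step is easy to make concrete. The sketch below (an illustration, not the authors' code) scores each candidate detail-level mask against a ground-truth mask with the Matthews correlation coefficient and keeps the best level; `best_level` and the flattened 0/1 masks are illustrative.

```python
import math

def mcc_from_masks(pred, truth):
    """Matthews correlation coefficient between a binary segmentation
    and the ground-truth mask (both flattened 0/1 sequences)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

def best_level(masks, truth):
    """MLSOS-style selection: the detail level whose mask maximizes the MCC."""
    return max(range(len(masks)), key=lambda i: mcc_from_masks(masks[i], truth))
```

A perfect mask scores 1.0, and the level whose mask matches the ground truth exactly is selected.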

  1. Automatic multi-organ segmentation using learning-based segmentation and level set optimization.

    PubMed

    Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin

    2011-01-01

    We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. Thereby we combine the advantages of learning-based approaches on point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface errors. Thereby the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.

  2. Optimized multisectioned acoustic liners

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1979-01-01

    New calculations show that segmenting is most efficient at high frequencies with relatively long duct lengths, where the attenuation is low for both uniform and segmented liners. Statistical considerations indicate little advantage in using optimized liners with more than two segments, while the bandwidth of an optimized two-segment liner is shown to be nearly equal to that of a uniform liner. Multielement liner calculations show a large degradation in performance due to changes in assumed input modal structure. Computer programs are used to generate theoretical attenuations for a number of liner configurations in a rectangular duct with no mean flow. Overall, the use of optimized multisectioned liners fails to offer sufficient advantage over a uniform liner to warrant their use except in low-frequency, single-mode applications.

  3. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, their empirical parameters must be tuned to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
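The structured perceptron update at the core of the weight optimization is simple to state: when inference returns the wrong labeling, the weights move toward the gold features and away from the predicted ones. The toy feature map below (label counts and label-agreement counts) merely stands in for the paper's intensity/label binary features; all names are illustrative.

```python
def feats(labels):
    """Toy feature map for a labeling: [number of organ voxels,
    number of adjacent same-label pairs]."""
    return [sum(labels),
            sum(a == b for a, b in zip(labels, labels[1:]))]

def perceptron_step(w, y_true, y_pred):
    """Structured perceptron update: if the inferred labeling y_pred differs
    from the gold labeling y_true, add the gold feature vector and subtract
    the predicted one."""
    if y_pred == y_true:
        return w
    return [wi + ft - fp for wi, ft, fp in zip(w, feats(y_true), feats(y_pred))]

def score(w, labels):
    """Linear model score of a candidate labeling."""
    return sum(wi * fi for wi, fi in zip(w, feats(labels)))
```

After one update on a mistaken prediction, the gold labeling scores higher than the previously predicted one, which is exactly the property the perceptron's mistake-driven training exploits.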

  4. Feature space analysis of MRI

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.; Peck, Donald J.

    1997-04-01

    This paper presents development and performance evaluation of an MRI feature space method. The method is useful for: identification of tissue types; segmentation of tissues; and quantitative measurements on tissues, to obtain information that can be used in decision making (diagnosis, treatment planning, and evaluation of treatment). The steps of the work accomplished are as follows: (1) Four T2-weighted and two T1-weighted images (before and after injection of Gadolinium) were acquired for ten tumor patients. (2) Images were analyzed by two image analysts according to the following algorithm. The intracranial brain tissues were segmented from the scalp and background. The additive noise was suppressed using a multi-dimensional non-linear edge-preserving filter which preserves partial volume information on average. Image nonuniformities were corrected using a modified lowpass filtering approach. The resulting images were used to generate and visualize an optimal feature space. Cluster centers were identified on the feature space. Then images were segmented into normal tissues and different zones of the tumor. (3) Biopsy samples were extracted from each patient and were subsequently analyzed by the pathology laboratory. (4) Image analysis results were compared to each other and to the biopsy results. Pre- and post-surgery feature spaces were also compared. The proposed algorithm made it possible to visualize the MRI feature space and to segment the image. In all cases, the operators were able to find clusters for normal and abnormal tissues. Also, clusters for different zones of the tumor were found. Based on the clusters marked for each zone, the method successfully segmented the image into normal tissues (white matter, gray matter, and CSF) and different zones of the lesion (tumor, cyst, edema, radiation necrosis, necrotic core, and infiltrated tumor). The results agreed with those obtained from the biopsy samples. 
Comparison of pre- to post-surgery and radiation feature spaces confirmed that the tumor was not present in the second study but radiation necrosis was generated as a result of radiation.

  5. [Plaque segmentation of intracoronary optical coherence tomography images based on K-means and improved random walk algorithm].

    PubMed

    Wang, Guanglei; Wang, Pengyu; Han, Yechen; Liu, Xiuling; Li, Yan; Lu, Qian

    2017-06-01

    In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology at home and abroad. The segmentation of plaque regions in coronary OCT images has great significance for vulnerable plaque recognition and research. In this paper, a new algorithm based on K-means clustering and improved random walk is proposed, and semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pool was achieved. The weight function of the random walk is improved: the distance between the edges of pixels in the image and the seed points is added to the definition of the weight function, which increases the weak edge weights and prevents over-segmentation. Based on the above methods, the OCT images of 9 coronary atherosclerotic patients were selected for plaque segmentation. Comparison of the doctors' manual segmentation results with those of this method proved that the method has good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.
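The abstract does not give the exact form of the improved weight function, but its shape can be sketched: the classic random-walk intensity term exp(-beta*(gi - gj)^2) is combined with a term that grows with the distance from the seed points, so weak edges far from the seeds receive larger weights. The multiplicative form and the parameters `alpha` and `beta` below are assumptions, not the paper's formula.

```python
import math

def edge_weight(gi, gj, dist_to_seed, beta=90.0, alpha=0.05):
    """Hypothetical composite random-walk weight for the edge between two
    pixels with intensities gi, gj (in [0, 1]):
      - intensity term: large for similar intensities, small across edges;
      - distance term: boosts weights (especially weak edges) away from
        the seed points, discouraging over-segmentation there."""
    intensity_term = math.exp(-beta * (gi - gj) ** 2)
    distance_term = 1.0 + alpha * dist_to_seed
    return intensity_term * distance_term
```

A strong intensity edge still gets a much smaller weight than a weak one, while increasing the seed distance raises the weight of the same weak edge.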

  6. Automated segmentation of neuroanatomical structures in multispectral MR microscopy of the mouse brain.

    PubMed

    Ali, Anjum A; Dale, Anders M; Badea, Alexandra; Johnson, G Allan

    2005-08-15

    We present the automated segmentation of magnetic resonance microscopy (MRM) images of the C57BL/6J mouse brain into 21 neuroanatomical structures, including the ventricular system, corpus callosum, hippocampus, caudate putamen, inferior colliculus, internal capsule, globus pallidus, and substantia nigra. The segmentation algorithm operates on multispectral, three-dimensional (3D) MR data acquired at 90-microm isotropic resolution. Probabilistic information used in the segmentation is extracted from training datasets of T2-weighted, proton density-weighted, and diffusion-weighted acquisitions. Spatial information is employed in the form of prior probabilities of occurrence of a structure at a location (location priors) and the pairwise probabilities between structures (contextual priors). Validation using standard morphometry indices shows good consistency between automatically segmented and manually traced data. Results achieved in the mouse brain are comparable with those achieved in human brain studies using similar techniques. The segmentation algorithm shows excellent potential for routine morphological phenotyping of mouse models.
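The use of location priors can be sketched as a MAP decision per voxel: combine the intensity likelihood learned from the training acquisitions with the prior probability of each structure occurring at that location (the pairwise contextual priors are omitted here for brevity). The Gaussian likelihoods and structure names below are illustrative, not the paper's fitted models.

```python
import math

def gaussian(mu, sd):
    """Intensity likelihood for one structure, e.g. fit from training data."""
    return lambda x: (math.exp(-((x - mu) ** 2) / (2 * sd * sd))
                      / (sd * math.sqrt(2 * math.pi)))

def map_label(intensity, likelihoods, location_prior):
    """MAP voxel labeling: maximize P(intensity | label) * P(label at voxel).
    `likelihoods` maps label -> density function; `location_prior` maps
    label -> prior probability at this voxel."""
    return max(likelihoods,
               key=lambda lab: likelihoods[lab](intensity)
                               * location_prior.get(lab, 0.0))
```

When the intensity is ambiguous (equally likely under two structures), the location prior decides; otherwise the likelihood dominates.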

  7. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    PubMed

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.

  8. Estimates of Optimal Operating Conditions for Hydrogen-Oxygen Cesium-Seeded Magnetohydrodynamic Power Generator

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Nichols, L. D.

    1977-01-01

    The values of seed percentage, oxygen-to-fuel ratio, combustion pressure, Mach number, and magnetic field strength that maximize either the electrical conductivity or the power density at the entrance of an MHD power generator were obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator, along with an empirical form found from correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids, is investigated. The conductivity and power densities optimize at a seed fraction of 3.5 mole percent and an oxygen-to-hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.

  9. Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation

    NASA Astrophysics Data System (ADS)

    Krastev, Vladimir

    2011-12-01

    We consider the model proposed by Faggian and Grosset for determining the advertising efforts and goodwill in the long run of a company under age segmentation of consumers. Reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.

  10. TH-CD-206-02: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in MR Images Using Patch-Based Anatomical Signature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Jani, A; Rossi, P

    Purpose: MRI has shown promise in identifying prostate tumors with high sensitivity and specificity for the detection of prostate cancer. Accurate segmentation of the prostate plays a key role in various tasks: to accurately localize prostate boundaries for biopsy needle placement and radiotherapy, to initialize multi-modal registration algorithms, or to obtain the region of interest for computer-aided detection of prostate cancer. However, manual segmentation during biopsy or radiation therapy can be time consuming and subject to inter- and intra-observer variation. This study's purpose is to develop an automated method to address this technical challenge. Methods: We present an automated multi-atlas segmentation for MR prostate segmentation using patch-based label fusion. After an initial preprocessing for all images, all the atlases are non-rigidly registered to a target image. The resulting transformation is then used to propagate the anatomical structure labels of the atlas into the space of the target image. The top L similar atlases are further chosen by measuring intensity and structure differences in the region of interest around the prostate. Finally, using voxel weighting based on a patch-based anatomical signature, the label that the majority of all warped labels predict for each voxel is used for the final segmentation of the target image. Results: This segmentation technique was validated with a clinical study of 13 patients. The accuracy of our approach was assessed using manual segmentation (gold standard). The mean volume Dice Overlap Coefficient between our segmentation and the manual segmentation was 89.5±2.9%, which indicates that the automatic segmentation method works well and could be used for 3D MRI-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning label fusion framework, demonstrated its clinical feasibility, and validated its accuracy. This segmentation technique could be a useful tool in image-guided interventions for prostate-cancer diagnosis and treatment.
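The voting step of patch-based label fusion can be sketched in a few lines: each warped atlas patch votes for its label with a weight that decays with its squared distance to the target patch, and the heaviest label wins. The Gaussian kernel and bandwidth `h` below are common choices, not necessarily the paper's; patches are flattened to 1-D lists for simplicity.

```python
import math

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Patch-based label fusion (sketch): each atlas patch votes for its
    label with weight exp(-SSD / h), where SSD is the sum of squared
    differences to the target patch; return the label with the most weight."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        ssd = sum((a - b) ** 2 for a, b in zip(target_patch, patch))
        votes[label] = votes.get(label, 0.0) + math.exp(-ssd / h)
    return max(votes, key=votes.get)
```

A single well-matching atlas patch can outvote several poorly matching ones, which is the point of similarity weighting over plain majority voting.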

  11. Discriminative clustering on manifold for adaptive transductive classification.

    PubMed

    Zhang, Zhao; Jia, Lei; Zhang, Min; Li, Bing; Zhang, Li; Li, Fanzhang

    2017-10-01

    In this paper, we propose a novel adaptive transductive label propagation approach by joint discriminative clustering on manifolds for representing and classifying high-dimensional data. Our framework seamlessly combines the unsupervised manifold learning, discriminative clustering and adaptive classification into a unified model. Also, our method incorporates the adaptive graph weight construction with label propagation. Specifically, our method is capable of propagating label information using adaptive weights over low-dimensional manifold features, which is different from most existing studies that usually predict the labels and construct the weights in the original Euclidean space. For transductive classification by our formulation, we first perform the joint discriminative K-means clustering and manifold learning to capture the low-dimensional nonlinear manifolds. Then, we construct the adaptive weights over the learnt manifold features, where the adaptive weights are calculated through performing the joint minimization of the reconstruction errors over features and soft labels so that the graph weights can be joint-optimal for data representation and classification. Using the adaptive weights, we can easily estimate the unknown labels of samples. After that, our method returns the updated weights for further updating the manifold features. Extensive simulations on image classification and segmentation show that our proposed algorithm can deliver the state-of-the-art performance on several public datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
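The propagation step can be illustrated with the classic iteration F <- alpha*W*F + (1 - alpha)*Y over a row-normalized weight matrix. The paper's contribution is that W is built adaptively over learned manifold features; the sketch below takes W as given, and all names are illustrative.

```python
def propagate(W, Y, alpha=0.5, iters=50):
    """Transductive label propagation on a graph.
    W: n x n symmetric affinity matrix (lists of lists).
    Y: n x k initial label matrix; labeled rows are one-hot, unlabeled rows
       are all zeros. Returns the predicted class index for each node."""
    n, k = len(Y), len(Y[0])
    # Row-normalize the affinity matrix
    Wn = []
    for row in W:
        s = sum(row)
        Wn.append([v / s if s else 0.0 for v in row])
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [[alpha * sum(Wn[i][j] * F[j][c] for j in range(n))
              + (1 - alpha) * Y[i][c]
              for c in range(k)] for i in range(n)]
    return [max(range(k), key=lambda c: F[i][c]) for i in range(n)]
```

On a four-node chain with one labeled endpoint per class, the two unlabeled middle nodes take the label of their nearer endpoint.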

  12. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation treatment planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) and the background is estimated by the k-nearest neighborhood classifier under the learned optimal distance metrics. A cost function for segmentation is constructed through these probabilities and is optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray level distribution of the tumors and can even achieve satisfactory results if the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust to brain tumor segmentation.
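The probability-estimation step can be sketched as follows: under a learned metric, P(tumor) for a pixel is the fraction of its k nearest training pixels labeled tumor. Note that NCA learns a full linear transform; the diagonal feature weights below are a simplification, and all names are illustrative.

```python
def knn_prob(x, examples, labels, weights, k=3):
    """Foreground probability of feature vector `x`: the fraction of its
    k nearest training examples (under a diagonal weighted squared-distance
    metric, standing in for the learned NCA metric) that are tumor (label 1)."""
    def dist(a, b):
        return sum(w * (ai - bi) ** 2 for w, ai, bi in zip(weights, a, b))
    nearest = sorted(range(len(examples)), key=lambda i: dist(x, examples[i]))[:k]
    return sum(labels[i] for i in nearest) / k
```

These per-pixel probabilities would then feed the unary terms of the graph-cut cost function described in the abstract.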

  13. Advanced and standardized evaluation of neurovascular compression syndromes

    NASA Astrophysics Data System (ADS)

    Hastreiter, Peter; Vega Higuera, Fernando; Tomandl, Bernd; Fahlbusch, Rudolf; Naraghi, Ramin

    2004-05-01

    Caused by contact between vascular structures and the root entry or exit zone of cranial nerves, neurovascular compression syndromes are associated with different neurological diseases (trigeminal neuralgia, hemifacial spasm, vertigo, glossopharyngeal neuralgia) and show a relation with essential arterial hypertension. As presented previously, the semi-automatic segmentation and 3D visualization of strongly T2-weighted MR volumes has proven to be an effective strategy for a better spatial understanding prior to operative microvascular decompression. After explicit segmentation of coarse structures, the tiny target nerves and vessels contained in the area of cerebrospinal fluid are segmented implicitly using direct volume rendering. However, with this strategy the delineation of vessels in the vicinity of the brainstem, and of those at the border of the segmented CSF subvolume, is critical. Therefore, we suggest registration with MR angiography and introduce consecutive fusion after semi-automatic labeling of the vascular information. Additionally, we present an approach for automatic 3D visualization and video generation based on predefined flight paths. Thereby, a standardized evaluation of the fused image data is supported and the visualization results are optimally prepared for intraoperative application. Overall, our new strategy contributes to a significantly improved 3D representation and evaluation of vascular compression syndromes. Its value for diagnosis and surgery is demonstrated with various clinical examples.

  14. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images

    PubMed Central

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

    Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells, for all cells in the field of view, in negative phase contrast images. A new method combining thresholding and an edge-based active contour method was proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity were utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selection of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315

  15. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity‐modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time‐consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose‐limiting ring structures, adjust beam geometry, select inverse planning objectives and associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually, with a repeated trial‐and‐error approach, during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented iteratively to reduce various hot or cold spots during the optimization process: it begins with the definition and automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into inverse planning, and repeats this process until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393

  16. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  17. Optimization-based interactive segmentation interface for multiregion problems

    PubMed Central

    Baxter, John S. H.; Rajchl, Martin; Peters, Terry M.; Chen, Elvis C. S.

    2016-01-01

    Abstract. Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality. PMID:27335892

  18. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
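    The dynamic-programming scheme outlined above can be sketched in a few lines. This is an illustrative sketch only: `segment_cost` uses a union-based measure function with a symmetric-difference segment difference as stand-ins for the paper's actual measure functions, and all names are hypothetical.

```python
def segment_cost(points, i, j):
    """Segment difference for points[i:j]; assumes the segment's item
    set is the union of its time points' item sets (hypothetical measure)."""
    seg_set = set().union(*points[i:j])
    return sum(len(seg_set ^ p) for p in points[i:j])

def optimal_segmentation(points, k):
    """best[j][m] = minimum total segment difference for splitting the
    first j time points into m segments; back[] recovers the boundaries."""
    n = len(points)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = best[i][m - 1] + segment_cost(points, i, j)
                if c < best[j][m]:
                    best[j][m], back[j][m] = c, i
    # Walk back pointers to recover the segment boundaries.
    cuts, j = [], n
    for m in range(k, 0, -1):
        i = back[j][m]
        cuts.append((i, j))
        j = i
    return best[n][k], cuts[::-1]
```

    For example, segmenting `[{1}, {1}, {2}, {2}]` into two segments yields the natural split with zero total segment difference.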

  19. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to time-scale de-couple the equations of motion into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point-boundary-value-problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time-scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for a HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three local optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. 
Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  20. Energy flow during Olympic weight lifting.

    PubMed

    Garhammer, J

    1982-01-01

    Data obtained from 16-mm film of world caliber Olympic weight lifters performing at major competitions were analyzed to study energy changes during body segment and barbell movements, energy transfer to the barbell, and energy transfer between segments during the lifting movements contested. Determination of barbell and body segment kinematics and use of rigid-link modeling and energy flow techniques permitted the calculation of segment energy content and energy transfer between segments. Energy generation within and transfer to and from segments were determined at 0.04-s intervals by comparing mechanical energy changes of a segment with energy transfer at the joints, calculated from the scalar product of net joint force with absolute joint velocity, and the product of net joint torque due to muscular activity with absolute segment angular velocity. The results provided a detailed understanding of the magnitude and temporal input of energy from dominant muscle groups during a lift. This information also provided a means of quantifying lifting technique. Comparisons of segment energy changes determined by the two methods were satisfactory but could likely be improved by employing more sophisticated data smoothing methods. The procedures used in this study could easily be applied to weight training and rehabilitative exercises to help determine their efficacy in producing desired results or to ergonomic situations where a more detailed understanding of the demands made on the body during lifting tasks would be useful.
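    The per-interval energy-flow quantity described above (scalar product of net joint force with joint velocity, plus net joint torque times segment angular velocity, integrated over a 0.04-s interval) can be sketched as follows; the function name and planar treatment are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def joint_energy_flow(force, joint_vel, torque, seg_omega, dt=0.04):
    """Energy transferred across one joint over one dt interval:
    translational term F . v plus rotational term tau * omega."""
    power = np.dot(force, joint_vel) + torque * seg_omega
    return power * dt
```

    For instance, a 10 N vertical net force at a joint moving upward at 1 m/s, plus a 2 N·m torque at 3 rad/s, transfers 16 W, or 0.64 J per 0.04-s interval.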

  1. Minimum weight design of rectangular and tapered helicopter rotor blades with frequency constraints

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Walsh, Joanne L.

    1988-01-01

    The minimum weight design of a helicopter rotor blade subject to constraints on coupled flap-lag natural frequencies has been studied. A constraint has also been imposed on the minimum value of the autorotational inertia of the blade in order to ensure that it has sufficient inertia to autorotate in the case of engine failure. The program CAMRAD is used for the blade modal analysis and CONMIN is used for the optimization. In addition, a linear approximation analysis involving Taylor series expansion has been used to reduce the analysis effort. The procedure contains a sensitivity analysis which consists of analytical derivatives of the objective function and the autorotational inertia constraint and central finite difference derivatives of the frequency constraints. Optimum designs have been obtained for both rectangular and tapered blades. Design variables include taper ratio, segment weights, and box beam dimensions. It is shown that even when starting with an acceptable baseline design, a significant amount of weight reduction is possible while satisfying all the constraints for both rectangular and tapered blades.

  3. Minimum weight design of helicopter rotor blades with frequency constraints

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Walsh, Joanne L.

    1989-01-01

    The minimum weight design of helicopter rotor blades subject to constraints on fundamental coupled flap-lag natural frequencies has been studied in this paper. A constraint has also been imposed on the minimum value of the blade autorotational inertia to ensure that the blade has sufficient inertia to autorotate in case of an engine failure. The program CAMRAD has been used for the blade modal analysis and the program CONMIN has been used for the optimization. In addition, a linear approximation analysis involving Taylor series expansion has been used to reduce the analysis effort. The procedure contains a sensitivity analysis which consists of analytical derivatives of the objective function and the autorotational inertia constraint and central finite difference derivatives of the frequency constraints. Optimum designs have been obtained for blades in vacuum with both rectangular and tapered box beam structures. Design variables include taper ratio, nonstructural segment weights and box beam dimensions. The paper shows that even when starting with an acceptable baseline design, a significant amount of weight reduction is possible while satisfying all the constraints for blades with rectangular and tapered box beams.
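    The central finite-difference derivatives used for the frequency constraints can be sketched generically; `central_diff_gradient` is a hypothetical helper for illustration, not part of CAMRAD or CONMIN.

```python
def central_diff_gradient(f, x, h=1e-4):
    """Central finite-difference derivatives of a constraint f with
    respect to each design variable (e.g., taper ratio, segment
    weights, box-beam dimensions in the blade problem)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g
```

    Central differences cost two constraint evaluations per design variable but are second-order accurate, which is why they are reserved here for the frequency constraints while the objective and inertia constraint use analytical derivatives.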

  4. Segmented media and medium damping in microwave assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Bai, Xiaoyu; Zhu, Jian-Gang

    2018-05-01

    In this paper, we present a methodology of segmented media stack design for microwave assisted magnetic recording. Through micro-magnetic modeling, it is demonstrated that an optimized media segmentation is able to yield high signal-to-noise ratio even with limited ac field power. With proper segmentation, the ac field power could be utilized more efficiently and this can alleviate the requirement for medium damping which has been previously considered a critical limitation. The micro-magnetic modeling also shows that with segmentation optimization, recording signal-to-noise ratio can have very little dependence on damping for different recording linear densities.

  5. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective in object recognition and understanding. The GUI provides the user with the ability to define the gray scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray scale values, while observing the corresponding changes to the image. 
This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
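    A minimal sketch of the two contrast operations the GUI exposes: histogram stretching between the user-chosen bounds followed by gamma correction. The function name and the normalization to [0, 1] are assumptions for illustration.

```python
import numpy as np

def enhance_contrast(img, lo, hi, gamma=1.0):
    """Stretch gray levels in [lo, hi] linearly to [0, 1], clipping
    values outside the range, then apply a gamma correction."""
    stretched = np.clip((img.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return stretched ** gamma
```

    Gamma < 1 brightens midtones and gamma > 1 darkens them, which is why interactively adjusting it while watching the image helps the user separate the object's gray-level range from the background before thresholding.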

  6. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
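    One common formulation of the three discrepancy indices can be sketched as below; treat the exact area terms as assumptions, since the abstract does not spell out the definitions.

```python
import math

def discrepancy_indices(areas_outside_ref, ref_total_area, n_ref, n_seg):
    """PSE: segment area falling outside the reference polygons,
    relative to total reference area.  NSR: normalized difference
    between reference and segment counts.  ED2 combines both."""
    pse = sum(areas_outside_ref) / ref_total_area
    nsr = abs(n_ref - n_seg) / n_ref
    ed2 = math.sqrt(pse ** 2 + nsr ** 2)
    return pse, nsr, ed2
```

    ED2 is small only when segments both match the reference outlines geometrically (low PSE) and correspond one-to-one with reference objects (low NSR), which is why it is used to pick the optimal Scale/Shape/Compactness setting.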

  7. Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Tang, Zhenmin; Liu, Qing

    2017-05-01

    Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves low rank property via the new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to be the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
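    The weighted Schatten-p quasi-norm at the core of WSPQ can be computed directly from the singular values; this sketch assumes the weights are supplied in the same descending order as the singular values, and the function name is hypothetical.

```python
import numpy as np

def weighted_schatten_p(X, weights, p):
    """Weighted Schatten-p quasi-norm (sum_i w_i * sigma_i^p)^(1/p),
    where sigma_i are the singular values of X in descending order."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(weights * s ** p)) ** (1.0 / p)
```

    With all weights equal to one and p = 2 this reduces to the Frobenius norm, while p → 0 pushes the penalty toward the rank function, which is the approximation property the paper exploits.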

  8. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets

    PubMed Central

    Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.

    2016-01-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  9. Variable dose rate single-arc IMAT delivered with a constant dose rate and variable angular spacing

    NASA Astrophysics Data System (ADS)

    Tang, Grace; Earl, Matthew A.; Yu, Cedric X.

    2009-11-01

    Single-arc intensity-modulated arc therapy (IMAT) has gained worldwide interest in both research and clinical implementation due to its superior plan quality and delivery efficiency. Single-arc IMAT techniques such as the Varian RapidArc™ deliver conformal dose distributions to the target in one single gantry rotation, resulting in a delivery time on the order of 2 min. The segments in these techniques are evenly distributed within an arc and are allowed to have different monitor unit (MU) weightings. Therefore, a variable dose-rate (VDR) is required for delivery. Because the VDR requirement complicates the control hardware and software of the linear accelerators (linacs) and prevents most existing linacs from delivering IMAT, we propose an alternative planning approach for IMAT using constant dose-rate (CDR) delivery with variable angular spacing. We prove the equivalence by converting VDR-optimized RapidArc plans to CDR plans, where the evenly spaced beams in the VDR plan are redistributed to uneven spacing such that the segments with larger MU weighting occupy a greater angular interval. To minimize perturbation in the optimized dose distribution, the angular deviation of the segments was restricted to ≤ ±5°. This restriction requires the treatment arc to be broken into multiple sectors such that the local MU fluctuation within each sector is reduced, thereby lowering the angular deviation of the segments during redistribution. The converted CDR plans were delivered with a single gantry sweep as in the VDR plans but each sector was delivered with a different value of CDR. For four patient cases, including two head-and-neck, one brain and one prostate, all CDR plans developed with the variable spacing scheme produced similar dose distributions to the original VDR plans. For plans with complex angular MU distributions, the number of sectors increased up to four in the CDR plans in order to maintain the original plan quality. 
Since each sector was delivered with a different dose rate, extra mode-up time (xMOT) was needed between the transitions of the successive sectors during delivery. On average, the delivery times of the CDR plans were less than 1 min longer than the treatment times of the VDR plans, with an average of about 0.33 min of xMOT per sector transition. The results have shown that VDR may not be necessary for single-arc IMAT. Using variable angular spacing, VDR RapidArc plans can be implemented in clinics that are not equipped with the new VDR-enabled machines without compromising the plan quality or treatment efficiency. With a prospective optimization approach using variable angular spacing, CDR delivery times can be further minimized while maintaining the high delivery efficiency of single-arc IMAT treatment.
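    The core conversion idea, giving each segment an angular interval proportional to its MU weight so that a constant dose rate reproduces the planned MU ratios, can be sketched as follows (ignoring the ±5° deviation limit and the sector splitting; the function name is hypothetical):

```python
def cdr_angular_spacing(mu_weights, arc_span=360.0):
    """Redistribute an evenly spaced VDR arc: each segment gets an
    angular interval proportional to its MU weight, so sweeping at a
    constant dose rate delivers the planned relative MU per segment."""
    total = sum(mu_weights)
    return [arc_span * w / total for w in mu_weights]
```

    For example, three segments with MU weights 1, 1 and 2 over a 180° arc would occupy 45°, 45° and 90°; in the actual method the ≤ ±5° deviation constraint forces the arc to be cut into sectors, each with its own constant dose rate.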

  10. Intensity modulated neutron radiotherapy optimization by photon proxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Michael; Hammoud, Ahmad; Bossenberger, Todd

    2012-08-15

    Purpose: Introducing intensity modulation into neutron radiotherapy (IMNRT) planning has the potential to mitigate some normal tissue complications seen in past neutron trials. While the hardware to deliver IMNRT plans has been in use for several years, until recently the IMNRT planning process has been cumbersome and of lower fidelity than conventional photon plans. Our in-house planning system used to calculate neutron therapy plans allows beam weight optimization of forward planned segments, but does not provide inverse optimization capabilities. Commercial treatment planning systems provide inverse optimization capabilities, but currently cannot model our neutron beam. Methods: We have developed a methodology and software suite to make use of the robust optimization in our commercial planning system while still using our in-house planning system to calculate final neutron dose distributions. Optimized multileaf collimator (MLC) leaf positions for segments designed in the commercial system using a 4 MV photon proxy beam are translated into static neutron ports that can be represented within our in-house treatment planning system. The true neutron dose distribution is calculated in the in-house system and then exported back through the MATLAB software into the commercial treatment planning system for evaluation. Results: The planning process produces optimized IMNRT plans that reduce dose to normal tissue structures as compared to 3D conformal plans using static MLC apertures. The process involves standard planning techniques using a commercially available treatment planning system, and is not significantly more complex than conventional IMRT planning. Using a photon proxy in a commercial optimization algorithm produces IMNRT plans that are more conformal than those previously designed at our center and take much less time to create. 
Conclusions: The planning process presented here allows for the optimization of IMNRT plans by a commercial treatment planning optimization algorithm, potentially allowing IMNRT to achieve similar conformality in treatment as photon IMRT. The only remaining requirements for the delivery of very highly modulated neutron treatments are incremental improvements upon already implemented hardware systems that should be readily achievable.

  11. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations, in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty in identifying an appropriate single segmentation fusion criterion, providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
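    The final decision-making step is TOPSIS. A minimal sketch for benefit-type criteria follows; the normalization and weighting choices are generic assumptions, since the abstract does not specify how the two fusion criteria are fed into the ranking.

```python
import numpy as np

def topsis(scores, weights):
    """Rank alternatives (rows) on benefit criteria (columns) by
    closeness to the ideal solution; returns values in [0, 1],
    higher meaning closer to ideal."""
    m = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
    m = m * weights                               # apply criterion weights
    ideal, anti = m.max(axis=0), m.min(axis=0)    # ideal / anti-ideal points
    d_pos = np.linalg.norm(m - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(m - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

    An alternative that dominates another on every criterion always receives a strictly higher closeness score, which is what makes TOPSIS a natural tie-breaker over a Pareto front of candidate fusions.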

  12. Automatic segmentation of brain hemispheres by midplane detection in class images

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    Segmentation of brain hemispheres is necessary to study left-right differences in structure and function. For extraction of a 3D individual region-of-interest atlas of the human brain, detection of the midplane is the sine qua non as it provides the reference plane for determining other anatomical objects. Extraction of the sagittal midplane is done in two main steps. First, a 2D filter is used to give a first approximation of the midplane position. To model symmetry properties of the midplane neighborhood, the different filter columns contain class-dependent weights for cerebrospinal fluid, gray and white matter. The filter can be rotated in a range of angles. In a user-defined range of planes, the global maximum of the filter response is searched for and the resulting position is utilized to restrict the search in the remaining planes. In a second step, midplane extraction is refined by searching for the optimal path of the midplane within the filter mask at optimum position. Symmetry properties are modeled analogous to the first step with class-dependent weights of the filter columns. The extraction of the midplane gives accurate and reliable results in simulated data sets and patient studies even if asymmetric artifacts are simulated.

  13. Simultaneous segmentation of the bone and cartilage surfaces of a knee joint in 3D

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Zhang, X.; Anderson, D. D.; Brown, T. D.; Hofwegen, C. Van; Sonka, M.

    2009-02-01

    We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering an example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: Building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets. Using the resulting mean-shape model - identification of cartilage, non-cartilage, and transition areas on the mean-shape bone model surfaces. 2) Presegmentation: Employment of iterative optimal surface detection method to achieve approximate segmentation of individual bone surfaces. 3) Cross-object surface mapping: Detection of inter-bone equidistant separating sheets to help identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: Construction of a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero and non-zero intervening distances can be detected for each bone of the joint, according to whether or not cartilage can be locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multiobject, multi-surface graph to yield a globally optimal solution. The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative database. The average signed surface positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm. When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from 0.00 to 0.26 mm. 
The results showed that this framework provides robust, accurate, and reproducible segmentation of the knee joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multi-object segmentation problems.

  14. Acoustic design of boundary segments in aircraft fuselages using topology optimization and a specialized acoustic pressure function

    NASA Astrophysics Data System (ADS)

    Radestock, Martin; Rose, Michael; Monner, Hans Peter

    2017-04-01

    In most aviation applications, a major cost benefit can be achieved by a reduction of the system weight. Often the acoustic properties of the fuselage structure are also not a focus of the primary design process. A final correction of poor acoustic properties is usually done using insulation mats in the chamber between the primary and secondary shell. It is plausible that a more sophisticated material distribution in that area can result in a substantially reduced weight. Topology optimization is a well-known approach to reduce material of compliant structures. In this paper an adaption of this method to acoustic problems is investigated. The gap full of insulation mats is suitably parameterized to achieve different material distributions. To find advantageous configurations, the objective in the underlying topology optimization is chosen to obtain good acoustic pressure patterns in the aircraft cabin. An important task in the optimization is an adequate Finite Element model of the system. This can usually not be obtained from commercially available programs due to the lack of special sensitivity data with respect to the design parameters. Therefore an appropriate implementation of the algorithm has been done, exploiting the vector and matrix capabilities in the MATLAB environment. Finally some new aspects of the Finite Element implementation will also be presented, since they are interesting in their own right and can be generalized to efficiently solve other partial differential equations as well.

  15. Optimization lighting layout based on gene density improved genetic algorithm for indoor visible light communications

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Wang, Xin; Chen, Yong; Kong, Deqian; Xia, Peijie

    2017-05-01

    For indoor visible light communication systems, the layout of LED lamps affects the uniformity of the received power on the communication plane. In order to find an optimized lighting layout that meets both the lighting needs and communication needs, a gene density genetic algorithm (GDGA) is proposed. In GDGA, a gene indicates a pair of abscissa and ordinate of a LED, and an individual represents a LED layout in the room. The segmented crossover operation and gene mutation strategy based on gene density are put forward to make the received power on the communication plane more uniform and increase the population's diversity. A weighted differences function between individuals is designed as the fitness function of GDGA for reserving the population having the useful LED layout genetic information and ensuring the global convergence of GDGA. Compared with the square layout and the circular layout, the optimized layout achieved by the GDGA increases the power uniformity by 83.3%, 83.1% and 55.4%, respectively. Furthermore, the convergence of GDGA is verified compared with an evolutionary algorithm (EA). Experimental results show that GDGA can quickly find an approximation of the optimal layout.

  16. Ultra-high field upper extremity peripheral nerve and non-contrast enhanced vascular imaging

    PubMed Central

    Raval, Shailesh B.; Britton, Cynthia A.; Zhao, Tiejun; Krishnamurthy, Narayanan; Santini, Tales; Gorantla, Vijay S.; Ibrahim, Tamer S.

    2017-01-01

    Objective The purpose of this study was to explore the efficacy of Ultra-high field [UHF] 7 Tesla [T] MRI as compared to 3T MRI in non-contrast enhanced [nCE] imaging of structural anatomy in the elbow, forearm, and hand [upper extremity]. Materials and methods A wide range of sequences including T1 weighted [T1] volumetric interpolated breath-hold exam [VIBE], T2 weighted [T2] double-echo steady state [DESS], susceptibility weighted imaging [SWI], time-of-flight [TOF], diffusion tensor imaging [DTI], and diffusion spectrum imaging [DSI] were optimized and incorporated with a radiofrequency [RF] coil system composed of a transverse electromagnetic [TEM] transmit coil combined with an 8-channel receive-only array for 7T upper extremity [UE] imaging. In addition, Siemens optimized protocols/sequences were used on a 3T scanner, and the resulting T1 VIBE and T2 DESS images were compared to those obtained at 7T qualitatively and quantitatively [SWI was only qualitatively compared]. DSI Studio was utilized to identify nerves based on analysis of diffusion-weighted derived fractional anisotropy images. Images of forearm vasculature were extracted using a paint-grow manual segmentation method based on MIPAV [Medical Image Processing, Analysis, and Visualization]. Results High-resolution images of the hand, forearm, and elbow with high signal-to-noise ratio [SNR] and contrast-to-noise ratio [CNR] were acquired with nearly homogeneous 7T excitation. Measured SNR and CNR values [obtained from the T1 VIBE and T2 DESS sequences] were almost doubled at 7T vs. 3T. Cartilage, synovial fluid, and tendon structures could be seen with higher clarity in the 7T T1 and T2 weighted images. SWI allowed high-resolution and better quality imaging of large and medium sized arteries and veins, capillary networks, and arteriovenous anastomoses at 7T when compared to 3T. The 7T diffusion-weighted sequence [not performed at 3T] demonstrated that the forearm nerves are clearly delineated by fiber tractography. The proper digital palmar arteries and superficial palmar arch could also be clearly visualized using TOF nCE 7T MRI. Conclusion Ultra-high resolution neurovascular imaging in upper extremities is possible at 7T without use of renally toxic intravenous contrast. 7T MRI can provide superior peripheral nerve [based on fiber anisotropy and diffusion coefficient parameters derived from diffusion tensor/spectrum imaging] and vascular [nCE MRA and vessel segmentation] imaging. PMID:28662061

  17. Supply chain optimization: a practitioner's perspective on the next logistics breakthrough.

    PubMed

    Schlegel, G L

    2000-08-01

    The objective of this paper is to profile a practitioner's perspective on supply chain optimization and highlight the critical elements of this potential new logistics breakthrough idea. The introduction will briefly describe the existing distribution network and business environment, including operational statistics, manufacturing software, and hardware configurations. The first segment will cover the critical success factors, or foundation elements, that are prerequisites for success. The second segment will give a glimpse of a "working game plan" for successful migration to supply chain optimization. The final segment will briefly profile the "bottom-line" benefits to be derived from the use of supply chain optimization as a strategy, tactical tool, and competitive advantage.

  18. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows which in turn involves a two-step procedure. The first step is a preliminarily image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.

  19. Brominated flame retardants (BFRs) and Dechlorane Plus (DP) in paired human serum and segmented hair.

    PubMed

    Qiao, Lin; Zheng, Xiao-Bo; Yan, Xiao; Wang, Mei-Huang; Zheng, Jing; Chen, She-Jun; Yang, Zhong-Yi; Mai, Bi-Xian

    2018-01-01

    Brominated flame retardants (BFRs) and Dechlorane Plus (DP) were measured in paired human hair and serum samples from a cohort of university students in South China. Segmental analysis was conducted to explore gender differences and the relationships between hair and serum. The concentrations of total PBDEs in the hair and serum samples ranged from 0.28 to 34.1 ng/g dry weight (dw) and from 0.16 to 156 ng/g lipid weight (lw), respectively. Concentrations of ∑DPs (sum of the syn-DP and anti-DP isomers) in all hair samples ranged from nd to 5.45 ng/g dry weight. Concentrations of most PBDEs and decabromodiphenylethane (DBDPE) in the distal segments (5-10 cm from the scalp) were higher than those in the proximal segments (0-5 cm from the scalp) (t-test, p < 0.05), which could be due to the longer exposure time of the distal segments. The proximal segments exhibited a unique congener profile, closer to that of the serum than to that of the distal segments. An obvious gender difference was found in the levels of ∑PBDEs using integrated hair samples, while the difference disappeared when only the proximal segments (0-5 cm from the scalp) were considered for both genders. This paper supplements current knowledge on the sources of BFRs and DP in hair and underscores the importance of segmental analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Implementation and assessment of diffusion-weighted partial Fourier readout-segmented echo-planar imaging.

    PubMed

    Frost, Robert; Porter, David A; Miller, Karla L; Jezzard, Peter

    2012-08-01

    Single-shot echo-planar imaging has been used widely in diffusion magnetic resonance imaging due to the difficulties in correcting motion-induced phase corruption in multishot data. Readout-segmented EPI has addressed the multishot problem by introducing a two-dimensional nonlinear navigator correction with online reacquisition of uncorrectable data to enable acquisition of high-resolution diffusion data with reduced susceptibility artifact and T*(2) blurring. The primary shortcoming of readout-segmented EPI in its current form is its long acquisition time (longer than similar resolution single-shot echo-planar imaging protocols by approximately the number of readout segments), which limits the number of diffusion directions. By omitting readout segments at one side of k-space and using partial Fourier reconstruction, readout-segmented EPI imaging times could be reduced. In this study, the effects of homodyne and projection onto convex sets reconstructions on estimates of the fractional anisotropy, mean diffusivity, and diffusion orientation in fiber tracts and raw T(2)- and trace-weighted signal are compared, along with signal-to-noise ratio results. It is found that projections onto convex sets reconstruction with 3/5 segments in a 2 mm isotropic diffusion tensor image acquisition and 9/13 segments in a 0.9 × 0.9 × 4.0 mm(3) diffusion-weighted image acquisition provide good fidelity relative to the full k-space parameters. This allows application of readout-segmented EPI to tractography studies, and clinical stroke and oncology protocols. Copyright © 2011 Wiley-Liss, Inc.
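    The partial Fourier idea the study evaluates can be sketched in one dimension, assuming the standard POCS formulation: a phase estimate from the symmetric centre of k-space is alternated with a data-consistency projection onto the measured samples. The test object, sampling fraction, and iteration count below are illustrative, not the paper's acquisition parameters.

```python
import numpy as np

def pocs_partial_fourier(k_shift, mask, centre, n_iter=30):
    """POCS partial-Fourier reconstruction, 1-D sketch.
    k_shift : fftshift-ed k-space with unacquired samples set to zero.
    mask    : boolean, True where k_shift was actually measured.
    centre  : boolean, symmetric low-frequency block (must lie inside
              mask) used for the low-resolution phase estimate."""
    lowres = np.fft.ifft(np.fft.ifftshift(np.where(centre, k_shift, 0)))
    phase = np.exp(1j * np.angle(lowres))
    x = np.fft.ifft(np.fft.ifftshift(k_shift))      # zero-filled start
    for _ in range(n_iter):
        x = np.abs(x) * phase                       # phase constraint
        k = np.fft.fftshift(np.fft.fft(x))
        k[mask] = k_shift[mask]                     # data consistency
        x = np.fft.ifft(np.fft.ifftshift(k))
    return np.abs(x)

n = 64
true = np.zeros(n); true[20:40] = 1.0               # real test object
k_full = np.fft.fftshift(np.fft.fft(true))
mask = np.zeros(n, bool); mask[: int(0.625 * n)] = True   # 5/8 of k-space
centre = np.zeros(n, bool); centre[n // 2 - 8: n // 2 + 8] = True
k_meas = np.where(mask, k_full, 0)
recon = pocs_partial_fourier(k_meas, mask, centre)
zf = np.abs(np.fft.ifft(np.fft.ifftshift(k_meas)))  # zero-filled baseline
print(np.abs(recon - true).mean(), np.abs(zf - true).mean())
```

    The omitted readout segments correspond to the zeroed side of `mask`; the iteration exploits the near-Hermitian symmetry of the k-space of a slowly varying-phase object to restore the missing high frequencies.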

  1. Tissue classification and segmentation of pressure injuries using convolutional neural networks.

    PubMed

    Zahia, Sofia; Sierra-Sosa, Daniel; Garcia-Zapirain, Begonya; Elmaghraby, Adel

    2018-06-01

    This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damage that requires frequent diagnosis and treatment, so reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissues). A preprocessing step removes the flash light and creates a set of 5x5 sub-images which are used as input to the CNN. The network then classifies every sub-image of the validation set into one of the three classes studied. The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Our system has been shown to make recognition of complicated structures in biomedical images feasible. Copyright © 2018 Elsevier B.V. All rights reserved.
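    The per-class Dice Similarity Coefficient reported above can be computed directly from the predicted and reference label maps; the tiny label arrays below are purely illustrative.

```python
import numpy as np

def dice(pred, truth, label):
    """Dice similarity coefficient for one tissue class:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]; 1.0 when both sets are empty."""
    a = (pred == label)
    b = (truth == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy maps with labels 0 = granulation, 1 = slough, 2 = necrotic (illustrative)
truth = np.array([0, 1, 1, 2, 2, 2])
pred  = np.array([0, 1, 2, 2, 2, 2])
print(dice(pred, truth, 1))  # 2 * 1 / (1 + 2) = 2/3
```

    A class-weighted average of these per-class values (weights proportional to class size) gives the "total weighted Dice" style of summary quoted in the abstract.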

  2. Robust tissue-air volume segmentation of MR images based on the statistics of phase and magnitude: Its applications in the display of susceptibility-weighted imaging of the brain.

    PubMed

    Du, Yiping P; Jin, Zhaoyang

    2009-10-01

    To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
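    The local standard deviations in a 3 x 3 x 3 kernel can be computed efficiently with box filters via Var(x) = E[x^2] - E[x]^2. The sketch below illustrates only the phase-difference statistic; the synthetic "air" and "tissue" phase volumes are assumptions, and the paper's full multivariate measure also combines the magnitude statistic, the phase-uniformity term, and background-phase correction.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(vol, size=3):
    """Standard deviation over a size^3 kernel at every voxel,
    computed as sqrt(E[x^2] - E[x]^2) with box (uniform) filters."""
    v = vol.astype(float)
    mean = uniform_filter(v, size)
    mean_sq = uniform_filter(v * v, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def phase_noise_map(phase, size=3):
    """Local std of the first-order phase difference: high where the
    phase is spatially random (air), low where it varies smoothly."""
    dphi = np.diff(phase, axis=0, prepend=phase[:1])
    return local_std(dphi, size)

rng = np.random.default_rng(0)
air = rng.uniform(-np.pi, np.pi, (8, 8, 8))     # random phase, as in air
z = np.linspace(0, 1, 8)[:, None, None]
tissue = 0.3 * z + np.zeros((8, 8, 8))          # smooth phase ramp
print(phase_noise_map(air).mean(), phase_noise_map(tissue).mean())
```

    Thresholding a combined map of these statistics separates air (high local spread in both phase and magnitude) from tissue, which is the core of the segmentation described above.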

  3. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments showed a significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of interaction is one of the most important aspects determining the acceptance or rejection of the approach by users, who expect a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    PubMed

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. Therefore the purposes of the study were to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR)-images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR-images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly comparing the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimation of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
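    The modified ellipsoid estimate combines the standard clinical ellipsoid formula, V = (pi/6) x length x width x depth, with the study's empirical scaling factor of 0.85; the example dimensions below are illustrative.

```python
import math

def kidney_volume_ml(length_cm, width_cm, depth_cm, k=0.85):
    """Modified ellipsoid estimate of kidney volume: the standard
    ellipsoid volume (pi/6 * L * W * D, in ml for cm inputs) scaled
    by the empirical correction factor k = 0.85 used in the study."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm * k

# e.g. an 11 x 5 x 4 cm kidney (illustrative dimensions)
print(round(kidney_volume_ml(11, 5, 4), 1))  # prints 97.9
```

    With `k=1.0` the function reduces to the unmodified ellipsoid formula, which the study found systematically less accurate against the segmented reference volume.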

  5. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  6. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Segmentation and texture analysis of structural biomarkers using neighborhood-clustering-based level set in MRI of the schizophrenic brain.

    PubMed

    Latha, Manohar; Kavitha, Ganesan

    2018-02-03

    Schizophrenia (SZ) is a psychiatric disorder that especially affects individuals during their adolescence. There is a need to study the subanatomical regions of the SZ brain on magnetic resonance images (MRI) using morphometry. In this work, an attempt was made to analyze alterations in structure and texture patterns in images of the SZ brain using the level-set method and Laws texture features. T1-weighted MRI of the brain from the Center of Biomedical Research Excellence (COBRE) database were considered for analysis. Segmentation was carried out using the level-set method. Geometrical and Laws texture features were extracted from the segmented brain stem, corpus callosum, cerebellum, and ventricle regions to analyze pattern changes in SZ. The level-set method segmented multiple brain regions, with higher similarity and correlation values compared with an optimized method. The geometric features obtained from the corpus callosum and ventricle regions showed significant variation (p < 0.00001) between normal and SZ brains. Laws texture features identified a heterogeneous appearance in the brain stem, corpus callosum, and ventricular regions, and features from the brain stem were correlated with the Positive and Negative Syndrome Scale (PANSS) score (p < 0.005). A framework of geometric and Laws texture features obtained from brain subregions can be used as a supplement for diagnosis of psychiatric disorders.

  8. Fast automated segmentation of multiple objects via spatially weighted shape learning

    NASA Astrophysics Data System (ADS)

    Chandra, Shekhar S.; Dowling, Jason A.; Greer, Peter B.; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart

    2016-11-01

    Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice’s similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.

  9. Fast automated segmentation of multiple objects via spatially weighted shape learning.

    PubMed

    Chandra, Shekhar S; Dowling, Jason A; Greer, Peter B; Martin, Jarad; Wratten, Chris; Pichler, Peter; Fripp, Jurgen; Crozier, Stuart

    2016-11-21

    Active shape models (ASMs) have proved successful in automatic segmentation by using shape and appearance priors in a number of areas such as prostate segmentation, where accurate contouring is important in treatment planning for prostate cancer. The ASM approach however, is heavily reliant on a good initialisation for achieving high segmentation quality. This initialisation often requires algorithms with high computational complexity, such as three dimensional (3D) image registration. In this work, we present a fast, self-initialised ASM approach that simultaneously fits multiple objects hierarchically controlled by spatially weighted shape learning. Prominent objects are targeted initially and spatial weights are progressively adjusted so that the next (more difficult, less visible) object is simultaneously initialised using a series of weighted shape models. The scheme was validated and compared to a multi-atlas approach on 3D magnetic resonance (MR) images of 38 cancer patients and had the same (mean, median, inter-rater) Dice's similarity coefficients of (0.79, 0.81, 0.85), while having no registration error and a computational time of 12-15 min, nearly an order of magnitude faster than the multi-atlas approach.

  10. Shape complexes: the intersection of label orderings and star convexity constraints in continuous max-flow medical image segmentation

    PubMed Central

    Baxter, John S. H.; Inoue, Jiro; Drangova, Maria; Peters, Terry M.

    2016-01-01

    Abstract. Optimization-based segmentation approaches deriving from discrete graph-cuts and continuous max-flow have become increasingly nuanced, allowing for topological and geometric constraints on the resulting segmentation while retaining global optimality. However, these two considerations, topological and geometric, have yet to be combined in a unified manner. The concept of “shape complexes,” which combine geodesic star convexity with extendable continuous max-flow solvers, is presented. These shape complexes allow more complicated shapes to be created through the use of multiple labels and super-labels, with geodesic star convexity governed by a topological ordering. These problems can be optimized using extendable continuous max-flow solvers. Previous approaches required computationally expensive coordinate system warping, which is ill-defined and ambiguous in the general case. These shape complexes are demonstrated in a set of synthetic images as well as vessel segmentation in ultrasound, valve segmentation in ultrasound, and atrial wall segmentation from contrast-enhanced CT. Shape complexes represent an extendable tool alongside other continuous max-flow methods that may be suitable for a wide range of medical image segmentation problems. PMID:28018937

  11. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China

    PubMed Central

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-01-01

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430

  12. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    PubMed

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.
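    The particle swarm step can be sketched generically; below, a toy quadratic objective stands in for the SVM cross-validation error that the method minimizes over the classifier's parameters (e.g. a (C, gamma) pair). The inertia and acceleration constants are common textbook defaults, not values from the paper.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser. Each particle tracks its own
    best position (pbest); the swarm shares a global best (g), which in
    the paper's setting would hold the best SVM parameter pair."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy objective standing in for the SVM cross-validation error
best, val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                bounds=[(-5, 5), (-5, 5)])
print(best, val)
```

    In the coupled model, this search would be run once per GWR-derived prediction region, so each local SVM gets its own tuned parameters.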

  13. Automated segmentation of the lungs from high resolution CT images for quantitative study of chronic obstructive pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.

    2005-04-01

    Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold, followed by morphological closing with a spherical element of size 23x23x5, yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs more completely than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
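    The threshold-then-close pipeline can be sketched with scipy; here the "spherical element" is an ellipsoid of the given voxel dimensions (e.g. 23x23x5 in the study), and the toy binary volume with a vessel-sized hole is an assumption for demonstration since no CT data or HU threshold is applied.

```python
import numpy as np
from scipy import ndimage

def spherical_element(nx, ny, nz):
    """Ellipsoidal structuring element of nx x ny x nz voxels,
    analogous to the paper's 'spherical elements' such as 23x23x5."""
    x, y, z = np.ogrid[-1:1:nx * 1j, -1:1:ny * 1j, -1:1:nz * 1j]
    return x ** 2 + y ** 2 + z ** 2 <= 1.0

def smooth_mask(mask, elem):
    """Morphological close (dilation then erosion) to fill small gaps,
    e.g. vessel cross-sections excluded by the intensity threshold."""
    return ndimage.binary_closing(mask, structure=elem)

# toy thresholded 'lung' mask with an 8-voxel vessel-sized hole
mask = np.zeros((24, 24, 14), bool)
mask[6:18, 6:18, 4:10] = True
mask[11:13, 11:13, 6:8] = False            # hole left by a vessel
closed = smooth_mask(mask, spherical_element(7, 7, 5))
print(mask.sum(), closed.sum())            # the hole is filled back in
```

    Larger in-plane than through-plane element sizes (as in 23x23x5) reflect the anisotropic voxels of high-resolution CT, where slice spacing exceeds in-plane resolution.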

  14. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    PubMed Central

    Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231

  15. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven IMRT H and N and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for H and N IMRT cases, with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for female pelvic cases. The overall percent (absolute) reduction in the numbers of MU and segments was found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.

  16. Semi-automatic segmentation of myocardium at risk in T2-weighted cardiovascular magnetic resonance.

    PubMed

    Sjögren, Jane; Ubachs, Joey F A; Engblom, Henrik; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2012-01-31

    T2-weighted cardiovascular magnetic resonance (CMR) has been shown to be a promising technique for determination of ischemic myocardium, referred to as myocardium at risk (MaR), after an acute coronary event. Quantification of MaR in T2-weighted CMR has been proposed to be performed by manual delineation or by the threshold methods of two standard deviations from remote (2SD), full width at half maximum intensity (FWHM) or Otsu. However, manual delineation is subjective and threshold methods have inherent limitations related to threshold definition and lack of a priori information about cardiac anatomy and physiology. Therefore, the aim of this study was to develop an automatic segmentation algorithm for quantification of MaR using anatomical a priori information. Forty-seven patients with first-time acute ST-elevation myocardial infarction underwent T2-weighted CMR within 1 week after admission. Endocardial and epicardial borders of the left ventricle, as well as the hyperenhanced MaR regions, were manually delineated by experienced observers and used as the reference method. A new automatic segmentation algorithm, called Segment MaR, defines the MaR region as the continuous region most probable of being MaR, by estimating the intensities of normal myocardium and MaR with an expectation maximization algorithm and restricting the MaR region by an a priori model of the maximal extent for the user-defined culprit artery. The segmentation by Segment MaR was compared against the interobserver variability of manual delineation and the threshold methods of 2SD, FWHM and Otsu. MaR was 32.9 ± 10.9% of left ventricular mass (LVM) when assessed by the reference observer and 31.0 ± 8.8% of LVM when assessed by Segment MaR. The bias and correlation were -1.9 ± 6.4% of LVM, R = 0.81 (p < 0.001) for Segment MaR; -2.3 ± 4.9%, R = 0.91 (p < 0.001) for interobserver variability of manual delineation; -7.7 ± 11.4%, R = 0.38 (p = 0.008) for 2SD; -21.0 ± 9.9%, R = 0.41 (p = 0.004) for FWHM; and 5.3 ± 9.6%, R = 0.47 (p < 0.001) for Otsu. There is good agreement between automatic Segment MaR and manually assessed MaR in T2-weighted CMR. Thus, the proposed algorithm seems to be a promising, objective method for standardized MaR quantification in T2-weighted CMR.
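
    The expectation-maximization step that Segment MaR uses to model normal-myocardium and MaR intensities can be sketched as a minimal 1-D two-component Gaussian mixture EM. This is a generic textbook EM, not the authors' implementation; the initialization and iteration count are assumptions.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    # 1-D expectation-maximization for a two-component Gaussian mixture,
    # modelling "normal myocardium" vs. "MaR" pixel intensities.
    mu = np.percentile(x, [25, 75]).astype(float)    # crude initialization
    sigma = np.full(2, x.std() / 2 + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: posterior responsibility of each component per pixel
        d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return pi, mu, sigma
```

    Pixels whose posterior favours the brighter component would then be candidates for the MaR region, before the a priori extent model is applied.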

  17. Hippocampus segmentation using locally weighted prior based level set

    NASA Astrophysics Data System (ADS)

    Achuthan, Anusha; Rajeswari, Mandava

    2015-12-01

    Segmentation of the hippocampus is one of the major challenges in medical image segmentation because of its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures, such as the amygdala. This intensity similarity gives the hippocampus weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results; prior information, such as shape and spatial information, must be assimilated into the segmentation method to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been utilized in a globally integrated manner, which does not reflect the real scenario during clinical delineation. Therefore, in this paper, a level set model with locally integrated prior information is presented. This work utilizes a mean shape model, integrated as prior information into the level set model, to provide automatic initialization for the level set evolution. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during the level set evolution; the map indicates which voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as the geodesic active contour, yields an improvement of 9% in average Dice coefficient.
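
    The voxel-wise arbitration between edge information and the shape prior can be sketched with an edge-stopping weight map of the kind used in geodesic active contours. The functional form 1/(1 + alpha*|grad I|^2) and the `alpha` parameter are assumptions for illustration, not the paper's exact map.

```python
import numpy as np

def edge_weight_map(image, alpha=1.0):
    # Edge-stopping weight in (0, 1]: near 0 on strong edges (trust the
    # image gradient), near 1 in flat regions (fall back on the shape prior).
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return 1.0 / (1.0 + alpha * grad_mag ** 2)
```

    A level set update could then blend the edge-driven and prior-driven forces per voxel in proportion to this weight.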

  18. A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation

    PubMed Central

    Sandhu, Romeil; Dambreville, Samuel; Yezzi, Anthony; Tannenbaum, Allen

    2013-01-01

    In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Thus, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one’s training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. PMID:20733218

  19. An adipose segmentation and quantification scheme for the intra abdominal region on minipigs

    NASA Astrophysics Data System (ADS)

    Engholm, Rasmus; Dubinskiy, Aleksandr; Larsen, Rasmus; Hanson, Lars G.; Christoffersen, Berit Østergaard

    2006-03-01

    This article describes a method for automatic segmentation of the abdomen into three anatomical regions: subcutaneous, retroperitoneal and visceral. For the last two regions the amount of adipose tissue (fat) is quantified. According to recent medical research, the distinction between retroperitoneal and visceral fat is important for studying metabolic syndrome, which is closely related to diabetes. However previous work has neglected to address this point, treating the two types of fat together. We use T1-weighted three-dimensional magnetic resonance data of the abdomen of obese minipigs. The pigs were manually dissected right after the scan, to produce the "ground truth" segmentation. We perform automatic segmentation on a representative slice, which on humans has been shown to correlate with the amount of adipose tissue in the abdomen. The process of automatic fat estimation consists of three steps. First, the subcutaneous fat is removed with a modified active contour approach. The energy formulation of the active contour exploits the homogeneous nature of the subcutaneous fat and the smoothness of the boundary. Subsequently the retroperitoneal fat located around the abdominal cavity is separated from the visceral fat. For this, we formulate a cost function on a contour, based on intensities, edges, distance to center and smoothness, so as to exploit the properties of the retroperitoneal fat. We then globally optimize this function using dynamic programming. Finally, the fat content of the retroperitoneal and visceral regions is quantified based on a fuzzy c-means clustering of the intensities within the segmented regions. The segmentation proved satisfactory by visual inspection, and closely correlated with the manual dissection data. The correlation was 0.89 for the retroperitoneal fat, and 0.74 for the visceral fat.
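
    The final quantification step above uses fuzzy c-means clustering of intensities inside each segmented region. A minimal 1-D fuzzy c-means sketch is shown below; the fuzzifier m = 2, iteration count, and initialization are conventional defaults, not the authors' reported settings.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, seed=0):
    # Basic fuzzy c-means on 1-D intensities: returns cluster centers and
    # the membership matrix used to quantify the fat fraction of a region.
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(x[:, None] - centers) + 1e-9
        # Standard FCM membership update: u ~ dist^(-2/(m-1)), normalized.
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

    The fat fraction of a region can then be read off as the mean membership of the fat cluster, e.g. `u[:, np.argmax(centers)].mean()` if fat is the brighter class in T1-weighted data.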

  20. Effect of Epidural stimulation of the lumbosacral spinal cord on voluntary movement, standing, and assisted stepping after motor complete paraplegia: a case study

    PubMed Central

    Harkema, Susan; Gerasimenko, Yury; Hodes, Jonathan; Burdick, Joel; Angeli, Claudia; Chen, Yangsheng; Ferreira, Christie; Willhite, Andrea; Rejc, Enrico; Grossman, Robert G.; Edgerton, V. Reggie

    2011-01-01

    Background Repeated periods of stimulation of the spinal cord and training seem to have amplified the ability to consciously control movement. Methods An individual three years post C7-T1 subluxation presented with a complete loss of clinically detectable voluntary motor function and partial preservation of sensation below the T1 cord segment. Following 170 locomotor training sessions, a 16-electrode array was surgically placed on the dura (L1-S1 cord segments) to allow for chronic electrical stimulation. After implantation and throughout stand retraining with epidural stimulation, 29 experiments were performed. Extensive stimulation combinations and parameters were tested to achieve standing and stepping. Findings Epidural stimulation enabled the human lumbosacral spinal circuitry to dynamically elicit full weight-bearing standing, with assistance provided only for balance, for 4·25 minutes in a subject with a clinically motor complete SCI. This occurred when using stimulation at parameters optimized for standing while providing bilateral load-bearing proprioceptive input. Locomotor-like patterns were also observed when stimulation parameters were optimized for stepping. In addition, seven months after implantation, the subject recovered supraspinal control of certain leg movements, but only during epidural stimulation. Interpretation Even after a severe low cervical spinal injury, the neural networks remaining within the lumbosacral segments can be reactivated into functional states so that they can recognize specific details of ensembles of sensory input and serve as a source of neural control. In addition, newly formed supraspinal input to these same lumbosacral segments can re-emerge as another source of control. Task-specific training with epidural stimulation may have reactivated previously silent spared neural circuits or promoted plasticity. 
This suggests that these interventions could be a viable clinical approach for functional recovery after severe paralysis. Funding National Institutes of Health and Christopher and Dana Reeve Foundation. PMID:21601270

  1. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient augmented-multiplier algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computational speed. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  2. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
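
    Sauvola's adaptive threshold, named above, computes a per-pixel threshold T = m * (1 + k * (s/R - 1)) from the local mean m and standard deviation s within a sliding window. The sketch below implements that formula in Python rather than MATLAB; the window size, k, and R values are common defaults, not the study's parameters.

```python
import numpy as np
from scipy import ndimage

def sauvola_threshold(image, window=15, k=0.2, R=128.0):
    # Sauvola's local threshold: T = m * (1 + k * (s / R - 1)), where m and
    # s are the mean and standard deviation inside the sliding window.
    img = image.astype(float)
    mean = ndimage.uniform_filter(img, size=window)
    sq_mean = ndimage.uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return img > threshold   # True = bright phase; voids fall below T
```

    In flat regions s is near zero and the threshold drops well below the local mean, which keeps uniform background from being speckled; near void edges the local statistics pull the threshold toward the contrast boundary.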

  3. Learning-Based Object Identification and Segmentation Using Dual-Energy CT Images for Security.

    PubMed

    Martin, Limor; Tuysuzoglu, Ahmet; Karl, W Clem; Ishwar, Prakash

    2015-11-01

    In recent years, baggage screening at airports has included the use of dual-energy X-ray computed tomography (DECT), an advanced technology for nondestructive evaluation. The main challenge remains to reliably find and identify threat objects in the bag from DECT data. This task is particularly hard due to the wide variety of objects, the high clutter, and the presence of metal, which causes streaks and shading in the scanner images. Image noise and artifacts are generally much more severe than in medical CT and can lead to splitting of objects and inaccurate object labeling. The conventional approach performs object segmentation and material identification in two decoupled processes. Dual-energy information is typically not used for the segmentation, and object localization is not explicitly used to stabilize the material parameter estimates. We propose a novel learning-based framework for joint segmentation and identification of objects directly from volumetric DECT images, which is robust to streaks, noise and variability due to clutter. We focus on segmenting and identifying a small set of objects of interest with characteristics that are learned from training images, and consider everything else as background. We include data weighting to mitigate metal artifacts and incorporate an object boundary field to reduce object splitting. The overall formulation is posed as a multilabel discrete optimization problem and solved using an efficient graph-cut algorithm. We test the method on real data and show its potential for producing accurate labels of the objects of interest without splits in the presence of metal and clutter.

  4. LOGISMOS—Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces: Cartilage Segmentation in the Knee Joint

    PubMed Central

    Zhang, Xiangmin; Williams, Rachel; Wu, Xiaodong; Anderson, Donald D.; Sonka, Milan

    2011-01-01

    A novel method for simultaneous segmentation of multiple interacting surfaces belonging to multiple interacting objects, called LOGISMOS (layered optimal graph image segmentation of multiple objects and surfaces), is reported. The approach is based on the algorithmic incorporation of multiple spatial inter-relationships in a single n-dimensional graph, followed by graph optimization that yields a globally optimal solution. The LOGISMOS method’s utility and performance are demonstrated on a bone and cartilage segmentation task in the human knee joint. Although trained on only nine example images, the system achieved good performance. Judged by Dice similarity coefficients (DSC) using a leave-one-out test, DSC values of 0.84 ± 0.04, 0.80 ± 0.04 and 0.80 ± 0.04 were obtained for the femoral, tibial, and patellar cartilage regions, respectively. These are excellent DSC values, considering the narrow-sheet character of the cartilage regions. Similarly, low signed mean cartilage thickness errors were obtained when compared to a manually-traced independent standard in 60 randomly selected 3-D MR image datasets from the Osteoarthritis Initiative database—0.11 ± 0.24, 0.05 ± 0.23, and 0.03 ± 0.17 mm for the femoral, tibial, and patellar cartilage thickness, respectively. The average signed surface positioning errors for the six detected surfaces ranged from 0.04 ± 0.12 mm to 0.16 ± 0.22 mm. The reported LOGISMOS framework provides robust and accurate segmentation of the knee joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multiobject multisurface segmentation problems. PMID:20643602
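
    The Dice similarity coefficient used to score these segmentations is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 = perfect overlap.
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    For thin, sheet-like structures such as cartilage, even small boundary errors remove a large fraction of the overlap, which is why DSC values near 0.8 are considered strong in this setting.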

  5. The effect of segmental weight of prosthesis on hemodynamic responses and energy expenditure of lower extremity amputees

    PubMed Central

    Mutlu, Akmer; Kharooty, Mohammad Dawood; Yakut, Yavuz

    2017-01-01

    [Purpose] The aim of this study was to investigate the effect of the segmental weight of the prosthesis on hemodynamic responses and energy expenditure in lower extremity amputees. [Subjects and Methods] Thirteen patients with a mean age of 44 ± 15.84 years and with unilateral transtibial, transfemoral or Syme’s amputation were included in the study. The difference between the lightest and the heaviest prosthesis, 250 g, was used as the added weight. All the patients completed the measurements first without weight and then with the 250 g weight on the ankle joint. The blood pressure and heart rate of the patients were recorded before and after the Six Minute Walk Test (6MWT) and a 10-stair ascent and descent test. The Physiological Cost Index was used to calculate energy expenditure. [Results] Heart rate and energy expenditure increased significantly when the results without and with the weight were compared. [Conclusion] We conclude that the segmental weight of the prosthetic limb has a significant effect on the heart rate and energy expenditure but has no effect on the systolic and diastolic blood pressure of lower limb amputees. In order to generalize our results to lower limb amputees, more patients need to be included in future studies. PMID:28533599
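
    The Physiological Cost Index used above is conventionally defined as the heart-rate increase during walking divided by walking speed, giving a cost in beats per metre. A one-line sketch (the unit choices are the standard convention, not taken from this paper):

```python
def physiological_cost_index(hr_walking, hr_resting, speed_m_per_min):
    # PCI (beats/metre) = (walking HR - resting HR) / walking speed.
    # hr_* in beats/min, speed in metres/min.
    return (hr_walking - hr_resting) / speed_m_per_min
```

    For the 6MWT, the speed term is typically the distance walked divided by six minutes.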

  6. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of human brain and the presence of intensity inhomogeneities. To propose a method that effectively segments brain tumor from MR images and to evaluate the performance of unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with UOFC algorithm effectively segments brain tumor from MR images.
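
    The sensitivity and specificity figures reported above follow the usual confusion-matrix definitions; a small helper makes the arithmetic explicit (the counts below are illustrative, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN): fraction of true tumor voxels found.
    # Specificity = TN / (TN + FP): fraction of background correctly rejected.
    return tp / (tp + fn), tn / (tn + fp)
```

    Note that a high sensitivity paired with a very low specificity, as quoted in the abstract, means the method rarely misses tumor but frequently labels background as tumor.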

  7. Non-contrast T1-mapping detects acute myocardial edema with high diagnostic accuracy: a comparison to T2-weighted cardiovascular magnetic resonance

    PubMed Central

    2012-01-01

    Background T2w-CMR is used widely to assess myocardial edema. Quantitative T1-mapping is also sensitive to changes in free water content. We hypothesized that T1-mapping would have a higher diagnostic performance in detecting acute edema than dark-blood and bright-blood T2w-CMR. Methods We investigated 21 controls (55 ± 13 years) and 21 patients (61 ± 10 years) with Takotsubo cardiomyopathy or acute regional myocardial edema without infarction. CMR performed within 7 days included cine, T1-mapping using ShMOLLI, dark-blood T2-STIR, bright-blood ACUT2E and LGE imaging. We analyzed wall motion, myocardial T1 values and T2 signal intensity (SI) ratio relative to both skeletal muscle and remote myocardium. Results All patients had acute cardiac symptoms, increased Troponin I (0.15-36.80 ug/L) and acute wall motion abnormalities but no LGE. T1 was increased in patient segments with abnormal and normal wall motion compared to controls (1113 ± 94 ms, 1029 ± 59 ms and 944 ± 17 ms, respectively; p < 0.001). T2 SI ratio using STIR and ACUT2E was also increased in patient segments with abnormal and normal wall motion compared to controls (all p < 0.02). Receiver operator characteristics analysis showed that T1-mapping had a significantly larger area-under-the-curve (AUC = 0.94) compared to T2-weighted methods, whether the reference ROI was skeletal muscle or remote myocardium (AUC = 0.58-0.89; p < 0.03). A T1 value of greater than 990 ms most optimally differentiated segments affected by edema from normal segments at 1.5 T, with a sensitivity and specificity of 92 %. Conclusions Non-contrast T1-mapping using ShMOLLI is a novel method for objectively detecting myocardial edema with a high diagnostic performance. T1-mapping may serve as a complementary technique to T2-weighted imaging for assessing myocardial edema in ischemic and non-ischemic heart disease, such as quantifying area-at-risk and diagnosing myocarditis. PMID:22720998
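
    The area under the ROC curve compared above can be computed without an explicit curve via the Mann-Whitney U statistic: the AUC equals the probability that a randomly chosen positive (edematous) segment has a higher T1 value than a randomly chosen normal segment. A brute-force sketch, adequate for small samples:

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    # AUC = P(score_pos > score_neg), with ties counted as one half.
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

    Applying a fixed cutoff such as the 990 ms threshold quoted above corresponds to picking one operating point on this curve.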

  8. Comparison of cervical spine kinematics using a fluoroscopic model for adjacent segment degeneration. Invited submission from the Joint Section on Disorders of the Spine and Peripheral Nerves, March 2007.

    PubMed

    Cheng, Joseph S; Liu, Fei; Komistek, Richard D; Mahfouz, Mohamed R; Sharma, Adrija; Glaser, Diana

    2007-11-01

    In this cervical spine kinematics study, the authors evaluate the motions and forces in the normal, degenerative, and fused states to assess how alteration in the cervical motion segment affects adjacent segment degeneration and spondylosis. Fluoroscopic images obtained in 30 individuals (10 in each group, with disease at C5-6) undergoing flexion/extension motions were collected. Kinematic data were obtained from the fluoroscopic images and analyzed with an inverse dynamic mathematical model of the cervical spine that was developed for this analysis. During 20 degrees flexion to 15 degrees extension, average relative angles at the adjacent levels of C6-7 and C4-5 in the fused patients were 13.4 degrees and 8.8 degrees versus 3.7 degrees and 4.8 degrees in the healthy individuals. Differences at C3-4 averaged only about 1 degree. Maximum transverse forces in the fused spines were two times the skull weight at C6-7 and one times the skull weight at C4-5, compared with 0.2 times the skull weight and 0.3 times the skull weight in the healthy individuals. Vertical forces ranged from 1.6 to 2.6 times the skull weight at C6-7 and from 1.2 to 2.5 times the skull weight at C4-5 in the patients who had undergone fusion, and from 1.4 to 3.1 times the skull weight and from 0.9 to 3.3 times the skull weight, respectively, in the volunteers. Adjacent-segment degeneration may occur in patients with fusion due to increased motions and forces at both adjacent levels when compared with healthy individuals over a comparable flexion and extension range.

  9. Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation

    PubMed Central

    Wang, Hongzhi; Yushkevich, Paul A.

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and were among the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
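
    The baseline that joint label fusion improves on, spatially varying weighted voting, can be sketched as follows: each atlas casts a per-voxel vote for its transferred label, weighted by a local atlas-target similarity. The array layout and function name are assumptions for illustration; joint label fusion itself additionally models correlated errors between atlases and is not reproduced here.

```python
import numpy as np

def weighted_vote(atlas_labels, weights, n_labels):
    # Spatially varying weighted voting for multi-atlas label fusion.
    # atlas_labels: (n_atlases, n_voxels) int labels transferred per atlas.
    # weights:      (n_atlases, n_voxels) per-voxel similarity weights.
    votes = np.zeros((n_labels, atlas_labels.shape[1]))
    for lab in range(n_labels):
        votes[lab] = (weights * (atlas_labels == lab)).sum(axis=0)
    return votes.argmax(axis=0)    # consensus label per voxel
```

    With uniform weights this reduces to majority voting; the spatially varying weights let a locally well-registered atlas dominate where it matches the target best.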

  10. How Does Sequence Structure Affect the Judgment of Time? Exploring a Weighted Sum of Segments Model

    ERIC Educational Resources Information Center

    Matthews, William J.

    2013-01-01

    This paper examines the judgment of segmented temporal intervals, using short tone sequences as a convenient test case. In four experiments, we investigate how the relative lengths, arrangement, and pitches of the tones in a sequence affect judgments of sequence duration, and ask whether the data can be described by a simple weighted sum of…
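
    The abstract above is truncated, but the model named in the title has a simple generic form: the judged duration of a sequence is a weighted sum of its segment durations, with weights depending on serial position. The sketch below shows only that generic form; the specific weighting schemes tested in the paper are not reproduced here.

```python
def judged_duration(segment_durations, weights):
    # Weighted-sum-of-segments model of time judgment: each segment's
    # duration contributes in proportion to a serial-position weight.
    assert len(segment_durations) == len(weights)
    return sum(w * d for w, d in zip(weights, segment_durations))
```

    For example, weighting later segments more heavily predicts that sequences ending with long tones are judged longer than their rearrangements of equal total duration.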

  11. Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.

    PubMed

    Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos

    2008-03-01

    Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (ie, T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumor. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process over an implicit function, and the liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until it satisfied the stopping conditions (maximum iterations and rate of change). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. 
    This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), Research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and Research funding from Jinan City (No.201401221 and No.20120109)
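The global-correction step above solves a sparse shape representation (SSR) problem: express the current 3D shape as a sparse combination of dictionary shapes. The record states that the authors solve it with H-PSO; the minimal numpy sketch below instead uses plain ISTA (iterative soft thresholding) on a toy landmark dictionary, purely to make the SSR objective concrete. All names and sizes are illustrative.

```python
import numpy as np

def sparse_shape_fit(D, s, lam=0.01, n_iter=200):
    """Solve min_x 0.5*||D @ x - s||^2 + lam*||x||_1 with ISTA.

    D: dictionary whose columns are training shapes (landmark vectors);
    s: the shape to be corrected. (The paper solves its SSR equation
    with H-PSO; ISTA is used here purely as an illustrative stand-in.)
    """
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - s) / L       # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy dictionary: 3 candidate shapes, each a vector of 4 landmark coordinates
rng = np.random.default_rng(0)
D = rng.normal(size=(4, 3))
s = D @ np.array([1.5, 0.0, 0.0])           # target is sparse in the dictionary
x = sparse_shape_fit(D, s)
reconstruction = D @ x
```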

  13. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. 
    I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU-centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
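One of the motion models compared above, constant angular velocity, reduces for a single joint angle to a two-state linear Kalman filter. The sketch below is that reduction on synthetic data; the state layout, noise values and angle-only measurement model are illustrative stand-ins, not the paper's full 6-DoF free-segments formulation.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant angular velocity model
H = np.array([[1.0, 0.0]])              # we observe the angle only
Q = np.diag([1e-6, 1e-4])               # process noise (illustrative)
R = np.array([[1e-2]])                  # measurement noise (illustrative)

def kf_step(x, P, z):
    """One predict/update cycle; state x = [angle, angular_velocity]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a ramp: true angle grows at 1 rad/s, noisy angle measurements
rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2)
for k in range(500):
    true_angle = 1.0 * k * dt
    z = np.array([true_angle + 0.05 * rng.normal()])
    x, P = kf_step(x, P, z)
```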

  14. A general framework for multicharacter segmentation and its application in recognizing multilingual Asian documents

    NASA Astrophysics Data System (ADS)

    Wen, Di; Ding, Xiaoqing

    2003-12-01

    In this paper we propose a general framework for character segmentation in complex multilingual documents, an endeavor to combine the traditionally separate segmentation and recognition processes into a cooperative system. The framework contains three basic steps: Dissection, Local Optimization and Global Optimization, which are designed to fuse various properties of the segmentation hypotheses hierarchically into a composite evaluation that decides the final recognition results. Experimental results show that this framework is general enough to be applied to a variety of documents. Finally, a sample system based on this framework for recognizing Chinese, Japanese and Korean documents is described and its experimental performance is reported.
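The Dissection / Local Optimization / Global Optimization pipeline can be caricatured in a few lines: candidate segments are scored by a recognizer, and dynamic programming picks the segmentation with the best total score. The toy recognizer and its "lexicon" below are invented for illustration; the paper's composite evaluation is far richer.

```python
def recognize(segment):
    """Toy recognizer: known 'glyphs' score high, everything else low."""
    lexicon = {"ab": 0.9, "c": 0.8, "a": 0.4, "b": 0.3, "bc": 0.3}
    return lexicon.get(segment, 0.05)

def best_segmentation(cells, max_len=2):
    """Dynamic programming over cut points: maximize summed recognition score."""
    n = len(cells)
    score = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    score[0] = 0.0
    for i in range(1, n + 1):
        for l in range(1, min(max_len, i) + 1):   # try segment lengths ending at i
            s = score[i - l] + recognize(cells[i - l:i])
            if s > score[i]:
                score[i], back[i] = s, i - l
    # Backtrack the chosen segment boundaries
    segs, i = [], n
    while i > 0:
        segs.append(cells[back[i]:i])
        i = back[i]
    return list(reversed(segs)), score[n]

segs, total = best_segmentation("abc")   # "ab" + "c" beats "a"+"b"+"c" here
```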

  15. Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.

    PubMed

    Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C

    2002-06-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
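A stripped-down version of the simulated-annealing weight optimization can be sketched as follows: aperture shapes are held fixed (the real method also perturbs MLC leaf positions and uses a Monte Carlo dose engine) and annealing adjusts non-negative aperture weights so that the summed dose approaches a prescribed target. The aperture footprints, target dose and cooling schedule are all toy values.

```python
import numpy as np

rng = np.random.default_rng(2)
apertures = np.array([[1, 1, 0, 0],        # each row: dose footprint of one aperture
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]], dtype=float)
target = np.array([1.0, 2.0, 2.0, 1.0])    # prescribed dose per voxel

def cost(w):
    """Squared dose error for aperture weights w (monitor units)."""
    return float(np.sum((apertures.T @ w - target) ** 2))

w = np.zeros(3)
best_w, best_c = w.copy(), cost(w)
T = 1.0
for _ in range(5000):
    cand = np.clip(w + rng.normal(scale=0.1, size=3), 0.0, None)  # weights stay >= 0
    dc = cost(cand) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance
        w = cand
        if cost(w) < best_c:
            best_w, best_c = w.copy(), cost(w)
    T *= 0.999                                     # geometric cooling schedule
```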

  16. SU-F-T-387: A Novel Optimization Technique for Field in Field (FIF) Chestwall Radiation Therapy Using a Single Plan to Improve Delivery Safety and Treatment Planning Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabibian, A; Kim, A; Rose, J

    Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. The dosimetry was compared to the currently used optimization. Methods: The prior five patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, with 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. As a novel approach, the tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distributions simultaneously. The prescription was split into 14 fractions each for the bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatment. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was less than 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals, but using only the non-bolus calculation for FIF optimization. The non-bolus fields were then copied and bolus was applied. The same segments and monitor units were used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, was on average 51.8% for the plans that optimized the bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized to the non-bolus treatments (NB). Chestwall coverage at 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was on average 1.9% for SB and 0.8% for NB plans. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage in comparison to optimizing only to the non-bolus treatments.

  17. Prostate segmentation in MRI using fused T2-weighted and elastography images

    NASA Astrophysics Data System (ADS)

    Nir, Guy; Sahebjavaher, Ramin S.; Baghani, Ali; Sinkus, Ralph; Salcudean, Septimiu E.

    2014-03-01

    Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve the speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8 mm, with a maximum distance of 5.6 mm. The relative area error is 7.6%, and the duration of the segmentation process is 2 s per slice.

  18. Efficient global fiber tracking on multi-dimensional diffusion direction maps

    NASA Astrophysics Data System (ADS)

    Klein, Jan; Köhler, Benjamin; Hahn, Horst K.

    2012-02-01

    Global fiber tracking algorithms have recently been proposed which were able to compute results of unprecedented quality. They account for avoiding accumulation errors by a global optimization process at the cost of a high computation time of several hours or even days. In this paper, we introduce a novel global fiber tracking algorithm which, for the first time, globally optimizes the underlying diffusion direction map obtained from DTI or HARDI data, instead of single fiber segments. As a consequence, the number of iterations in the optimization process can drastically be reduced by about three orders of magnitude. Furthermore, in contrast to all previous algorithms, the density of the tracked fibers can be adjusted after the optimization within a few seconds. We evaluated our method for diffusion-weighted images obtained from software phantoms, healthy volunteers, and tumor patients. We show that difficult fiber bundles, e.g., the visual pathways or tracts for different motor functions can be determined and separated in an excellent quality. Furthermore, crossing and kissing bundles are correctly resolved. On current standard hardware, a dense fiber tracking result of a whole brain can be determined in less than half an hour which is a strong improvement compared to previous work.

  19. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced on the objective function by the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.
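The discrete-labeling idea can be made concrete on a 1-D chain of control points, where the unary-plus-smoothness energy the paper optimizes with linear programming on a 3-D grid is solved exactly by dynamic programming. The label set, unary costs and smoothness weight below are invented for illustration.

```python
import numpy as np

labels = np.array([-1.0, 0.0, 1.0])        # candidate displacements per control point
# unary[i, k]: cost of assigning label k to control point i (toy image-match values)
unary = np.array([[0.1, 0.8, 0.9],
                  [0.9, 0.2, 0.8],
                  [0.9, 0.7, 0.1]])
lam = 0.3                                  # smoothness weight

n, m = unary.shape
pair = lam * np.abs(labels[:, None] - labels[None, :])  # pairwise cost table
cost = unary[0].copy()
back = np.zeros((n, m), dtype=int)
for i in range(1, n):
    total = cost[:, None] + pair           # cost of (previous label, current label)
    back[i] = np.argmin(total, axis=0)     # best predecessor for each current label
    cost = total.min(axis=0) + unary[i]

# Backtrack the minimum-energy labeling
best = [int(np.argmin(cost))]
for i in range(n - 1, 0, -1):
    best.append(int(back[i, best[-1]]))
best.reverse()
displacements = labels[best]               # smoothness-aware displacement field
```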

  20. Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2001-01-01

    Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
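The probability argument can be reproduced with a few lines of binomial arithmetic: if each of n independent segments survives with probability p and the mission tolerates losing a few of them, segmenting wins decisively over a monolithic panel of the same total area, which any single puncture disables. The numbers below are illustrative, not from the paper.

```python
from math import comb

def radiator_survival(n, k, p):
    """P(at least k of n independent segments survive the mission)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Monolithic panel with the same area: a puncture anywhere is fatal, so it must
# survive 20 "segment-sized" exposures in a row.
monolithic = 0.95 ** 20
# Twenty heat-pipe segments, sized with margin so the mission tolerates losing two:
segmented = radiator_survival(20, 18, 0.95)
```

With these illustrative numbers the segmented design survives with probability about 0.92 versus about 0.36 for the monolithic panel, which is the qualitative redundancy effect the analysis quantifies.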

  1. Software for Alignment of Segments of a Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Hall, Drew P.; Howard, Richard T.; Ly, William C.; Rakoczy, John M.; Weir, John M.

    2006-01-01

    The Segment Alignment Maintenance System (SAMS) software is designed to maintain the overall focus and figure of the large segmented primary mirror of the Hobby-Eberly Telescope. This software reads measurements made by sensors attached to the segments of the primary mirror and from these measurements computes optimal control values to send to actuators that move the mirror segments.
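The read-sensors, compute-commands loop described above is, at its core, a linear least-squares problem: a model matrix maps actuator moves to sensor readings, and its pseudo-inverse yields the correction that best cancels the measured misalignment. The 3-segment difference-sensor matrix below is a toy stand-in for the real SAMS sensor/actuator geometry.

```python
import numpy as np

# Each sensor reads a difference between neighboring segment positions
# (so a common piston of all segments is unobservable, as in real edge sensing).
A = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0]])

def control_step(readings):
    """Least-squares actuator correction that cancels the sensor readings."""
    return -np.linalg.pinv(A) @ readings

true_misalign = np.array([0.4, -0.1, 0.2])   # piston error per segment (toy values)
readings = A @ true_misalign
u = control_step(readings)
corrected = A @ (true_misalign + u)          # residual sensor readings after the move
```

After the commanded move the residual readings vanish; only the unobservable common piston of the three segments remains, which the difference sensors cannot see anyway.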

  2. Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter.

    PubMed

    Dupont, Sara M; De Leener, Benjamin; Taso, Manuel; Le Troter, Arnaud; Nadeau, Sylvie; Stikov, Nikola; Callot, Virginie; Cohen-Adad, Julien

    2017-04-15

    The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, and helping with diagnosis/prognosis and drug development. To date, white/gray matter segmentation has mostly been done manually, which is time consuming, induces a bias related to the rater and prevents large-scale multi-center studies. Recently, a few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image; however, the registration usually does not take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of the segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion-weighted images (exhibiting inverted white-to-gray matter contrast compared to T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images were in close correspondence with the manual segmentation (Dice coefficient in the white/gray matter of 0.91/0.71 respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted images) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficients in gray matter: p=9.5×10⁻⁶). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software for processing spinal cord multi-parametric MRI data. Copyright © 2017 Elsevier Inc. All rights reserved.
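The Dice coefficients reported above are computed as twice the overlap divided by the summed sizes of the two masks; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 2x4 "slices": automatic vs. manual segmentation
auto   = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
manual = np.array([[0, 1, 1, 1],
                   [0, 0, 1, 0]])
# overlap = 3 voxels, sizes 4 + 4  ->  Dice = 2*3/8 = 0.75
```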

  3. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  4. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. Obtaining image objects by multi-scale image segmentation is very important for carrying out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the control over local-macroscopic characteristics can be well exercised during the segmentation process based on different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and enables priority control of the salient objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  5. Cerebrovascular plaque segmentation using object class uncertainty snake in MR images

    NASA Astrophysics Data System (ADS)

    Das, Bipul; Saha, Punam K.; Wolf, Ronald; Song, Hee Kwon; Wright, Alexander C.; Wehrli, Felix W.

    2005-04-01

    Atherosclerotic cerebrovascular disease leads to the formation of lipid-laden plaques that can form emboli when ruptured, causing blockage of cerebral vessels. The clinical manifestation of this event sequence is stroke, a leading cause of disability and death. In vivo MR imaging provides detailed images of the vascular architecture of the carotid artery, making it suitable for analysis of morphological features. Assessing the status of the carotid arteries that supply blood to the brain is of primary interest to such investigations. Reproducible quantification of carotid artery dimensions in MR images is essential for plaque analysis. Manual segmentation, presently the only method in use, is time consuming and sensitive to inter- and intra-observer variability. This paper presents a deformable model for lumen and vessel wall segmentation of the carotid artery from MR images. The major challenges of carotid artery segmentation are (a) low signal-to-noise ratio, (b) background intensity inhomogeneity and (c) indistinct inner and/or outer vessel walls. We propose a new, effective object-class uncertainty based deformable model with additional features tailored toward this specific application. Object-class uncertainty optimally utilizes the MR intensity characteristics of various anatomic entities, enabling the snake to avert leakage through fuzzy boundaries. To strengthen the deformable model for this application, further properties are attributed to it in the form of (1) fully arc-based deformation using a Gaussian model to maximally exploit vessel wall smoothness, (2) construction of a forbidden region for outer-wall segmentation to reduce interference from prominent lumen features and (3) arc-based landmarks for efficient user interaction. The algorithm has been tested on T1- and PD-weighted images. Measures of lumen area and vessel wall area are computed from the segmented data of 10 patient MR images and their accuracy and reproducibility are examined. These results correspond exceptionally well with manual segmentation completed by radiology experts. Reproducibility of the proposed method is estimated in both intra- and inter-operator studies.

  6. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
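The "original discriminative distance vector" above combines a geometry term and an orientation term between diffusion tensors. The sketch below is one plausible instantiation (eigenvalue difference plus a principal-direction angle term); the exact forms, and the kernel metric learned on top of them, are not reproduced from the paper.

```python
import numpy as np

def tensor_distance(T1, T2, w_geo=1.0, w_ori=1.0):
    """Toy combined distance between two symmetric diffusion tensors."""
    l1, v1 = np.linalg.eigh(T1)            # eigenvalues ascending, eigenvectors in columns
    l2, v2 = np.linalg.eigh(T2)
    d_geo = np.linalg.norm(l1 - l2)        # "geometry": eigenvalue difference
    e1, e2 = v1[:, -1], v2[:, -1]          # principal diffusion directions
    d_ori = 1.0 - abs(e1 @ e2)             # sign-invariant: 0 when parallel
    return w_geo * d_geo + w_ori * d_ori

aniso_x = np.diag([3.0, 1.0, 1.0])         # strong diffusion along x
aniso_y = np.diag([1.0, 3.0, 1.0])         # same shape, rotated to y
# Same eigenvalues (d_geo = 0) but orthogonal principal directions (d_ori = 1)
```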

  7. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  8. Topology optimization for design of segmented permanent magnet arrays with ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Lee, Jaewook; Yoon, Minho; Nomura, Tsuyoshi; Dede, Ercan M.

    2018-03-01

    This paper presents multi-material topology optimization for the co-design of permanent magnet segments and iron material. Specifically, a co-design methodology is proposed to find an optimal border of the permanent magnet segments, a pattern of magnetization directions, and an iron shape. A material interpolation scheme is proposed for material property representation among air, permanent magnet, and iron materials. In this scheme, the permanent magnet strength and permeability are controlled by density design variables, and the permanent magnet magnetization directions are controlled by angle design variables. In addition, a scheme to penalize intermediate magnetization directions is proposed to achieve segmented permanent magnet arrays with discrete magnetization directions. In this scheme, the permanent magnet strength is controlled depending on the magnetization direction, and consequently the final design converges into permanent magnet segments having the target discrete directions. To validate the effectiveness of the proposed approach, three design examples are provided. The examples include the design of a dipole Halbach cylinder, a magnetic system with an arbitrarily shaped cavity, and a multi-objective problem resembling a magnetic refrigeration device.

  9. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    PubMed

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions at the voxel level with a partial area under the receiver operating characteristic curve of 0.59% (95% CI: [0.50%, 0.67%]) for clinically relevant false positive rates of 1% and below. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist in 66% (95% CI: [52%, 78%]) and the radiologist in 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
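The voxel-level model described above is ordinary logistic regression: each voxel's lesion probability is the sigmoid of a weighted sum of its intensities across modalities. The coefficient values below are hypothetical; in OASIS they are obtained by model fitting on training data.

```python
import numpy as np

def lesion_probability(voxels, coef, intercept):
    """Logistic-regression lesion probabilities.

    voxels: (n_voxels, n_modalities) matrix of normalized intensities.
    """
    z = voxels @ coef + intercept
    return 1.0 / (1.0 + np.exp(-z))        # logistic (sigmoid) link

# Columns: T1, T2, FLAIR, PD normalized intensities; rows: voxels
voxels = np.array([[ 0.1,  2.5,  3.0,  1.8],   # bright on T2/FLAIR: lesion-like
                   [ 0.0,  0.2,  0.1,  0.1]])  # normal-appearing tissue
coef = np.array([-0.5, 1.0, 1.5, 0.5])          # hypothetical fitted weights
p = lesion_probability(voxels, coef, intercept=-4.0)
```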

  10. Multi-segment detector array for hybrid reflection-mode ultrasound and optoacoustic tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Merčep, Elena; Burton, Neal C.; Deán-Ben, Xosé Luís.; Razansky, Daniel

    2017-02-01

    The complementary contrast of the optoacoustic (OA) and pulse-echo ultrasound (US) modalities makes the combined usage of these imaging technologies highly advantageous. Due to the different physical contrast mechanisms development of a detector array optimally suited for both modalities is one of the challenges to efficient implementation of a single OA-US imaging device. We demonstrate imaging performance of the first hybrid detector array whose novel design, incorporating array segments of linear and concave geometry, optimally supports image acquisition in both reflection-mode ultrasonography and optoacoustic tomography modes. Hybrid detector array has a total number of 256 elements and three segments of different geometry and variable pitch size: a central 128-element linear segment with pitch of 0.25mm, ideally suited for pulse-echo US imaging, and two external 64-elements segments with concave geometry and 0.6mm pitch optimized for OA image acquisition. Interleaved OA and US image acquisition with up to 25 fps is facilitated through a custom-made multiplexer unit. Spatial resolution of the transducer was characterized in numerical simulations and validated in phantom experiments and comprises 230 and 300 μm in the respective OA and US imaging modes. Imaging performance of the multi-segment detector array was experimentally shown in a series of imaging sessions with healthy volunteers. Employing mixed array geometries allows at the same time achieving excellent OA contrast with a large field of view, and US contrast for complementary structural features with reduced side-lobes and improved resolution. The newly designed hybrid detector array that comprises segments of linear and concave geometries optimally fulfills requirements for efficient US and OA imaging and may expand the applicability of the developed hybrid OPUS imaging technology and accelerate its clinical translation.

  11. Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.

    PubMed

    Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong

    2011-01-01

    Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution of PET and the low contrast of CT images. In this paper, we propose a general framework that uses both PET and CT images simultaneously for tumor segmentation. Our method exploits the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularization term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumors simultaneously using both PET and CT, and is able to concurrently segment the tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum-flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding a segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp. 16%) improvement compared to the graph cuts method using solely the PET (resp. CT) images.
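
    The coupled-MRF idea above can be illustrated on toy data. This is a minimal sketch, assuming hypothetical 1-D "PET" and "CT" intensity profiles and hand-picked unary, smoothness, and coupling penalties (none of these values come from the paper); it links the two binary labelings through inter-modality edges and solves both at once with a single max-flow, here via SciPy's `maximum_flow`:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Toy 1-D "PET" and "CT" intensity profiles (hypothetical data).
pet = np.array([1, 2, 8, 9, 8, 2])   # high values = tumor-like
ct  = np.array([2, 3, 7, 7, 3, 2])

n = len(pet)
S, T = 2 * n, 2 * n + 1                       # source and sink node ids
cap = np.zeros((2 * n + 2, 2 * n + 2), dtype=np.int32)

def unary(img, offset):
    # s->p is cut if p is background: cost of background = intensity.
    # p->t is cut if p is foreground: cost of foreground = 10 - intensity.
    for i, v in enumerate(img):
        cap[S, offset + i] = v
        cap[offset + i, T] = 10 - v

unary(pet, 0)
unary(ct, n)

lam_smooth, lam_couple = 2, 3
for i in range(n - 1):                        # smoothness within each modality
    for off in (0, n):
        cap[off + i, off + i + 1] = lam_smooth
        cap[off + i + 1, off + i] = lam_smooth
for i in range(n):                            # penalize PET/CT label disagreement
    cap[i, n + i] = lam_couple
    cap[n + i, i] = lam_couple

res = maximum_flow(csr_matrix(cap), S, T)

# Recover the min cut: nodes reachable from the source in the residual
# graph are labeled foreground (tumor).
residual = cap - res.flow.toarray()
reach, stack = {S}, [S]
while stack:
    u = stack.pop()
    for v in np.nonzero(residual[u] > 0)[0]:
        if v not in reach:
            reach.add(v)
            stack.append(int(v))

pet_label = np.array([int(i in reach) for i in range(n)])
ct_label  = np.array([int(n + i in reach) for i in range(n)])
print(pet_label, ct_label)
```

    The coupling edges make a PET/CT disagreement cost `lam_couple`, which is exactly the regularization term of the co-segmentation model in miniature.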

  12. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    NASA Astrophysics Data System (ADS)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

    In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task due to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to poor segmentation results. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
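
    The PSO fitting step can be sketched as follows. The quadratic fitness below is a toy stand-in for the model-to-image mismatch, and all swarm parameters are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Hypothetical stand-in for the shape-model-to-image mismatch;
    # minimized at x = (1.0, -2.0).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

n_particles, n_iters, dim = 30, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Pull each particle toward its personal best and the global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)   # converges near (1.0, -2.0)
```

    In the paper's setting the particle position would hold the KPCA shape coefficients and pose parameters, and the fitness would score the fit against the pre-segmented image.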

  13. Optimal Design of General Stiffened Composite Circular Cylinders for Global Buckling with Strength Constraints

    NASA Technical Reports Server (NTRS)

    Jaunky, N.; Ambur, D. R.; Knight, N. F., Jr.

    1998-01-01

    A design strategy for the optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints was developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory was used for the global analysis. Local buckling of skin segments was assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments was also assessed. Constraints on the axial membrane strain in the skin and stiffener segments were imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study were the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process was adapted to identify the best-suited stiffening configurations and stiffener spacings for a grid-stiffened composite cylinder with the length and radius of the cylinder, the design in-plane loads and material properties as inputs. The effect of having axial membrane strain constraints in the skin and stiffener segments in the optimization process is also studied for selected stiffening configurations.
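
    A discrete genetic-algorithm optimizer of this kind can be sketched as below. The design variables, mass objective, and buckling constraint here are toy stand-ins for the paper's stiffener spacings, dimensions, stacking sequences, and buckling analyses; infeasible designs are handled with a simple constraint penalty:

```python
import random

random.seed(1)

# Hypothetical discrete design space: three variables, each with 8 levels
# (stand-ins for stiffener spacing, height, and stacking-sequence index).
CHOICES = [range(8), range(8), range(8)]

def weight(d):            # toy mass objective to minimize
    return 10 - d[0] + 0.5 * d[1] + 0.3 * d[2]

def buckling_ok(d):       # toy global-buckling constraint
    return d[0] + 2 * d[1] + d[2] >= 10

def fitness(d):           # penalize infeasible designs heavily
    return weight(d) + (0 if buckling_ok(d) else 100)

def crossover(a, b):      # single-point crossover of two designs
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(d):            # re-draw one gene from its discrete choices
    i = random.randrange(len(d))
    d = list(d)
    d[i] = random.choice(list(CHOICES[i]))
    return d

pop = [[random.choice(list(c)) for c in CHOICES] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]                       # elitism keeps the best designs
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

    The discrete encoding is what makes a GA natural here: stacking sequences and stiffening configurations have no useful gradient, so gradient-based optimizers do not apply directly.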

  14. Optimal Design of General Stiffened Composite Circular Cylinders for Global Buckling with Strength Constraints

    NASA Technical Reports Server (NTRS)

    Jaunky, Navin; Knight, Norman F., Jr.; Ambur, Damodar R.

    1998-01-01

    A design strategy for the optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints is developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory is used for the global analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments is also assessed. Constraints on the axial membrane strain in the skin and stiffener segments are imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study are the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence, and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process is adapted to identify the best-suited stiffening configurations and stiffener spacings for a grid-stiffened composite cylinder with the length and radius of the cylinder, the design in-plane loads, and material properties as inputs. The effect of having axial membrane strain constraints in the skin and stiffener segments in the optimization process is also studied for selected stiffening configurations.

  15. Renal cortex segmentation using optimal surface search with novel graph construction.

    PubMed

    Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie

    2011-01-01

    In this paper, we propose a novel approach to solve the renal cortex segmentation problem, which has rarely been studied. In this study, the renal cortex segmentation problem is handled as a multiple-surfaces extraction problem, which is solved using the optimal surface search method. We propose a novel graph construction scheme in the optimal surface search to better accommodate multiple surfaces. Different surface sub-graphs are constructed according to their properties, and inter-surface relationships are also modeled in the graph. The proposed method was tested on 17 clinical CT datasets. The true positive volume fraction (TPVF) and false positive volume fraction (FPVF) are 74.10% and 0.08%, respectively. The experimental results demonstrate the effectiveness of the proposed method.

  16. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution

    NASA Astrophysics Data System (ADS)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-01

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface-evolution manner. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method achieves an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65% and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.

  17. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for an expensive re-initialization procedure. The level set method is applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and abnormalities; it favors the patient with speedier and more decisive disease control with fewer side effects. The geometrical shape, the tumor's size and abnormal tissue growth can be calculated from the segmentation of the image. Automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation of the image is correlated based on texture features, making it automatic and more effective. There is no initialization of parameters, and it works like an intelligent system: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  18. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and degrees of robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
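
    Similarity metrics of the kind used for such validation can be computed directly from binary masks. A minimal sketch with toy automatic and manual masks (the masks are illustrative, not patient data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy "automatic" vs. "manual" segmentations: two overlapping squares.
manual = np.zeros((8, 8), dtype=int); manual[2:6, 2:6] = 1
auto   = np.zeros((8, 8), dtype=int); auto[3:7, 2:6]   = 1

print(dice(auto, manual))      # 0.75
print(jaccard(auto, manual))   # 0.6
```

    Sweeping an algorithm's parameters and tracking such metrics against the manual reference is exactly the per-case optimization the paper describes.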

  19. Segmented ceramic liner for induction furnaces

    DOEpatents

    Gorin, Andrew H.; Holcombe, Cressie E.

    1994-01-01

    A non-fibrous ceramic liner for induction furnaces is provided by vertically stackable ring-shaped liner segments made of ceramic material in a light-weight cellular form. The liner segments can each be fabricated as a single unit or from a plurality of arcuate segments joined together by an interlocking mechanism. Also, the liner segments can be formed of a single ceramic material or can be constructed of multiple concentric layers with the layers being of different ceramic materials and/or cellular forms. Thermomechanically damaged liner segments are selectively replaceable in the furnace.

  20. Segmented ceramic liner for induction furnaces

    DOEpatents

    Gorin, A.H.; Holcombe, C.E.

    1994-07-26

    A non-fibrous ceramic liner for induction furnaces is provided by vertically stackable ring-shaped liner segments made of ceramic material in a light-weight cellular form. The liner segments can each be fabricated as a single unit or from a plurality of arcuate segments joined together by an interlocking mechanism. Also, the liner segments can be formed of a single ceramic material or can be constructed of multiple concentric layers with the layers being of different ceramic materials and/or cellular forms. Thermomechanically damaged liner segments are selectively replaceable in the furnace. 5 figs.

  1. Atlas-guided generation of pseudo-CT images for MRI-only and hybrid PET-MRI-guided radiotherapy treatment planning.

    PubMed

    Arabi, Hossein; Koutsouvelis, Nikolaos; Rouzaud, Michel; Miralbell, Raymond; Zaidi, Habib

    2016-09-07

    Magnetic resonance imaging (MRI)-guided attenuation correction (AC) of positron emission tomography (PET) data and/or radiation therapy (RT) treatment planning is challenged by the lack of a direct link between MRI voxel intensities and electron density. Therefore, even if this is not a trivial task, a pseudo-computed tomography (CT) image must be predicted from MRI alone. In this work, we propose a two-step (segmentation and fusion) atlas-based algorithm focusing on bone tissue identification to create a pseudo-CT image from conventional MRI sequences and evaluate its performance against the conventional MRI segmentation technique and a recently proposed multi-atlas approach. The clinical studies consisted of pelvic CT, PET and MRI scans of 12 patients with loco-regionally advanced rectal disease. In the first step, bone segmentation of the target image is optimized through local weighted atlas voting. The obtained bone map is then used to assess the quality of deformed atlases to perform voxel-wise weighted atlas fusion. To evaluate the performance of the method, a leave-one-out cross-validation (LOOCV) scheme was devised to find optimal parameters for the model. Geometric evaluation of the produced pseudo-CT images and quantitative analysis of the accuracy of PET AC were performed. Moreover, a dosimetric evaluation of volumetric modulated arc therapy photon treatment plans calculated using the different pseudo-CT images was carried out and compared to those produced using CT images serving as references. The pseudo-CT images produced using the proposed method exhibit bone identification accuracy of 0.89 based on the Dice similarity metric compared to 0.75 achieved by the other atlas-based method. The superior bone extraction resulted in a mean standard uptake value bias of  -1.5  ±  5.0% (mean  ±  SD) in bony structures compared to  -19.9  ±  11.8% and  -8.1  ±  8.2% achieved by MRI segmentation-based (water-only) and atlas-guided AC. 
    Dosimetric evaluation using dose volume histograms and the average difference between minimum/maximum absorbed doses revealed a mean error of less than 1% for both the target volumes and organs at risk. Two-dimensional (2D) gamma analysis of the isocenter dose distributions at the 1%/1 mm criterion revealed pass rates of 91.40 ± 7.56%, 96.00 ± 4.11% and 97.67 ± 3.6% for the MRI segmentation, atlas-guided and proposed methods, respectively. The proposed method generates accurate pseudo-CT images from conventional Dixon MRI sequences with improved bone extraction accuracy. The approach is promising for potential use in PET AC and MRI-only or hybrid PET/MRI-guided RT treatment planning.
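
    The voxel-wise weighted atlas fusion step can be sketched as a weighted vote over deformed atlas label maps. The label maps and per-voxel weights below are illustrative placeholders, not the output of any registration or similarity measure:

```python
import numpy as np

# Three hypothetical deformed atlas label maps (1 = bone, 0 = background).
atlases = np.array([
    [[1, 1, 0, 0],
     [1, 1, 0, 0]],
    [[1, 0, 0, 0],
     [1, 1, 1, 0]],
    [[0, 1, 0, 0],
     [1, 1, 0, 1]],
])

# Per-atlas weights (e.g. from local similarity to the target image);
# here constant per atlas for simplicity.
weights = np.array([
    np.full((2, 4), 0.9),
    np.full((2, 4), 0.5),
    np.full((2, 4), 0.2),
])

# Voxel-wise weighted vote: fused label is 1 where the weighted votes
# for "bone" exceed half the total weight at that voxel.
vote = (weights * atlases).sum(axis=0)
total = weights.sum(axis=0)
fused = (vote > 0.5 * total).astype(int)
print(fused)   # [[1 1 0 0] [1 1 0 0]]
```

    In the paper, the weights are derived from local agreement between each deformed atlas and the target MRI, so more faithfully registered atlases dominate the fused bone map.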

  2. Atlas-guided generation of pseudo-CT images for MRI-only and hybrid PET-MRI-guided radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Arabi, Hossein; Koutsouvelis, Nikolaos; Rouzaud, Michel; Miralbell, Raymond; Zaidi, Habib

    2016-09-01

    Magnetic resonance imaging (MRI)-guided attenuation correction (AC) of positron emission tomography (PET) data and/or radiation therapy (RT) treatment planning is challenged by the lack of a direct link between MRI voxel intensities and electron density. Therefore, even if this is not a trivial task, a pseudo-computed tomography (CT) image must be predicted from MRI alone. In this work, we propose a two-step (segmentation and fusion) atlas-based algorithm focusing on bone tissue identification to create a pseudo-CT image from conventional MRI sequences and evaluate its performance against the conventional MRI segmentation technique and a recently proposed multi-atlas approach. The clinical studies consisted of pelvic CT, PET and MRI scans of 12 patients with loco-regionally advanced rectal disease. In the first step, bone segmentation of the target image is optimized through local weighted atlas voting. The obtained bone map is then used to assess the quality of deformed atlases to perform voxel-wise weighted atlas fusion. To evaluate the performance of the method, a leave-one-out cross-validation (LOOCV) scheme was devised to find optimal parameters for the model. Geometric evaluation of the produced pseudo-CT images and quantitative analysis of the accuracy of PET AC were performed. Moreover, a dosimetric evaluation of volumetric modulated arc therapy photon treatment plans calculated using the different pseudo-CT images was carried out and compared to those produced using CT images serving as references. The pseudo-CT images produced using the proposed method exhibit bone identification accuracy of 0.89 based on the Dice similarity metric compared to 0.75 achieved by the other atlas-based method. The superior bone extraction resulted in a mean standard uptake value bias of  -1.5  ±  5.0% (mean  ±  SD) in bony structures compared to  -19.9  ±  11.8% and  -8.1  ±  8.2% achieved by MRI segmentation-based (water-only) and atlas-guided AC. 
    Dosimetric evaluation using dose volume histograms and the average difference between minimum/maximum absorbed doses revealed a mean error of less than 1% for both the target volumes and organs at risk. Two-dimensional (2D) gamma analysis of the isocenter dose distributions at the 1%/1 mm criterion revealed pass rates of 91.40 ± 7.56%, 96.00 ± 4.11% and 97.67 ± 3.6% for the MRI segmentation, atlas-guided and proposed methods, respectively. The proposed method generates accurate pseudo-CT images from conventional Dixon MRI sequences with improved bone extraction accuracy. The approach is promising for potential use in PET AC and MRI-only or hybrid PET/MRI-guided RT treatment planning.

  3. SU-F-J-105: Towards a Novel Treatment Planning Pipeline Delivering Pareto- Optimal Plans While Enabling Inter- and Intrafraction Plan Adaptation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontaxis, C; Bol, G; Lagendijk, J

    2016-06-15

    Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation.
In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan adaptation. This research is financially supported by Elekta AB, Stockholm, Sweden.
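
    A segment weight optimization of this general kind can be posed as a nonnegative least-squares fit of segment weights to the prescribed dose. The sketch below assumes a hypothetical dose-influence matrix (dose per unit weight that each deliverable segment deposits in each voxel); the abstract does not specify the actual SWO solver:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(42)

# Hypothetical dose-influence matrix: 30 voxels x 5 deliverable segments.
D = rng.uniform(0, 1, (30, 5))
w_true = np.array([2.0, 0.0, 1.5, 3.0, 0.5])   # "ideal" segment weights
d_presc = D @ w_true                           # prescribed dose per voxel

# Find nonnegative weights minimizing ||D w - d_presc||_2.
w_opt, resid = nnls(D, d_presc)
print(np.round(w_opt, 3))   # recovers w_true; residual is ~0
```

    Nonnegativity matters physically: a segment cannot deliver negative monitor units, which is why plain least squares is not enough here.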

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasquier, David; Lacornerie, Thomas; Vermandel, Maximilien

    Purpose: Target-volume and organ-at-risk delineation is a time-consuming task in radiotherapy planning. The development of automated segmentation tools remains problematic, because of pelvic organ shape variability. We evaluate a three-dimensional (3D), deformable-model approach and a seeded region-growing algorithm for automatic delineation of the prostate and organs-at-risk on magnetic resonance images. Methods and Materials: Manual and automatic delineation were compared in 24 patients using a sagittal T2-weighted (T2-w) turbo spin echo (TSE) sequence and an axial T1-weighted (T1-w) 3D fast-field echo (FFE) or TSE sequence. For automatic prostate delineation, an organ model-based method was used. Prostates without seminal vesicles were delineated as the clinical target volume (CTV). For automatic bladder and rectum delineation, a seeded region-growing method was used. Manual contouring was considered the reference method. The following parameters were measured: volume ratio (Vr) (automatic/manual), volume overlap (Vo) (ratio of the volume of intersection to the volume of union; optimal value = 1), and correctly delineated volume (Vc) (percent ratio of the volume of intersection to the manually defined volume; optimal value 100). Results: For the CTV, the Vr, Vo, and Vc were 1.13 (±0.1 SD), 0.78 (±0.05 SD), and 94.75 (±3.3 SD), respectively. For the rectum, the Vr, Vo, and Vc were 0.97 (±0.1 SD), 0.78 (±0.06 SD), and 86.52 (±5 SD), respectively. For the bladder, the Vr, Vo, and Vc were 0.95 (±0.03 SD), 0.88 (±0.03 SD), and 91.29 (±3.1 SD), respectively. Conclusions: Our results show that the organ-model method is robust, and results in reproducible prostate segmentation with minor interactive corrections. For automatic bladder and rectum delineation, magnetic resonance imaging soft-tissue contrast enables the use of region-growing methods.

  5. Response to selection, heritability and genetic correlations between body weight and body size in Pacific white shrimp, Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Andriantahina, Farafidy; Liu, Xiaolin; Huang, Hao; Xiang, Jianhai

    2012-03-01

    To quantify the response to selection, heritability and genetic correlations between weight and size of Litopenaeus vannamei, the body weight (BW), total length (TL), body length (BL), first abdominal segment depth (FASD), third abdominal segment depth (TASD), first abdominal segment width (FASW), and partial carapace length (PCL) of 5-month-old parents and of offspring were measured, with seven body measurements taken on offspring produced by a nested mating design. Seventeen half-sib families and 42 full-sib families of L. vannamei were produced using artificial fertilization from 2-4 dams per sire, and measured at around five months post-metamorphosis. The results show that heritabilities among various traits were high: 0.515±0.030 for body weight and 0.394±0.030 for total length. After one generation of selection, the selection response was 10.70% for offspring growth. In the 5th month, the realized heritability for weight was 0.296 for the offspring generation. Genetic correlations between body weight and body size were highly variable. The results indicate that external morphological parameters can be applied during breeder selection to enhance growth without sacrificing animals to determine body size and breeding ability, and that selective breeding can be improved significantly, simultaneously with increased production.
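
    A realized heritability of the kind reported above is the ratio of the selection response to the selection differential. A toy calculation (the numbers are illustrative, not the study's data):

```python
# Realized heritability as response over selection differential, h2 = R / S.
# All values are hypothetical mean body weights in grams.
pop_mean_parent_gen = 20.0     # mean of the whole parental generation
selected_mean       = 24.0     # mean of the selected breeders
offspring_mean      = 21.2     # mean of the offspring generation

S = selected_mean - pop_mean_parent_gen     # selection differential
R = offspring_mean - pop_mean_parent_gen    # response to selection
h2 = R / S
print(h2)   # ≈ 0.3
```

    With the study's realized heritability of 0.296 for weight, roughly 30% of the selection differential applied to the parents reappeared as gain in the offspring mean.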

  6. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proven significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
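
    The interleaved grouping of phase segments can be sketched as follows: SLM segments are assigned to groups so that neighbouring segments fall in different groups, giving each GA run a sparse, spread-out subset to optimize. The grid size, group count, and diagonal interleave rule are illustrative choices, not the paper's exact partition:

```python
import numpy as np

# Partition an 8x8 grid of SLM phase segments into 4 interleaved groups.
n_side, n_groups = 8, 4
idx = np.arange(n_side * n_side).reshape(n_side, n_side)
# Diagonal interleave: group = (row + col) mod n_groups, so horizontally
# and vertically adjacent segments always land in different groups.
group = (idx // n_side + idx % n_side) % n_groups

for g in range(n_groups):
    members = np.flatnonzero((group == g).ravel())
    # ... run the GA over only these segments, keeping the rest frozen ...
    print(g, len(members))   # 16 segments per group
```

    Optimizing each sparse group in turn, then superposing all group phases on the SLM, is what lets ISC keep making progress after an all-segment GA would have plateaued.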

  7. Mobile telephony through LEO satellites: To OBP or not

    NASA Technical Reports Server (NTRS)

    Monte, Paul A.; Louie, Ming; Wiedeman, R.

    1991-01-01

    GLOBALSTAR is a satellite-based mobile communications system that is interoperable with the current and future Public Land Mobile Network (PLMN) and Public Switched Telephone Network (PSTN). The selection of the transponder type, bent-pipe, or onboard processing (OBP), for GLOBALSTAR is based on many criteria, each of which is essential to the commercial and technological feasibility of GLOBALSTAR. The trade study that was done to determine the pros and cons of a bent-pipe transponder or an onboard processing transponder is described. The design of GLOBALSTAR's telecommunications system is a multi-variable cost optimization between the cost and complexity of individual satellites, the number of satellites required to provide coverage to the service areas, the cost of launching the satellites into their selected orbits, the ground segment cost, user equipment cost, satellite voice channel capacity, and other issues. Emphasis is on the cost and complexity of the individual satellites, specifically the transponder type and the impact of the transponder type on satellite and ground segment cost, satellite power and weight, and satellite voice channel capacity.

  8. Mobile telephony through LEO satellites: To OBP or not

    NASA Astrophysics Data System (ADS)

    Monte, Paul A.; Louie, Ming; Wiedeman, R.

    1991-11-01

    GLOBALSTAR is a satellite-based mobile communications system that is interoperable with the current and future Public Land Mobile Network (PLMN) and Public Switched Telephone Network (PSTN). The selection of the transponder type, bent-pipe, or onboard processing (OBP), for GLOBALSTAR is based on many criteria, each of which is essential to the commercial and technological feasibility of GLOBALSTAR. The trade study that was done to determine the pros and cons of a bent-pipe transponder or an onboard processing transponder is described. The design of GLOBALSTAR's telecommunications system is a multi-variable cost optimization between the cost and complexity of individual satellites, the number of satellites required to provide coverage to the service areas, the cost of launching the satellites into their selected orbits, the ground segment cost, user equipment cost, satellite voice channel capacity, and other issues. Emphasis is on the cost and complexity of the individual satellites, specifically the transponder type and the impact of the transponder type on satellite and ground segment cost, satellite power and weight, and satellite voice channel capacity.

  9. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

    A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Two segmentation algorithms for computed tomography (CT) images were implemented: a region-growing procedure for soft tissues and a thresholding algorithm for bones. The algorithms operate semiautomatically, since they only require the user to select a seed with the mouse on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, decrease the time spent on planning and improve operative results. The success rate increases because surgeons can see the exact extent of the patient's ailment. The tool can decrease operating-room time, thus reducing costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes at reduced cost.
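
    A seeded region-growing step of the kind described can be sketched as a breadth-first flood fill from a user-selected seed with an intensity tolerance. The toy image, seed, and tolerance below are illustrative, not VirSSPA's actual implementation:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity is within `tol` of the seed intensity."""
    h, w = img.shape
    ref = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Toy CT-like slice: a bright 3x3 "soft tissue" square on a dark background.
img = np.full((6, 6), 10)
img[1:4, 1:4] = 100
mask = region_grow(img, seed=(2, 2), tol=20)
print(mask.sum())   # 9 pixels grown from the seed
```

    In the tool, the seed comes from the user's mouse click, which is what makes the procedure semiautomatic rather than fully automatic.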

  10. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    NASA Astrophysics Data System (ADS)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through a data mining algorithm (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The result shows that the optimized process produces better land-use/land-cover classification, with an overall classification accuracy of 91.79%, against 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
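
    The three derived indices named above are standard band ratios; a small sketch with assumed reflectance values (denominators assumed nonzero):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters form)."""
    return (green - nir) / (green + nir)

def savi(nir, red, L=0.5):
    """Soil adjusted vegetation index with soil brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Toy reflectance values for a vegetated pixel.
nir, red, green = 0.5, 0.1, 0.15
print(round(ndvi(nir, red), 3))  # 0.667
```

    These per-pixel bands would then be stacked with the imagery before segmentation.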

  11. Subcortical brain segmentation of two dimensional T1-weighted data sets with FMRIB's Integrated Registration and Segmentation Tool (FIRST).

    PubMed

    Amann, Michael; Andělová, Michaela; Pfister, Armanda; Mueller-Lenke, Nicole; Traud, Stefan; Reinhardt, Julia; Magon, Stefano; Bendfeldt, Kerstin; Kappos, Ludwig; Radue, Ernst-Wilhelm; Stippich, Christoph; Sprenger, Till

    2015-01-01

    Brain atrophy has been identified as an important contributing factor to the development of disability in multiple sclerosis (MS). In this respect, more and more interest is focussing on the role of deep grey matter (DGM) areas. Novel data analysis pipelines are available for the automatic segmentation of DGM using three-dimensional (3D) MRI data. However, in clinical trials such high-resolution data are often not acquired, and hence no conclusions regarding the impact of new treatments on DGM atrophy have been possible so far. In this work, we used FMRIB's Integrated Registration and Segmentation Tool (FIRST) to evaluate the possibility of segmenting DGM structures using standard two-dimensional (2D) T1-weighted MRI. In a cohort of 70 MS patients, both 2D and 3D T1-weighted data were acquired. The thalamus, putamen, pallidum, nucleus accumbens, and caudate nucleus were bilaterally segmented using FIRST. Volumes were calculated for each structure, for the sum of the basal ganglia (BG), and for the total DGM. The accuracy and reliability of the 2D data segmentation were compared with the respective results of 3D segmentations using volume difference, volume overlap and intra-class correlation coefficients (ICCs). The mean differences for the individual substructures were between 1.3% (putamen) and -25.2% (nucleus accumbens); the respective values were -2.7% for the BG and 1.3% for the DGM. Mean volume overlap was between 89.1% (thalamus) and 61.5% (nucleus accumbens); BG: 84.1%; DGM: 86.3%. Regarding ICC, all structures showed good agreement with the exception of the nucleus accumbens. The results of the segmentation were additionally validated through expert manual delineation of the caudate nucleus and putamen in a subset of the 3D data. In conclusion, we demonstrate that subcortical segmentation of 2D data is feasible using FIRST. The larger subcortical GM structures can be segmented with high consistency. This forms the basis for the application of FIRST to large 2D MRI data sets from clinical trials in order to determine the impact of therapeutic interventions on DGM atrophy in MS.
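
    The 2D-versus-3D agreement metrics used above (percentage volume difference and volume overlap) have simple definitions; a sketch assuming a Dice-style overlap (the exact overlap variant is not stated in the abstract):

```python
import numpy as np

def volume_difference_pct(v_test, v_ref):
    """Percentage volume difference of a test segmentation vs. a reference."""
    return 100.0 * (v_test - v_ref) / v_ref

def overlap_pct(a, b):
    """Dice-style volume overlap between two binary masks, in percent."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 100.0 * 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # "2D" mask, 36 voxels
b = np.zeros((10, 10), bool); b[3:8, 2:8] = True   # "3D" mask, 30 voxels
print(round(volume_difference_pct(a.sum(), b.sum()), 1))  # 20.0
print(round(overlap_pct(a, b), 1))                        # 90.9
```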

  12. Evaluation of Road Performance Based on International Roughness Index and Falling Weight Deflectometer

    NASA Astrophysics Data System (ADS)

    Hasanuddin; Setyawan, A.; Yulianto, B.

    2018-03-01

    Assessment of the performance of road pavement is deemed necessary to improve the management quality of road maintenance and rehabilitation. This research evaluates a road both functionally and structurally and recommends appropriate handling. Functional evaluation of the pavement is based on the IRI (International Roughness Index) value, derived among others from NAASRA readings, which is analyzed to recommend road handling. Meanwhile, structural evaluation of the pavement is done by analyzing deflection values based on FWD (Falling Weight Deflectometer) data, resulting in SN (Structural Number) values. The analysis yields SN eff (Structural Number Effective) and SN f (Structural Number Future) values; comparing SN eff to SN f leads to the SCI (Structural Condition Index) value, which implies the recommended pavement handling. Applied to the Simpang Tuan-Batas Kota Jambi road, the functional analysis split the road into 12 segments, in which segments 1, 3, 5, 7, 9, and 11 call for regular maintenance, segments 2, 4, 8, 10, and 12 for periodic maintenance, and segment 6 for rehabilitation. The structural analysis resulted in 8 segments: segments 1 and 2 recommended for regular maintenance, segments 3, 4, 5, and 7 for functional overlay, and segments 6 and 8 for structural overlay.
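
    The comparison of SN eff to SN f reduces to a ratio; a sketch of the decision logic, where the numeric cut-offs are illustrative assumptions (the actual bands come from the design standard used in the study and are not given in the abstract):

```python
def structural_condition_index(sn_eff, sn_f):
    """SCI: ratio of the effective to the required future structural number."""
    return sn_eff / sn_f

def recommend(sci):
    # Illustrative cut-offs only; not values from the study.
    if sci >= 1.0:
        return "regular maintenance"
    if sci >= 0.8:
        return "functional overlay"
    return "structural overlay"

print(recommend(structural_condition_index(4.2, 4.0)))  # regular maintenance
```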

  13. Hybrid Active/Passive Jet Engine Noise Suppression System

    NASA Technical Reports Server (NTRS)

    Parente, C. A.; Arcas, N.; Walker, B. E.; Hersh, A. S.; Rice, E. J.

    1999-01-01

    A novel adaptive segmented liner concept has been developed that employs active control elements to modify the in-duct sound field to enhance the tone-suppressing performance of passive liner elements. This could potentially allow engine designs that inherently produce more tone noise but less broadband noise, or could allow passive liner designs to more optimally address high frequency broadband noise. A proof-of-concept validation program was undertaken, consisting of the development of an adaptive segmented liner that would maximize attenuation of two radial modes in a circular or annular duct. The liner consisted of a leading active segment with dual annuli of axially spaced active Helmholtz resonators, followed by an optimized passive liner and then an array of sensing microphones. Three successively complex versions of the adaptive liner were constructed and their performances tested relative to the performance of optimized uniform passive and segmented passive liners. The salient results of the tests were: The adaptive segmented liner performed well in a high flow speed model fan inlet environment, was successfully scaled to a high sound frequency and successfully attenuated three radial modes using sensor and active resonator arrays that were designed for a two mode, lower frequency environment.

  14. Ultra-short beam expander with segmented curvature control: the emergence of a semi-lens

    DOE PAGES

    Abbaslou, Siamak; Gatdula, Robert; Lu, Ming; ...

    2017-01-01

    We introduce direct curvature control in designing a segmented beam expander, and explore novel design possibilities for ultra-compact beam expanders. Assisted by the particle swarm optimization algorithm, we search for an optimal curvature-controlled multi-segment taper that maintains width continuity. Counterintuitively, the optimization yields a structure with abrupt width discontinuity and width compression features. Through spatial phase and parameterized analysis, a semi-lens feature is revealed that helps to flatten the wavefront at the output end for higher coupling efficiency. Such functionality cannot be achieved by normal tapers in a short distance. The structure is fabricated and characterized experimentally. By a figure of merit that accounts for expansion ratio, length, and efficiency, this structure outperforms an adiabatic taper by 9 times.

  15. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    PubMed Central

    Luo, Yaozhong; Liu, Longzhong; Li, Xuelong

    2017-01-01

    Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information in the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With parameters optimized by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703
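
    Of the preprocessing chain described (bilateral filtering, histogram equalization, pyramid mean shift), the equalization step is the simplest to sketch in pure NumPy; this is the textbook CDF-based method, not the paper's implementation:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit image via its CDF.
    Assumes the image is not constant."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    lut = np.clip(lut, 0, 255).astype(np.uint8)  # lookup table per gray level
    return lut[img]

img = np.array([[10, 20], [20, 30]], dtype=np.uint8)
print(hist_equalize(img).tolist())  # [[0, 170], [170, 255]]
```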

  16. Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT

    NASA Astrophysics Data System (ADS)

    Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David

    2010-11-01

    Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of the plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study including six head-and-neck (HN) cases, and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry. In the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk. The segment shapes were then exported to Pinnacle3 TPS followed by segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module which is an aperture-based optimization method. All VMAT plans were delivered using an Elekta Synergy linear accelerator and the plan comparisons were made in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing, but less conformal dose distributions as compared to the other two approaches. 
For more complex HN cases, the anatomy-based approach is not able to provide clinically acceptable VMAT plans, while highly conformal dose distributions were obtained using both aperture-based and fluence-based inverse planning techniques. The aperture-based approach provides better dose conformity than the fluence-based technique in complex cases.

  17. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or of shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional intersequences inclusion constraint by adding directed infinite links between pixels of dependent image structures.
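
    The growth constraint itself is easy to state: each frame's mask must contain the previous one. A crude post-hoc projection onto that constraint (a running union over time) makes the idea concrete; the paper instead enforces it exactly inside the globally optimal graph cut via monodirectional infinite temporal links:

```python
import numpy as np

def enforce_growth(masks):
    """Make a (T, H, W) boolean mask sequence monotonously growing by
    taking the running union over time. This only illustrates the
    constraint; it is not the paper's graph-cut optimization."""
    return np.logical_or.accumulate(np.asarray(masks, dtype=bool), axis=0)

masks = np.array([
    [[1, 0], [0, 0]],
    [[0, 1], [0, 0]],   # a noisy frame that "lost" the first pixel
    [[1, 1], [1, 0]],
], dtype=bool)
grown = enforce_growth(masks)
print(grown[1].astype(int).tolist())  # [[1, 1], [0, 0]]
```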

  18. Spherical cloaking using nonlinear transformations for improved segmentation into concentric isotropic coatings.

    PubMed

    Qiu, Cheng-Wei; Hu, Li; Zhang, Baile; Wu, Bae-Ian; Johnson, Steven G; Joannopoulos, John D

    2009-08-03

    Two novel classes of spherical invisibility cloaks based on nonlinear transformation have been studied. The cloaking characteristics are presented by segmenting the nonlinear transformation based spherical cloak into concentric isotropic homogeneous coatings. Detailed investigations of the optimal discretization (e.g., thickness control of each layer, nonlinear factor, etc.) are presented for both linear and nonlinear spherical cloaks and their effects on invisibility performance are also discussed. The cloaking properties and our choice of optimal segmentation are verified by the numerical simulation of not only near-field electric-field distribution but also the far-field radar cross section (RCS).

  19. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
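
    Fuzzy clustering, a special case contained in the ordered weighted family, can be sketched with a plain fuzzy c-means loop (a generic illustration, not Karayiannis' reformulation-function derivation):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Classic fuzzy c-means: alternate prototype and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                     # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted prototypes
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = (d + 1e-12) ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # membership update
    return centers, U

# Two well-separated blobs standing in for two tissue classes.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]])
centers, U = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
```

    For MR segmentation, each row of `X` would be a voxel feature vector rather than a 2D point.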

  20. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the classification of motor imagery patterns from EEG signals. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are involved: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results than other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
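
    A minimal version of divergence-based feature ranking, assuming Gaussian class-conditional feature distributions (the paper's actual statistical model is richer than this sketch):

```python
import numpy as np

def gauss_kl(m0, s0, m1, s1):
    """KL divergence KL(N(m0, s0^2) || N(m1, s1^2)), closed form."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def feature_scores(Xa, Xb):
    """Symmetric KL between per-class Gaussian fits, one score per feature;
    larger scores mean more class-discriminative features."""
    scores = []
    for j in range(Xa.shape[1]):
        m0, s0 = Xa[:, j].mean(), Xa[:, j].std() + 1e-9
        m1, s1 = Xb[:, j].mean(), Xb[:, j].std() + 1e-9
        scores.append(gauss_kl(m0, s0, m1, s1) + gauss_kl(m1, s1, m0, s0))
    return np.array(scores)

# Feature 0 separates the two classes; feature 1 is identically distributed.
Xa = np.array([[0.0, 1.0], [0.5, 2.0], [-0.5, 3.0]])
Xb = np.array([[5.0, 1.0], [5.5, 2.0], [4.5, 3.0]])
scores = feature_scores(Xa, Xb)
```

    Ranking time segments works the same way, scoring features extracted from each candidate window.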

  1. Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation

    PubMed Central

    Liu, Yang; Liu, Junfei

    2016-01-01

    This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm, new growth operators of branching, regrowing, and shrinkage are designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information over different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, a hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. Comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to the complex image segmentation problem based on multilevel thresholding. Computational results of this approach on a set of test images show that the proposed algorithm outperforms others in terms of optimization accuracy and computation efficiency. PMID:27725826
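
    The objective HARFO searches heuristically can be shown with a brute-force baseline: choose k thresholds maximizing Otsu's between-class variance. The code below is this illustrative baseline, not HARFO itself, with candidate thresholds limited to gray levels present in the image:

```python
import numpy as np
from itertools import combinations

def multilevel_threshold(img, k=2):
    """Exhaustive multilevel Otsu: k thresholds maximizing between-class
    variance of the gray-level histogram."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    g = np.arange(256)
    mu_T = (g * p).sum()                      # global mean gray level
    best_var, best_ts = -1.0, None
    for ts in combinations(np.unique(img.ravel()).tolist(), k):
        edges = (0,) + tuple(ts) + (256,)
        var = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()                # class probability mass
            if w > 0:
                mu = (g[lo:hi] * p[lo:hi]).sum() / w
                var += w * (mu - mu_T) ** 2
        if var > best_var:
            best_var, best_ts = var, ts
    return best_ts

img = np.repeat(np.uint8([10, 100, 200]), 20).reshape(6, 10)
print(multilevel_threshold(img, 2))  # (100, 200)
```

    Exhaustive search scales combinatorially in k, which is exactly why heuristic optimizers such as HARFO are used.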

  2. Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation.

    PubMed

    Liu, Yang; Liu, Junfei; Tian, Liwei; Ma, Lianbo

    2016-01-01

    This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm, new growth operators of branching, regrowing, and shrinkage are designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information over different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, a hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. Comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to the complex image segmentation problem based on multilevel thresholding. Computational results of this approach on a set of test images show that the proposed algorithm outperforms others in terms of optimization accuracy and computation efficiency.

  3. Poster — Thur Eve — 09: Evaluation of electrical impedance and computed tomography fusion algorithms using an anthropomorphic phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff

    2014-08-15

    Integration of the biological conductivity information provided by Electrical Impedance Tomography (EIT) with the anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of our study, which compared the fusion of EIT with CT using three different image fusion algorithms, namely: weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing the pixels with the pixels of the EIT image. The three algorithms were applied to a CT and EIT image of an anthropomorphic phantom, constructed out of five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using Detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yielded more consistent and optimal fusion performance than weighted averaging.
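
    Two of the ingredients above, weighted-average fusion and SSIM, can be sketched directly. This uses a single-window (global) SSIM with the standard constants, rather than the windowed SSIM commonly used in practice, and assumes the images are co-registered:

```python
import numpy as np

def fuse_weighted(ct, eit, alpha=0.5):
    """Weighted-average fusion of co-registered CT and EIT images."""
    return alpha * ct + (1 - alpha) * eit

def global_ssim(x, y, L=255.0):
    """Single-window SSIM: global means, variances, and covariance."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

# Toy "CT" ramp and an inverted "EIT" image.
ct = np.tile(np.linspace(0, 255, 16), (16, 1))
eit = 255.0 - ct
fused = fuse_weighted(ct, eit)
```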

  4. Segmentation And Quantification Of Black Holes In Multiple Sclerosis

    PubMed Central

    Datta, Sushmita; Sajja, Balasrinivasa Rao; He, Renjie; Wolinsky, Jerry S.; Gupta, Rakesh K.; Narayana, Ponnada A.

    2006-01-01

    A technique that involves minimal operator intervention was developed and implemented for the identification and quantification of black holes on T1-weighted magnetic resonance images (T1 images) in multiple sclerosis (MS). Black holes were segmented on T1 images based on grayscale morphological operations. False classification of black holes was minimized by masking the segmented images with images obtained from the orthogonalization of T2-weighted and T1 images. Enhancing lesion voxels on postcontrast images were automatically identified and eliminated from the black hole volume. Fuzzy connectivity was used for the delineation of black holes. The performance of this algorithm was quantitatively evaluated on 14 MS patients. PMID:16126416

  5. Optimal Co-segmentation of Tumor in PET-CT Images with Context Information

    PubMed Central

    Song, Qi; Bai, Junjie; Han, Dongfeng; Bhatia, Sudershan; Sun, Wenqing; Rockey, William; Bayouth, John E.; Buatti, John M.

    2014-01-01

    PET-CT images have been widely used in clinical practice for radiotherapy treatment planning. Many existing segmentation approaches work on a single imaging modality only, and suffer from the low spatial resolution of PET or the low contrast of CT. In this work we propose a novel method for co-segmentation of the tumor in both PET and CT images, which exploits the advantages of each modality: the functional information from PET and the anatomical structure information from CT. The approach formulates the segmentation problem as the minimization of a Markov Random Field (MRF) model, which encodes the information from both modalities. The optimization is solved using a graph-cut based method. Two sub-graphs are constructed for the segmentation of the PET and the CT images, respectively. To achieve consistent results in the two modalities, an adaptive context cost is enforced by adding context arcs between the two sub-graphs. An optimal solution can be obtained by solving a single maximum flow problem, which leads to simultaneous segmentation of the tumor volumes in both modalities. The proposed algorithm was validated for robust delineation of lung tumors on 23 PET-CT datasets and two head-and-neck cancer subjects. Both qualitative and quantitative results show significant improvement compared to graph cut methods using PET or CT alone. PMID:23693127

  6. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  7. A novel content-based active contour model for brain tumor segmentation.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2012-06-01

    Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also have a preconvergence problem in the case of false edges/saddle points. Moreover, the presence of weak edges and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. Therefore, the proposed method, content-based active contour (CBAC), uses both intensity and texture information present within the active contour to overcome the above-stated problems while capturing a large range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients, more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors, and on synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, complex-content heterogeneous and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted), and on synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices, named 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
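
    The Gray-Level Co-occurrence Matrix underlying the proposed texture space can be sketched directly; contrast is one of the standard Haralick statistics computed from it (a generic illustration, not the CBAC implementation):

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=256):
    """Normalized gray-level co-occurrence matrix for pixel offset (dr, dc)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                M[img[r, c], img[r2, c2]] += 1
    return M / M.sum()

def contrast(M):
    """Haralick contrast: sum of P(i, j) * (i - j)^2."""
    i, j = np.indices(M.shape)
    return (M * (i - j) ** 2).sum()

flat = np.zeros((4, 4), dtype=int)             # homogeneous region
checker = np.indices((4, 4)).sum(axis=0) % 2   # maximally textured region
print(contrast(glcm(flat)), contrast(glcm(checker)))  # 0.0 1.0
```

    Inside an active contour, such statistics computed over a local window give each pixel a texture coordinate alongside its intensity.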

  8. DTI segmentation by statistical surface evolution.

    PubMed

    Lenglet, Christophe; Rousson, Mikaël; Deriche, Rachid

    2006-06-01

    We address the problem of the segmentation of cerebral white matter structures from diffusion tensor images (DTI). A DTI produces, from a set of diffusion-weighted MR images, tensor-valued images where each voxel is assigned with a 3 x 3 symmetric, positive-definite matrix. This second order tensor is simply the covariance matrix of a local Gaussian process, with zero-mean, modeling the average motion of water molecules. As we will show in this paper, the definition of a dissimilarity measure and statistics between such quantities is a nontrivial task which must be tackled carefully. We claim and demonstrate that, by using the theoretically well-founded differential geometrical properties of the manifold of multivariate normal distributions, it is possible to improve the quality of the segmentation results obtained with other dissimilarity measures such as the Euclidean distance or the Kullback-Leibler divergence. The main goal of this paper is to prove that the choice of the probability metric, i.e., the dissimilarity measure, has a deep impact on the tensor statistics and, hence, on the achieved results. We introduce a variational formulation, in the level-set framework, to estimate the optimal segmentation of a DTI according to the following hypothesis: Diffusion tensors exhibit a Gaussian distribution in the different partitions. We must also respect the geometric constraints imposed by the interfaces existing among the cerebral structures and detected by the gradient of the DTI. We show how to express all the statistical quantities for the different probability metrics. We validate and compare the results obtained on various synthetic data-sets, a biological rat spinal cord phantom and human brain DTIs.
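
    The claim that the choice of probability metric matters can be made concrete: for symmetric positive-definite (SPD) tensors, the affine-invariant geodesic distance and the Euclidean distance disagree sharply. A NumPy sketch of the standard formulas (not the paper's code):

```python
import numpy as np

def riemannian_dist(A, B):
    """Affine-invariant geodesic distance on the SPD manifold:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    S = V @ np.diag(w ** -0.5) @ V.T          # A^{-1/2}
    lam = np.linalg.eigvalsh(S @ B @ S)       # eigenvalues of A^{-1/2} B A^{-1/2}
    return np.sqrt((np.log(lam) ** 2).sum())

def euclidean_dist(A, B):
    """Frobenius (Euclidean) distance between the tensors."""
    return np.linalg.norm(A - B)

A = np.eye(3)
B = 4.0 * np.eye(3)
print(round(riemannian_dist(A, B), 3))  # 2.401  (= sqrt(3) * ln 4)
print(round(euclidean_dist(A, B), 3))   # 5.196  (= 3 * sqrt(3))
```

    Unlike the Euclidean distance, the geodesic distance is invariant under affine changes of the underlying coordinate frame, which is the property the paper exploits for tensor statistics.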

  9. Satellite switched FDMA advanced communication technology satellite program

    NASA Technical Reports Server (NTRS)

    Atwood, S.; Higton, G. H.; Wood, K.; Kline, A.; Furiga, A.; Rausch, M.; Jan, Y.

    1982-01-01

    The satellite switched frequency division multiple access system provided a detailed system architecture that supports a point to point communication system for long haul voice, video and data traffic between small Earth terminals at Ka band frequencies of 30/20 GHz. A detailed system design is presented for the space segment, small terminal/trunking segment and network control segment for domestic traffic model A or B, each totaling 3.8 Gb/s of small terminal traffic and 6.2 Gb/s of trunk traffic. The small terminal traffic (3.8 Gb/s), a composite of thousands of Earth stations with digital traffic ranging from a single 32 Kb/s CVSD voice channel to thousands of channels carrying voice, video and data at rates as high as 33 Mb/s, is emphasized in the satellite router portion of the system design. The system design concept presented effectively optimizes a unique frequency and channelization plan for both traffic models A and B with minimum reorganization of the satellite payload transponder subsystem hardware design. The unique zoning concept allows multiple beam antennas while maximizing multiple carrier frequency reuse. Detailed hardware design estimates for an FDMA router (part of the satellite transponder subsystem) indicate a weight and dc power budget of 353 lbs and 195 watts for traffic model A, and 498 lbs and 244 watts for traffic model B.

  10. Relationships between neonatal weight, limb lengths, skinfold thicknesses, body breadths and circumferences in an Australian cohort.

    PubMed

    Pomeroy, Emma; Stock, Jay T; Cole, Tim J; O'Callaghan, Michael; Wells, Jonathan C K

    2014-01-01

    Low birth weight has been consistently associated with adult chronic disease risk. The thrifty phenotype hypothesis assumes that reduced fetal growth impacts some organs more than others. However, it remains unclear how birth weight relates to different body components, such as circumferences, adiposity, body segment lengths and limb proportions. We hypothesized that these components vary in their relationship to birth weight. We analysed the relationship between birth weight and detailed anthropometry in 1270 singleton live-born neonates (668 male) from the Mater-University of Queensland Study of Pregnancy (Brisbane, Australia). We tested adjusted anthropometry for correlations with birth weight. We then performed stepwise multiple regression on birth weight of: body lengths, breadths and circumferences; relative limb to neck-rump proportions; or skinfold thicknesses. All analyses were adjusted for sex and gestational age, and used logged data. Circumferences, especially chest, were most strongly related to birth weight, while segment lengths (neck-rump, thigh, upper arm, and especially lower arm and lower leg) were relatively weakly related to birth weight, and limb lengths relative to neck-rump length showed no relationship. Skinfolds accounted for 36% of birth weight variance, but adjusting for size (neck-rump, thigh and upper arm lengths, and head circumference), this decreased to 10%. There was no evidence that heavier babies had proportionally thicker skinfolds. Neonatal body measurements vary in their association with birth weight: head and chest circumferences showed the strongest associations while limb segment lengths did not relate strongly to birth weight. After adjusting for body size, subcutaneous fatness accounted for a smaller proportion of birth weight variance than previously reported. While heavier babies had absolutely thicker skinfolds, this was proportional to their size. Relative limb to trunk length was unrelated to birth weight, suggesting that limb proportions at birth do not index factors relevant to prenatal life.
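
    The stepwise multiple regression described above can be sketched as a simple forward-selection OLS on synthetic data (illustrative only; the variable names and coefficients are made up, not the cohort's):

```python
import numpy as np

def forward_stepwise(X, y, names, max_vars=3):
    """Forward stepwise OLS (a generic sketch of stepwise multiple regression):
    greedily add the predictor that most reduces the residual sum of squares."""
    chosen, remaining = [], list(range(X.shape[1]))
    def rss(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)
    while remaining and len(chosen) < max_vars:
        best = min(remaining, key=lambda c: rss(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return [names[c] for c in chosen]

rng = np.random.default_rng(0)
n = 200
chest = rng.normal(32, 2, n)       # chest circumference (cm), synthetic
lowerleg = rng.normal(10, 1, n)    # lower leg length (cm), synthetic
skinfold = rng.normal(5, 1, n)     # skinfold thickness (mm), synthetic
# Synthetic log birth weight, driven mainly by chest circumference.
log_bw = 0.05 * chest + 0.005 * lowerleg + 0.01 * skinfold + rng.normal(0, 0.02, n)
X = np.column_stack([chest, lowerleg, skinfold])
order = forward_stepwise(X, log_bw, ["chest", "lowerleg", "skinfold"])
print(order)  # chest should enter first
```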

  11. Relationships between Neonatal Weight, Limb Lengths, Skinfold Thicknesses, Body Breadths and Circumferences in an Australian Cohort

    PubMed Central

    Pomeroy, Emma; Stock, Jay T.; Cole, Tim J.; O'Callaghan, Michael; Wells, Jonathan C. K.

    2014-01-01

    Background Low birth weight has been consistently associated with adult chronic disease risk. The thrifty phenotype hypothesis assumes that reduced fetal growth impacts some organs more than others. However, it remains unclear how birth weight relates to different body components, such as circumferences, adiposity, body segment lengths and limb proportions. We hypothesized that these components vary in their relationship to birth weight. Methods We analysed the relationship between birth weight and detailed anthropometry in 1270 singleton live-born neonates (668 male) from the Mater-University of Queensland Study of Pregnancy (Brisbane, Australia). We tested adjusted anthropometry for correlations with birth weight. We then performed stepwise multiple regression on birth weight of: body lengths, breadths and circumferences; relative limb to neck-rump proportions; or skinfold thicknesses. All analyses were adjusted for sex and gestational age, and used logged data. Results Circumferences, especially chest, were most strongly related to birth weight, while segment lengths (neck-rump, thigh, upper arm, and especially lower arm and lower leg) were relatively weakly related to birth weight, and limb lengths relative to neck-rump length showed no relationship. Skinfolds accounted for 36% of birth weight variance, but adjusting for size (neck-rump, thigh and upper arm lengths, and head circumference), this decreased to 10%. There was no evidence that heavier babies had proportionally thicker skinfolds. Conclusions Neonatal body measurements vary in their association with birth weight: head and chest circumferences showed the strongest associations while limb segment lengths did not relate strongly to birth weight. After adjusting for body size, subcutaneous fatness accounted for a smaller proportion of birth weight variance than previously reported. While heavier babies had absolutely thicker skinfolds, this was proportional to their size. Relative limb to trunk length was unrelated to birth weight, suggesting that limb proportions at birth do not index factors relevant to prenatal life. PMID:25162658

  12. 3D conformal planning using low segment multi-criteria IMRT optimization

    PubMed Central

    Khan, Fazal; Craft, David

    2014-01-01

    Purpose To evaluate automated multicriteria optimization (MCO) – designed for intensity modulated radiation therapy (IMRT), but invoked with limited segmentation – to efficiently produce high quality 3D conformal radiation therapy (3D-CRT) plans. Methods Ten patients previously planned with 3D-CRT to various disease sites (brain, breast, lung, abdomen, pelvis) were replanned with a low-segment inverse multicriteria optimized technique. The MCO-3D plans used the same beam geometry as the original 3D plans, but were limited to an energy of 6 MV. The MCO-3D plans were optimized using fluence-based MCO IMRT and then, after MCO navigation, segmented with a low number of segments. The 3D and MCO-3D plans were compared by evaluating mean dose for all structures, D95 (dose that 95% of the structure receives) and homogeneity indexes for targets, D1 and clinically appropriate dose volume objectives for individual organs at risk (OARs), monitor units (MUs), and physician preference. Results The MCO-3D plans reduced the OAR mean doses (41 out of a total of 45 OARs had a mean dose reduction, p<<0.01) and monitor units (seven out of ten plans had reduced MUs; the average reduction was 17%, p=0.08) while maintaining clinical standards on coverage and homogeneity of target volumes. All MCO-3D plans were preferred by physicians over their corresponding 3D plans. Conclusion High quality 3D plans can be produced using MCO-IMRT optimization, resulting in automated field-in-field type plans with good monitor unit efficiency. Adopting this technology in a clinic could improve plan quality and streamline treatment plan production by utilizing a single system applicable to both IMRT and 3D planning. PMID:25413405

  13. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities of candidate values, converging over successive iterations toward the optimal solution. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method.
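
    A minimal harmony search loop, to illustrate the base algorithm that this record modifies with rough sets (a generic sketch, not the authors' improved variant; parameter values are illustrative):

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: hms = harmony memory size, hmcr = harmony
    memory considering rate, par = pitch adjusting rate."""
    rng = random.Random(seed)
    # Initialise harmony memory with random solutions.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # reuse a value from memory...
                v = rng.choice(memory)[d]
                if rng.random() < par:         # ...with a small pitch adjustment
                    v = min(hi, max(lo, v + rng.uniform(-0.05, 0.05) * (hi - lo)))
            else:                              # otherwise draw a fresh value
                v = rng.uniform(lo, hi)
            new.append(v)
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Example: minimise a 2-D sphere function over [-5, 5]^2.
x_best, f_best = harmony_search(lambda v: sum(t * t for t in v),
                                [(-5.0, 5.0), (-5.0, 5.0)])
print(x_best, f_best)
```

    The rough-set modification in the paper targets exactly the convergence of this loop; the sketch above shows only the standard memory-consideration and pitch-adjustment steps.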

  14. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities of candidate values, converging over successive iterations toward the optimal solution. Accordingly, this study proposed a modified algorithm to improve the efficiency of the algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  15. Optimal reconstruction interval in dual source CT coronary angiography: a single-center experience in 285 patients

    PubMed Central

    Akgöz, Ayça; Akata, Deniz; Hazırolan, Tuncay; Karçaaltıncaba, Muşturay

    2014-01-01

    PURPOSE We aimed to evaluate the visibility of coronary arteries and bypass grafts in patients who underwent dual source computed tomography (DSCT) angiography without heart rate (HR) control and to determine optimal intervals for image reconstruction. MATERIALS AND METHODS A total of 285 consecutive cases who underwent coronary (n=255) and bypass-graft (n=30) DSCT angiography at our institution were identified retrospectively. Patients with atrial fibrillation were excluded. Ten datasets in 10% increments were reconstructed in all patients. On each dataset, the visibility of coronary arteries was evaluated using the 15-segment American Heart Association classification by two radiologists in consensus. RESULTS Mean HR was 76±16.3 bpm (range, 46–127 bpm). All coronary segments could be visualized in 277 patients (97.19%). On a segment basis, 4265 of 4275 (99.77%) coronary artery segments were visible. All segments of 56 bypass grafts in 30 patients were visible (100%). Total mean segment visibility scores of all coronary arteries were highest at the 70%, 40%, and 30% intervals for all HRs. The optimal reconstruction intervals to visualize the segments of all three coronary arteries, in descending order, were the 70%, 60%, 80%, and 30% intervals in patients with a mean HR <70 bpm; the 40%, 70%, and 30% intervals in patients with a mean HR of 70–100 bpm; and the 40%, 50%, and 30% intervals in patients with a mean HR >100 bpm. CONCLUSION Without beta-blocker administration, DSCT coronary angiography offers excellent visibility of vascular segments using both end-systolic and mid-late diastolic reconstructions at HRs up to 100 bpm, and only end-systolic reconstructions at HRs over 100 bpm. PMID:24834490

  16. Shape-driven 3D segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2006-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.

  17. Pass the popcorn: "obesogenic" behaviors and stigma in children's movies.

    PubMed

    Throop, Elizabeth M; Skinner, Asheley Cockrell; Perrin, Andrew J; Steiner, Michael J; Odulana, Adebowale; Perrin, Eliana M

    2014-07-01

    To determine the prevalence of obesity-related behaviors and attitudes in children's movies, a mixed-methods study of the top-grossing G- and PG-rated movies, 2006-2010 (4 per year), was performed. For each 10-min movie segment, the following were assessed: 1) the prevalence of key nutrition and physical activity behaviors corresponding to the American Academy of Pediatrics obesity prevention recommendations for families; 2) the prevalence of weight stigma; 3) an assessment of the segment as healthy, unhealthy, or neutral; and 4) free-text interpretations of stigma. Agreement between coders was >85% (Cohen's kappa = 0.7), indicating good agreement for binary responses. Of segments with food depicted: exaggerated portion size (26%); unhealthy snacks (51%); sugar-sweetened beverages (19%). Screen time was also prevalent (40% of movies showed television; 35% computer; 20% video games). Unhealthy segments outnumbered healthy segments 2:1. Most (70%) of the movies included weight-related stigmatizing content (e.g., "That fat butt! Flabby arms! And this ridiculous belly!"). These popular children's movies had significant "obesogenic" content, and most contained weight-based stigma. They present a mixed message to children, promoting unhealthy behaviors while stigmatizing the behaviors' possible effects. Further research is needed to determine the effects of such messages on children. Copyright © 2013 The Obesity Society.

  18. Memoryless cooperative graph search based on the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Hou, Jian; Yan, Gang-Feng; Fan, Zhen

    2011-04-01

    We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single agent or a group of autonomous mobile agents. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent will eventually converge to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter (the radius of the graph), ensuring that the agents spend much less time finding a globally optimal segment.
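
    The single-agent idea, a simulated-annealing-like walk that accepts uphill moves with a temperature-dependent probability so the agent can escape local minima, can be sketched on a toy ring graph (a generic illustration, not the paper's algorithm; the graph and costs are made up):

```python
import math
import random

def sa_graph_search(adj, cost, start, t0=2.0, alpha=0.999, steps=20000, seed=1):
    """Simulated-annealing-like walk on a graph: hop to a random neighbour,
    always accept downhill moves, accept uphill moves with probability
    exp(-dc / T), and cool T geometrically. Returns the best node visited."""
    rng = random.Random(seed)
    cur, t = start, t0
    best = cur
    for _ in range(steps):
        nxt = rng.choice(adj[cur])
        dc = cost[nxt] - cost[cur]
        if dc <= 0 or rng.random() < math.exp(-dc / t):
            cur = nxt
        if cost[cur] < cost[best]:
            best = cur
        t *= alpha                      # geometric cooling schedule
    return best

# Toy environment: a ring of 8 segments with a single global optimum at node 5.
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
cost = {0: 3, 1: 2, 2: 4, 3: 1, 4: 3, 5: 0, 6: 5, 7: 4}
best_node = sa_graph_search(adj, cost, start=0)
print(best_node, cost[best_node])
```

    With this gentle cost landscape the hot phase lets the walk cross the small barriers around the local minima, so it almost always ends at the global optimum, node 5.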

  19. Spectral optimized asymmetric segmented phase-only correlation filter.

    PubMed

    Leonard, I; Alfalou, A; Brosseau, C

    2012-05-10

    We suggest a new type of optimized composite filter, the asymmetric segmented phase-only filter (ASPOF), for improving the effectiveness of a VanderLugt correlator (VLC) used for face identification. Basically, it consists of merging several reference images after application of a specific spectral optimization method. After segmentation of the spectral filter plane into several areas, each area is assigned to a single winner reference according to a new optimized criterion. The point of the paper is to show that this method offers a significant performance improvement over standard composite filters for face identification. We first briefly revisit composite filters [adapted, phase-only, inverse, compromise optimal, segmented, minimum average correlation energy, optimal trade-off maximum average correlation, and amplitude-modulated phase-only (AMPOF)], which are tools of choice for face recognition based on correlation techniques, and compare their performances with those of the ASPOF. We illustrate some of the drawbacks of current filters for several binary and grayscale image identifications. Next, we describe the optimization steps and introduce the ASPOF, which can overcome these technical issues to improve the quality and the reliability of the correlation-based decision. We derive performance measures, i.e., peak-to-correlation energy (PCE) values and receiver operating characteristic curves, to confirm the consistency of the results. We numerically find that this filter increases the recognition rate and decreases the false alarm rate. The results show that the discrimination of the ASPOF is comparable to that of the AMPOF, but the ASPOF is more robust than the trade-off maximum average correlation height filter against rotation and various types of noise sources. Our method has several features that make it amenable to experimental implementation using a VLC.

  20. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved while small-scale features, such as object details and noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.

  1. Tracking cells in Life Cell Imaging videos using topological alignments.

    PubMed

    Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing

    2009-07-16

    With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
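
    The frame-linking step can be illustrated in a simplified one-to-one form: with relative overlap scores as weights, maximum-weight bipartite matching via the Hungarian algorithm (the paper's ILP generalizes this to set-to-set assignments over segment hierarchies; the overlap values below are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(overlap):
    """Link segments of frame t to segments of frame t+1 by maximum-weight
    one-to-one bipartite matching on relative overlap scores.
    overlap[i, j] = overlap of segment i in frame t with segment j in t+1."""
    rows, cols = linear_sum_assignment(-np.asarray(overlap))  # maximise weight
    return list(zip(rows.tolist(), cols.tolist()))

# Overlap scores for three cells tracked across two consecutive frames.
overlap = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.8],
])
print(link_frames(overlap))  # [(0, 0), (1, 1), (2, 2)]
```

    The ILP formulation in the paper is needed precisely where this one-to-one matching fails, i.e., when over- or under-segmentation requires assigning sets of segments to sets of segments.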

  2. The Segmental Morphometric Properties of the Horse Cervical Spinal Cord: A Study of Cadaver

    PubMed Central

    Bahar, Sadullah; Bolat, Durmus; Selcuk, Muhammet Lutfi

    2013-01-01

    Although the cervical spinal cord (CSC) of the horse has particular importance in diseases of the CNS, there is very little information about its segmental morphometry. The objective of the present study was to determine the morphometric features of the CSC segments in the horse and possible relationships among the morphometric features. The segmented CSC from five mature animals was used. Length, weight, diameter, and volume measurements of the segments were performed macroscopically. Lengths and diameters of segments were measured histologically, and area and volume measurements were performed using stereological methods. The length, weight, and volume of the CSC were 61.6 ± 3.2 cm, 107.2 ± 10.4 g, and 95.5 ± 8.3 cm³, respectively. The length of the segments increased from C1 to C3, while it decreased from C3 to C8. The gross section (GS), white matter (WM), grey matter (GM), dorsal horn (DH), and ventral horn (VH) had the largest cross-section areas at C8. The highest volume was found for the total segment and WM at C4, for GM, DH, and VH at C7, and for the central canal (CC) at C3. The data obtained not only contribute to the knowledge of the normal anatomy of the CSC but may also provide reference data for veterinary pathologists and clinicians. PMID:23476145

  3. Obesogenic Behavior and Weight-Based Stigma in Popular Children's Movies, 2012 to 2015.

    PubMed

    Howard, Janna B; Skinner, Asheley Cockrell; Ravanbakht, Sophie N; Brown, Jane D; Perrin, Andrew J; Steiner, Michael J; Perrin, Eliana M

    2017-12-01

    Obesity-promoting content and weight-stigmatizing messages are common in child-directed television programming and advertisements, and 1 study found similar trends in G- and PG-rated movies from 2006 to 2010. Our objective was to examine the prevalence of such content in more recent popular children's movies. Raters examined 31 top-grossing G- and PG-rated movies released from 2012 to 2015. For each 10-minute segment (N = 302) and for movies as units, raters documented the presence of eating-, activity-, and weight-related content observed on-screen. To assess interrater reliability, 10 movies (32%) were coded by more than 1 rater. The result of Cohen's κ test of agreement among 3 raters was 0.65 for binary responses (good agreement). All 31 movies included obesity-promoting content; most common were unhealthy foods (87% of movies, 42% of segments), exaggerated portion sizes (71%, 29%), screen use (68%, 38%), and sugar-sweetened beverages (61%, 24%). Weight-based stigma, such as a verbal insult about body size or weight, was observed in 84% of movies and 30% of segments. Children's movies include much obesogenic and weight-stigmatizing content. These messages are not shown in isolated incidences; rather, they often appear on-screen multiple times throughout the entire movie. Future research should explore these trends over time, and their effects. Copyright © 2017 by the American Academy of Pediatrics.

  4. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    PubMed

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.

  5. Automatized spleen segmentation in non-contrast-enhanced MR volume data using subject-specific shape priors

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry

    2017-07-01

    To develop the first fully automated 3D spleen segmentation framework derived from T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach considers the issue of low contrast between spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to include different kinds of prior knowledge in the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training produces subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces, as well as image-driven forces resulting from refined foreground probability maps, steer the level set successfully to segment the spleen. The framework achieves promising segmentation results with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes, achieving promising results.

  6. Splenomegaly Segmentation using Global Convolutional Kernels and Conditional Generative Adversarial Networks

    PubMed Central

    Huo, Yuankai; Xu, Zhoubing; Bao, Shunxing; Bermudez, Camilo; Plassard, Andrew J.; Liu, Jiaqi; Yao, Yuang; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.

    2018-01-01

    Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly were used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.

  7. Pancreas segmentation from 3D abdominal CT images using patient-specific weighted subspatial probabilistic atlases

    NASA Astrophysics Data System (ADS)

    Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku

    2015-03-01

    Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method from 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First, we perform normalization of organ shapes in the training volumes and an input volume. We extract the Volume Of Interest (VOI) of the pancreas from the training volumes and the input volume. We divide each training VOI and the input VOI into cubic regions. We use a nonrigid registration method to register these cubic regions of the training VOIs to the corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of a training VOI and the corresponding region of the input VOI. We select the cubic regions of training volumes having the top N similarities in each cubic region. We subspatially construct probabilistic atlases weighted by the similarities in each cubic region. After integrating these per-region probabilistic atlases into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of the experiments showed that utilizing the training volumes having the top N similarities in each cubic region led to good pancreas segmentation results. The Jaccard Index and the average surface distance of the result were 58.9% and 2.04 mm on average, respectively.
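
    The similarity-weighted atlas construction can be sketched as a weighted average of registered training masks (a toy numpy illustration; the array shapes, masks and weights are made up, not the paper's data):

```python
import numpy as np

def weighted_atlas(masks, similarities):
    """Fuse binary training masks into a probabilistic atlas, weighting each
    mask by its registration similarity to the input volume: voxels covered
    by highly similar training cases get higher foreground probability."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    return np.tensordot(w, np.asarray(masks, dtype=float), axes=1)

# Two tiny 2x2 "training masks" and their similarity scores.
masks = [np.array([[1, 0], [1, 1]]),
         np.array([[1, 0], [0, 1]])]
atlas = weighted_atlas(masks, [0.75, 0.25])
print(atlas)
```

    In the paper this fusion is done per cubic region with the top-N most similar training cases, rather than globally as in this sketch.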

  8. Electric field theory based approach to search-direction line definition in image segmentation: application to optimal femur-tibia cartilage segmentation in knee-joint 3-D MR

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Sonka, M.

    2010-03-01

    A novel method is presented for the definition of search lines in a variety of surface segmentation approaches. The method is inspired by the properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently cover the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondence pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object, multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated on 60 MR images from the Osteoarthritis Initiative (OAI), in which it achieved very good performance as judged by surface positioning errors (averages of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).

  9. Left ventricle segmentation via graph cut distribution matching.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron

    2009-01-01

    We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.
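    For discrete distributions, the Bhattacharyya measure underlying the energy above is simply the sum of pointwise geometric means. A minimal sketch with toy histograms (the data are illustrative):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions:
    1 for identical distributions, 0 for distributions with disjoint support."""
    return float(np.sum(np.sqrt(p * q)))

# toy histograms (already normalized to sum to 1)
p = np.array([0.25, 0.25, 0.5])
q = np.array([0.5, 0.25, 0.25])
b_same = bhattacharyya(p, p)   # identical distributions -> 1.0
b_diff = bhattacharyya(p, q)   # similar but not identical -> slightly below 1
```

    The square root makes the measure non-linear in the region indicator, which is why the paper's graph cut optimization relies on a first-order approximation rather than optimizing it directly.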

  10. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined from the local residual during the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions to improve the accuracy and obtain an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art algorithms for hierarchical segmentation. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. Our algorithm surpasses the other algorithms in terms of the balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
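    One illustrative reading of the adaptive trade-off described above: scale each region's regularity weight down when its data residual is large, so poorly fitting regions are regularized less. This toy energy is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def superpixel_energy(image, labels, centers, lam0=1.0):
    """Toy superpixel energy with an adaptive regularity weight per region:
    data fidelity plus spatial compactness, with the compactness weight
    shrunk by the region's mean residual (illustrative sketch only)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    energy = 0.0
    for k, (cy, cx, cval) in enumerate(centers):
        mask = labels == k
        data = (image[mask] - cval) ** 2            # data-fidelity residuals
        lam = lam0 / (1.0 + data.mean())            # adaptive regularity weight
        spatial = (yy[mask] - cy) ** 2 + (xx[mask] - cx) ** 2
        energy += data.sum() + lam * spatial.sum()
    return energy

# single region covering a 2x2 image, center at its middle
img = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.zeros((2, 2), dtype=int)
e = superpixel_energy(img, labels, centers=[(0.5, 0.5, 0.5)])
```

    In a recursive hierarchy, each resulting region would be re-segmented with the same energy at the next level.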

  11. Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images.

    PubMed

    Jian, Junming; Xiong, Fei; Xia, Wei; Zhang, Rui; Gu, Jinhui; Wu, Xiaodong; Meng, Xiaochun; Gao, Xin

    2018-06-01

    Segmentation of colorectal tumors is the basis of preoperative prediction, staging, and therapeutic response evaluation. Because of the blurred boundary between lesions and normal colorectal tissue, accurate segmentation is hard to achieve. Routine manual or semi-manual segmentation methods are extremely tedious, time-consuming, and highly operator-dependent. A segmentation method for colorectal tumors was developed in the framework of FCNs. Normalization was applied to reduce the differences among images. Borrowing from transfer learning, VGG-16 was employed to extract features from the normalized images. We attached five side-output blocks to the last convolutional layer of each block of VGG-16; these side-output blocks mine multiscale features and produce corresponding predictions. Finally, all of the predictions from the side-output blocks were fused to determine the final boundaries of the tumors. A quantitative comparison of 2772 colorectal tumor manual segmentation results from T2-weighted magnetic resonance images shows that the average Dice similarity coefficient, positive predictive value, specificity, and sensitivity were 83.56%, 82.67%, 96.75%, and 87.85%, respectively, and the Hammoude and Hausdorff distances were 0.2694 and 8.20, respectively. The proposed method is superior to U-net in colorectal tumor segmentation (P < 0.05). There is no difference between cross-entropy loss and Dice-based loss in colorectal tumor segmentation (P > 0.05). The results indicate that the introduction of FCNs contributed to accurate segmentation of colorectal tumors. This method has the potential to replace the present time-consuming and nonreproducible manual segmentation method.
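    The headline metric above, the Dice similarity coefficient, measures overlap between a predicted and a reference binary mask. A minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    twice the intersection divided by the sum of the mask sizes."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy prediction and ground truth
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0]])
d = dice(pred, gt)   # 2*2 / (3 + 2) = 0.8
```

    A Dice of 83.56% as reported above therefore means the prediction and the manual contour overlap substantially relative to their combined area.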

  12. Artifact Suppression in Imaging of Myocardial Infarction Using B1-Weighted Phased-Array Combined Phase-Sensitive Inversion Recovery

    PubMed Central

    Kellman, Peter; Dyke, Christopher K.; Aletras, Anthony H.; McVeigh, Elliot R.; Arai, Andrew E.

    2007-01-01

    Regions of the body with long T1, such as cerebrospinal fluid (CSF), may create ghost artifacts on gadolinium-hyperenhanced images of myocardial infarction when inversion recovery (IR) sequences are used with a segmented acquisition. Oscillations in the transient approach to steady state for regions with long T1 may cause ghosts, with the number of ghosts being equal to the number of segments. B1-weighted phased-array combining provides an inherent degree of ghost artifact suppression because the ghost artifact is weighted less than the desired signal intensity by the coil sensitivity profiles. Example images are shown that illustrate the suppression of CSF ghost artifacts by the use of B1-weighted phased-array combining of multiple receiver coils. PMID:14755669

  13. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    PubMed

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.
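    Particle swarm optimization of the kind used above to tune the ICM parameters can be sketched generically; here the paper's mutual-information fitness is replaced by an arbitrary user-supplied function, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimiser (minimisation). Each particle tracks
    its personal best; the swarm shares a global best; velocities blend
    inertia with attraction to both bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# toy fitness: sphere function, optimum at the origin
best, best_f = pso(lambda p: np.sum(p ** 2), dim=2)
```

    In the paper's setting, `fitness` would evaluate the feature mutual information produced by an ICM network configured with the candidate parameters.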

  14. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of the different positions of pixels with the same gray level, obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The novel method not only achieves better segmentation results but also requires less computation time than traditional 2D histogram-based segmentation methods.
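    The 1D maximum entropy step above can be sketched as Kapur-style threshold selection on a grey-level histogram: pick the threshold that maximizes the summed entropies of the foreground and background class distributions. The spatial coherence weighting is omitted here for brevity, and the toy histogram is an assumption.

```python
import numpy as np

def max_entropy_threshold(hist):
    """Kapur-style 1D maximum-entropy threshold on a grey-level histogram:
    returns the bin index t maximizing H(below t) + H(at or above t)."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1        # class-conditional distributions
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# bimodal toy histogram with modes around bins 2 and 7
hist = np.array([1, 8, 20, 8, 1, 1, 8, 20, 8, 1], dtype=float)
t = max_entropy_threshold(hist)
```

    On a real infrared image, `hist` would be the spatial-coherence-weighted histogram rather than the raw one.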

  15. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    Detailed spatial distribution of population density is essential for urban studies such as urban planning, environmental pollution, and emergency management, and even for estimating pressure on the environment and human exposure and health risks. However, most studies have relied on census data, because detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional-zone level. First, urban functional zones within a city were mapped from high-resolution remote sensing images and points of interest (POIs). The workflow for functional-zone extraction has five parts: (1) urban land-use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy with validation points. The result is shown in Fig. 1. Second, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7 and typically showed significant variation over the region compared with the traditional OLS model; the result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with the light value (Fig. 3). Results showed that: (1) population density is not linearly correlated with light brightness in the global model; (2) VIIRS night-time light data can estimate population density when integrated with functional zones at the city level; (3) GWR is a robust model for mapping population distribution, as the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming better prediction accuracy. This method thus provides detailed population-density information for microscale citizen studies.
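    The GWR idea above is, at each target location, an ordinary least squares fit in which observations are down-weighted by their distance to that location, so the fitted coefficients can vary across space. A minimal sketch with a Gaussian kernel; the synthetic data, bandwidth, and function name are assumptions.

```python
import numpy as np

def gwr_point(X, y, coords, target, bandwidth):
    """Locally weighted (GWR-style) regression at one location: weighted
    least squares with Gaussian spatial weights. Returns [intercept, slopes]."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d / bandwidth) ** 2 / 2)          # Gaussian spatial weights
    Xd = np.c_[np.ones(len(X)), X]                 # design matrix + intercept
    W = np.diag(w)
    # solve the weighted normal equations for the local coefficients
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

# toy data: the true slope drifts from west (1.0) to east (2.0)
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (200, 2))
x = rng.uniform(0, 1, 200)
slope = 1.0 + coords[:, 0] / 10
y = slope * x
beta_west = gwr_point(x[:, None], y, coords, np.array([0.0, 5.0]), 1.5)
beta_east = gwr_point(x[:, None], y, coords, np.array([10.0, 5.0]), 1.5)
```

    A global OLS fit would return one intermediate slope everywhere; the two local fits recover the spatial drift, which is the nonstationarity the abstract attributes to the light-population relationship.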

  16. Myocardial Infarct Segmentation from Magnetic Resonance Images for Personalized Modeling of Cardiac Electrophysiology

    PubMed Central

    Ukwatta, Eranga; Arevalo, Hermenegild; Li, Kristina; Yuan, Jing; Qiu, Wu; Malamas, Peter; Wu, Katherine C.

    2016-01-01

    Accurate representation of myocardial infarct geometry is crucial to patient-specific computational modeling of the heart in ischemic cardiomyopathy. We have developed a methodology for segmentation of left ventricular (LV) infarct from clinically acquired, two-dimensional (2D), late-gadolinium-enhanced cardiac magnetic resonance (LGE-CMR) images, for personalized modeling of ventricular electrophysiology. The infarct segmentation was expressed as a continuous min-cut optimization problem, which was solved using its dual formulation, the continuous max-flow (CMF). The optimization objective comprised a smoothness term and a data term that quantified the similarity between image intensity histograms of segmented regions and those of a set of training images. A manual segmentation of the LV myocardium was used to initialize and constrain the developed method. The three-dimensional geometry of the infarct was reconstructed from its segmentation using an implicit, shape-based interpolation method. The proposed methodology was extensively evaluated using metrics based on geometry and on outcomes of individualized electrophysiological simulations of cardiac dysfunction. Several existing LV infarct segmentation approaches were implemented and compared with the proposed method. Our results demonstrated that the CMF method was more accurate than the existing approaches in reproducing expert manual LV infarct segmentations and in electrophysiological simulations. The infarct segmentation method we have developed and comprehensively evaluated in this study constitutes an important step in advancing clinical applications of personalized simulations of cardiac electrophysiology. PMID:26731693

  17. Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Lepsch, Roger A., Jr.

    2000-01-01

    Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414-aspect-ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of 4365 lb (2.2%) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level, approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single-stage vehicles.

  18. Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI.

    PubMed

    Moeskops, Pim; de Bresser, Jeroen; Kuijf, Hugo J; Mendrik, Adriënne M; Biessels, Geert Jan; Pluim, Josien P W; Išgum, Ivana

    2018-01-01

    Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects).
In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.

  19. A novel software and conceptual design of the hardware platform for intensity modulated radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dan; Ruan, Dan; O’Connor, Daniel

    Purpose: To deliver high quality intensity modulated radiotherapy (IMRT) using novel generalized sparse orthogonal collimators (SOCs), the authors introduce a novel direct aperture optimization (DAO) approach based on a discrete rectangular representation. Methods: A total of seven patients were included: two glioblastoma multiforme, three head & neck (including one with three prescription doses), and two lung. 20 noncoplanar beams were selected using a column generation and pricing optimization method. The SOC is a generalization of conventional orthogonal collimators with N leaves in each collimator bank, where N = 1, 2, or 4; the SOC degenerates to conventional jaws when N = 1. For SOC-based IMRT, rectangular aperture optimization (RAO) was performed to optimize the fluence maps using the rectangular representation, producing fluence maps that can be directly converted into a set of deliverable rectangular apertures. In order to optimize the dose distribution and minimize the number of apertures used, the overall objective was formulated to incorporate an L2 penalty reflecting the difference between the prescription and the projected doses, and an L1 sparsity regularization term to encourage a low number of nonzero rectangular basis coefficients. The optimization problem was solved using the Chambolle–Pock algorithm, a first-order primal–dual algorithm. The performance of RAO was compared to conventional two-step IMRT optimization, comprising fluence map optimization and direct stratification for multileaf collimator (MLC) segmentation (DMS), using the same number of segments. For the RAO plans, segment travel time for SOC delivery was evaluated for the N = 1, N = 2, and N = 4 SOC designs to characterize the improvement in delivery efficiency as a function of N. Results: Comparable PTV dose homogeneity and coverage were observed between the RAO and the DMS plans. The RAO plans were slightly superior to the DMS plans in sparing critical structures. On average, the maximum and mean critical organ doses were reduced by 1.94% and 1.44% of the prescription dose. The average number of delivery segments was 12.68 segments per beam for both the RAO and DMS plans. The N = 2 and N = 4 SOC designs were, on average, 1.56 and 1.80 times more efficient to deliver than the N = 1 SOC design. The mean aperture size produced by the RAO plans was 3.9 times larger than that of the DMS plans. Conclusions: The DAO and dose-domain optimization approach enabled high quality IMRT plans using a low-complexity collimator setup. The dosimetric quality is comparable or slightly superior to conventional MLC-based IMRT plans using the same number of delivery segments. The SOC IMRT delivery efficiency can be significantly improved by increasing the number of leaves, but that number is still significantly lower than the number of leaves in a typical MLC.
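    The L2-plus-L1 objective described above (data fit plus a sparsity term on the rectangular basis coefficients) can be sketched on a toy problem. The paper solves it with the Chambolle–Pock primal–dual algorithm; here, purely for illustration, the same kind of objective is minimized with proximal gradient descent (ISTA), and the matrix sizes and regularization weight are assumptions.

```python
import numpy as np

def ista_l1(A, d, lam, iters=500):
    """Minimise 0.5*||A x - d||^2 + lam*||x||_1 by proximal gradient (ISTA).
    Here A stands in for the map from rectangular-aperture basis weights x
    to the projected dose d (illustrative, not the paper's solver)."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - d)                     # gradient of the L2 term
        z = x - g / L
        # soft-thresholding: the proximal operator of the L1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# toy problem: a sparse set of basis coefficients is recovered
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]
d = A @ x_true
x_hat = ista_l1(A, d, lam=0.1)
```

    The L1 term drives most coefficients exactly to zero, which is what keeps the number of deliverable rectangular apertures low.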

  20. A novel software and conceptual design of the hardware platform for intensity modulated radiation therapy.

    PubMed

    Nguyen, Dan; Ruan, Dan; O'Connor, Daniel; Woods, Kaley; Low, Daniel A; Boucher, Salime; Sheng, Ke

    2016-02-01

    To deliver high quality intensity modulated radiotherapy (IMRT) using novel generalized sparse orthogonal collimators (SOCs), the authors introduce a novel direct aperture optimization (DAO) approach based on a discrete rectangular representation. A total of seven patients were included: two glioblastoma multiforme, three head & neck (including one with three prescription doses), and two lung. 20 noncoplanar beams were selected using a column generation and pricing optimization method. The SOC is a generalization of conventional orthogonal collimators with N leaves in each collimator bank, where N = 1, 2, or 4; the SOC degenerates to conventional jaws when N = 1. For SOC-based IMRT, rectangular aperture optimization (RAO) was performed to optimize the fluence maps using the rectangular representation, producing fluence maps that can be directly converted into a set of deliverable rectangular apertures. In order to optimize the dose distribution and minimize the number of apertures used, the overall objective was formulated to incorporate an L2 penalty reflecting the difference between the prescription and the projected doses, and an L1 sparsity regularization term to encourage a low number of nonzero rectangular basis coefficients. The optimization problem was solved using the Chambolle-Pock algorithm, a first-order primal-dual algorithm. The performance of RAO was compared to conventional two-step IMRT optimization, comprising fluence map optimization and direct stratification for multileaf collimator (MLC) segmentation (DMS), using the same number of segments. For the RAO plans, segment travel time for SOC delivery was evaluated for the N = 1, N = 2, and N = 4 SOC designs to characterize the improvement in delivery efficiency as a function of N. Comparable PTV dose homogeneity and coverage were observed between the RAO and the DMS plans. The RAO plans were slightly superior to the DMS plans in sparing critical structures.
On average, the maximum and mean critical organ doses were reduced by 1.94% and 1.44% of the prescription dose. The average number of delivery segments was 12.68 segments per beam for both the RAO and DMS plans. The N = 2 and N = 4 SOC designs were, on average, 1.56 and 1.80 times more efficient than the N = 1 SOC design to deliver. The mean aperture size produced by the RAO plans was 3.9 times larger than that of the DMS plans. The DAO and dose domain optimization approach enabled high quality IMRT plans using a low-complexity collimator setup. The dosimetric quality is comparable or slightly superior to conventional MLC-based IMRT plans using the same number of delivery segments. The SOC IMRT delivery efficiency can be significantly improved by increasing the leaf numbers, but the number is still significantly lower than the number of leaves in a typical MLC.

  1. Shape-Driven 3D Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2013-01-01

    This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875

  2. Incorporation of physical constraints in optimal surface search for renal cortex segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie

    2012-02-01

    In this paper, we propose a novel approach to multiple-surface segmentation based on the incorporation of physical constraints in optimal surface searching. We apply our new approach to the renal cortex segmentation problem, an important but insufficiently researched issue. In this study, in order to better handle the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow for varying sampling distances and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex column is computed according to the sparsity of the local triangular mesh. Then the physical constraint learned from a priori renal cortex thickness is applied to the inter-surface arcs as the separation constraint. Appropriate varying sampling distances and separation constraints were learned from 6 clinical CT images. After training, the proposed approach was tested on a set of 10 images. Manual segmentation of the renal cortex was used as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy was increased after introducing the varying sampling distance and physical separation constraints (the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, using varying sampling distance and physical separation constraints, compared to 74.10% and 0.18%, respectively, using fixed sampling distance and numerical separation constraints). The experimental results demonstrated the effectiveness of the proposed approach.

  3. Compatibility of segmented thermoelectric generators

    NASA Technical Reports Server (NTRS)

    Snyder, J.; Ursell, T.

    2002-01-01

    It is well known that power-generation efficiency improves when materials with appropriate properties are combined, either in a cascaded or a segmented fashion, across a temperature gradient. Past methods for selecting materials for segmentation were mainly concerned with materials that have the highest figure of merit in the temperature range. However, the example of SiGe segmented with Bi2Te3 and/or various skutterudites shows a marked decline in device efficiency even though SiGe has the highest figure of merit in its temperature range. The origin of the incompatibility of SiGe with other thermoelectric materials leads to a general definition of compatibility and intrinsic efficiency. The compatibility factor, derived as s = (√(1 + zT) − 1)/(αT), is a function only of intrinsic material properties and temperature, and represents a ratio of electrical current to conduction heat. For maximum efficiency the compatibility factor should not change with temperature, both within a single material and in the segmented leg as a whole. This leads to a measure of compatibility not only between segments but also within a segment. General temperature trends show that materials are more self-compatible at higher temperatures, and segmentation is more difficult across a larger ΔT. The compatibility factor can be used as a quantitative guide for deciding whether a material is better suited to segmentation or cascading. Analyses of compatibility factors and intrinsic efficiency for optimal segmentation are discussed, with the intent to predict optimal material properties, temperature interfaces, and/or current-to-heat ratios.
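    The compatibility factor s = (√(1 + zT) − 1)/(αT) is a one-line computation; the material values in the example below (zT = 1, Seebeck coefficient 200 µV/K at 600 K) are illustrative assumptions, not measured data.

```python
import numpy as np

def compatibility_factor(z, T, alpha):
    """Thermoelectric compatibility factor s = (sqrt(1 + zT) - 1) / (alpha*T),
    in 1/V: the reduced current density at which a segment operates at its
    maximum reduced efficiency.

    z     : figure of merit z (1/K)
    T     : absolute temperature (K)
    alpha : Seebeck coefficient (V/K)
    """
    return (np.sqrt(1.0 + z * T) - 1.0) / (alpha * T)

# example: a zT = 1 material with alpha = 200 uV/K at 600 K
s = compatibility_factor(z=1 / 600, T=600.0, alpha=200e-6)
```

    Two candidate segments whose compatibility factors differ by more than roughly a factor of two cannot both operate near their peak efficiency at a shared current, which is the quantitative guide the abstract describes.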

  4. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning imposes limitations on the variety of statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls: because the level of xenon poisoning is limited, there is a problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates were plotted as functions of the xenon limitation. The boundaries of the interval of averaging are thereby defined more precisely.

  5. Magnet system optimization for segmented adaptive-gap in-vacuum undulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitegi, C., E-mail: ckitegi@bnl.gov; Chubar, O.; Eng, C.

    2016-07-27

    The Segmented Adaptive Gap in-vacuum Undulator (SAGU), in which different segments have different gaps and periods, promises a considerable spectral performance gain over a conventional undulator with uniform gap and period. According to calculations, this gain can be comparable to the gain achievable with a superior undulator technology (e.g., a room-temperature in-vacuum hybrid SAGU would perform as a cryo-cooled hybrid in-vacuum undulator with uniform gap and period). However, to reach this high spectral performance, the SAGU magnetic design has to include compensation of the kicks experienced by the electron beam at segment junctions because of the different deflection parameter values in the segments. We show that such compensation can to a large extent be accomplished with passive correction; however, simple correction coils are nevertheless also required to reach perfect compensation over the whole SAGU tuning range. Magnetic optimizations performed with the Radia code, and the resulting undulator radiation spectra calculated using the SRW code, are presented, demonstrating the possibility of nearly perfect correction.

  6. Quadrature amplitude modulation (QAM) using binary-driven coupling-modulated rings

    NASA Astrophysics Data System (ADS)

    Karimelahi, Samira; Sheikholeslami, Ali

    2016-05-01

    We propose and fully analyze a compact structure for DAC-free, purely optical QAM modulation. The proposed structure is, to the best of our knowledge, the first ring resonator-based DAC-free QAM modulator reported in the literature. The device consists of two segmented add-drop Mach-Zehnder interferometer-assisted ring modulators (MZIARMs) in an IQ configuration. The proposed architecture is investigated based on parameters from SOI technology, and various key design considerations are discussed. We include the loss in the MZI arms in our analysis of phase and amplitude modulation using the MZIARM for the first time and show that imbalanced loss results in a phase error. The output level linearity is also studied for both QAM-16 and QAM-64, not only by optimizing RF segment lengths but also by optimizing the number of segments. In QAM-16, linearity among levels is achievable with two segments, while in QAM-64 an additional segment may be required.
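    Why two binary-driven segments suffice for QAM-16: in an idealised linear model where each segment's contribution is proportional to its length, segments in a 1:2 length ratio generate four equally spaced drive levels per quadrature, i.e. 4 × 4 = 16 constellation points in an IQ configuration. This toy enumeration is an assumption for illustration and ignores the loss and phase-error effects analysed in the paper.

```python
from itertools import product

def drive_levels(segment_lengths):
    """Distinct drive levels obtainable from binary-driven modulator segments
    whose effect is proportional to segment length (idealised linear model)."""
    levels = {sum(bit * length for bit, length in zip(bits, segment_lengths))
              for bits in product((0, 1), repeat=len(segment_lengths))}
    return sorted(levels)

# two segments with a 1:2 length ratio: 4 equally spaced levels per quadrature
lv = drive_levels([1, 2])    # [0, 1, 2, 3]
```

    Extending to three segments in a 1:2:4 ratio yields 8 levels per quadrature, matching the abstract's observation that QAM-64 may require an additional segment.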

  7. Fondaparinux with UnfracTionated heparin dUring Revascularization in Acute coronary syndromes (FUTURA/OASIS 8): a randomized trial of intravenous unfractionated heparin during percutaneous coronary intervention in patients with non-ST-segment elevation acute coronary syndromes initially treated with fondaparinux.

    PubMed

    Steg, Philippe Gabriel; Mehta, Shamir; Jolly, Sanjit; Xavier, Denis; Rupprecht, Hans-Juergen; Lopez-Sendon, Jose Luis; Chrolavicius, Susan; Rao, Sunil V; Granger, Christopher B; Pogue, Janice; Laing, Shiona; Yusuf, Salim

    2010-12-01

    There is uncertainty regarding the optimal adjunctive unfractionated heparin (UFH) regimen for percutaneous coronary intervention (PCI) in patients with non-ST-segment elevation acute coronary syndrome (NSTE-ACS) treated with fondaparinux. The aim of this study is to evaluate the safety of 2 dose regimens of adjunctive intravenous UFH during PCI in high-risk patients with NSTE-ACS initially treated with fondaparinux and referred for early coronary angiography. This is an international prospective cohort study of approximately 4,000 high-risk patients presenting to hospital with unstable angina or non-ST-segment elevation myocardial infarction, treated with fondaparinux as initial medical therapy, and referred for early coronary angiography with a view to revascularization. Within this cohort, 2,000 patients undergoing PCI will be eligible for enrollment into a double-blind international randomized parallel-group trial evaluating standard activated clotting time (ACT)-guided doses of intravenous UFH versus a non-ACT-guided weight-adjusted low dose. The standard regimen uses an 85-U/kg bolus of UFH if there is no platelet glycoprotein IIb/IIIa (GpIIb-IIIa) inhibitor or 60 U/kg if GpIIb-IIIa inhibitor use is planned, with additional bolus guided by blinded ACT measurements. The low-dose regimen uses a 50 U/kg UFH bolus, irrespective of planned GpIIb-IIIa use. The primary outcome is the composite of peri-PCI major bleeding, minor bleeding, or major vascular access site complications. The assessment of net clinical benefit is a key secondary outcome: it addresses the composite of peri-PCI major bleeding with death, myocardial infarction, or target vessel revascularization at day 30. FUTURA/OASIS 8 will help define the optimal UFH regimen as adjunct to PCI in high-risk NSTE-ACS patients treated with fondaparinux. Copyright © 2010 Mosby, Inc. All rights reserved.
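    The two bolus regimens compared above reduce to simple weight-based arithmetic; the function name below is illustrative, and the dose rules are exactly those stated in the abstract (85 U/kg, or 60 U/kg with a planned GpIIb-IIIa inhibitor, versus a fixed 50 U/kg low dose).

```python
def ufh_bolus(weight_kg, regimen, gp2b3a_planned=False):
    """Initial UFH bolus (units) under the two FUTURA/OASIS 8 study regimens
    as summarised above. Subsequent ACT-guided top-up boluses in the
    standard arm are not modelled here."""
    if regimen == "low":
        return 50 * weight_kg                      # fixed low dose, 50 U/kg
    if regimen == "standard":
        # 60 U/kg if a GpIIb-IIIa inhibitor is planned, else 85 U/kg
        return (60 if gp2b3a_planned else 85) * weight_kg
    raise ValueError("regimen must be 'standard' or 'low'")

# an 80 kg patient under the three scenarios
doses = (ufh_bolus(80, "standard"),            # no GpIIb-IIIa planned
         ufh_bolus(80, "standard", True),      # GpIIb-IIIa planned
         ufh_bolus(80, "low"))                 # low-dose arm
```

    This is a summary of the trial's dosing rules for illustration only, not clinical guidance.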

  8. High Efficiency Thermoelectric Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    El-Genk, Mohamed; Saber, Hamed; Caillat, Thierry

    2004-01-01

    The work reported here is a joint effort between the University of New Mexico's Institute for Space and Nuclear Power Studies (ISNPS) and the Jet Propulsion Laboratory (JPL), California Institute of Technology. In addition to the development, design, and fabrication of skutterudite and skutterudite-based segmented unicouples, this effort included performance tests of these unicouples for hundreds of hours to verify theoretical predictions of the conversion efficiency. The performance predictions of these unicouples are obtained using 1-D and 3-D models developed for that purpose and for estimating the actual performance and side heat losses in the tests conducted at ISNPS. In addition to the performance tests, the development of the 1-D and 3-D models and the development of Advanced Radioisotope Power Systems for a Beginning-Of-Mission (BOM) power of 108 We were carried out at ISNPS. The materials synthesis and fabrication of the unicouples were carried out at JPL. The research conducted at ISNPS is documented in chapters 2-5 and that conducted at JPL in chapter 5. An important consideration in the design and optimization of segmented thermoelectric unicouples (STUs) is determining the relative lengths, cross-section areas, and interfacial temperatures of the segments of the different materials in the n- and p-legs. These variables are determined using a genetic algorithm (GA) in conjunction with a one-dimensional analytical model of STUs that is developed in chapter 2. Results indicated that when optimized for maximum conversion efficiency, the interfacial temperatures between the various segments in a STU are close to those at the intersections of the Figure-Of-Merit (FOM), ZT, curves of the thermoelectric materials of the adjacent segments. 
When optimizing the STUs for maximum electrical power density, however, the interfacial temperatures differ from those at the intersections of the ZT curves, but are close to those at the intersections of the characteristic power, CP, curves of the thermoelectric materials of the adjacent segments (CP = T²Zk, with units of W/m). Results also showed that the number of segments in the n- and p-legs of STUs optimized for maximum power density is generally smaller than when the same unicouples are optimized for maximum efficiency. These results are obtained using the 1-D optimization model of STUs that is detailed in chapter 2. A three-dimensional model of STUs is developed and incorporated into the ANSYS commercial software (chapter 3). The governing equations are solved, subject to the prescribed
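
    The reported link between efficiency-optimized interfacial temperatures and ZT-curve crossings can be illustrated with a small root-finding sketch. The two ZT curves below are hypothetical, not data from the report:

```python
def zt_crossing(zt_a, zt_b, t_lo, t_hi, tol=1e-6):
    """Find the temperature where two figure-of-merit curves ZT_a(T) and
    ZT_b(T) intersect, by bisection. The report finds that
    efficiency-optimized interfacial temperatures lie near such crossings.
    Assumes exactly one crossing in [t_lo, t_hi]."""
    f = lambda t: zt_a(t) - zt_b(t)
    lo, hi = t_lo, t_hi
    assert f(lo) * f(hi) < 0, "curves must cross inside the bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical ZT curves for two adjacent segments (illustrative only):
zt_low = lambda t: 1.2 - 0.002 * (t - 300)   # low-temperature material
zt_high = lambda t: 0.4 + 0.002 * (t - 300)  # high-temperature material
print(round(zt_crossing(zt_low, zt_high, 300.0, 900.0)))  # 500
```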

  9. Comparison of T1-weighted 2D TSE, 3D SPGR, and two-point 3D Dixon MRI for automated segmentation of visceral adipose tissue at 3 Tesla.

    PubMed

    Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin

    2017-04-01

    To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.

  10. Segmentation of overweight Americans and opportunities for social marketing

    PubMed Central

    Kolodinsky, Jane; Reynolds, Travis

    2009-01-01

    Background The food industry uses market segmentation to target products toward specific groups of consumers with similar attitudinal, demographic, or lifestyle characteristics. Our aims were to identify distinguishable segments within the US overweight population to be targeted with messages and media aimed at moving Americans toward more healthy weights. Methods Cluster analysis was used to identify segments of consumers based on both food and lifestyle behaviors related to unhealthy weights. Drawing from Social Learning Theory, the Health Belief Model, and existing market segmentation literature, the study identified five distinct, recognizable market segments based on knowledge and behavioral and environmental factors. Implications for social marketing campaigns designed to move Americans toward more healthy weights were explored. Results The five clusters identified were: Highest Risk (19%); At Risk (22%); Right Behavior/Wrong Results (33%); Getting Best Results (13%); and Doing OK (12%). Ninety-nine percent of those in the Highest Risk cluster were overweight; members watched the most television and exercised the least. Fifty-five percent of those in the At Risk cluster were overweight; members logged the most computer time and almost half rarely or never read food labels. Sixty-six percent of those in the Right Behavior/Wrong Results cluster were overweight; however, 95% of them were familiar with the food pyramid. Members reported eating a low percentage of fast food meals (8%) compared to other groups but a higher percentage of other restaurant meals (15%). Less than six percent of those in the Getting Best Results cluster were overweight; every member read food labels and 75% of members' meals were "made from scratch." Eighteen percent of those in the Doing OK cluster were overweight; members watched the least television and reported eating 78% of their meals "made from scratch." 
Conclusion This study demonstrated that five distinct market segments can be identified for social marketing efforts aimed at addressing the obesity epidemic. Through the identification of these five segments, social marketing campaigns can utilize selected channels and messages that communicate the most relevant and important information. The results of this study offer insight into how segmentation strategies and social marketing messages may improve public health. PMID:19267936
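
    The cluster analysis behind such segments can be sketched with a plain k-means pass over behavioral features. The data and feature choices below are hypothetical, not the study's survey variables:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means, the kind of cluster analysis used to derive
    consumer segments from food/lifestyle variables. Data here are
    hypothetical (TV hours, label-reading score), not the study's."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Two obvious behavior clusters: heavy TV / low label use vs. the reverse.
data = [(5.1, 0.1), (4.8, 0.2), (5.3, 0.0), (0.9, 0.9), (1.1, 1.0), (0.8, 0.8)]
centers, groups = kmeans(data, 2)
print(sorted(len(g) for g in groups))  # [3, 3]
```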

  11. Segmentation of overweight Americans and opportunities for social marketing.

    PubMed

    Kolodinsky, Jane; Reynolds, Travis

    2009-03-08

    The food industry uses market segmentation to target products toward specific groups of consumers with similar attitudinal, demographic, or lifestyle characteristics. Our aims were to identify distinguishable segments within the US overweight population to be targeted with messages and media aimed at moving Americans toward more healthy weights. Cluster analysis was used to identify segments of consumers based on both food and lifestyle behaviors related to unhealthy weights. Drawing from Social Learning Theory, the Health Belief Model, and existing market segmentation literature, the study identified five distinct, recognizable market segments based on knowledge and behavioral and environmental factors. Implications for social marketing campaigns designed to move Americans toward more healthy weights were explored. The five clusters identified were: Highest Risk (19%); At Risk (22%); Right Behavior/Wrong Results (33%); Getting Best Results (13%); and Doing OK (12%). Ninety-nine percent of those in the Highest Risk cluster were overweight; members watched the most television and exercised the least. Fifty-five percent of those in the At Risk cluster were overweight; members logged the most computer time and almost half rarely or never read food labels. Sixty-six percent of those in the Right Behavior/Wrong Results cluster were overweight; however, 95% of them were familiar with the food pyramid. Members reported eating a low percentage of fast food meals (8%) compared to other groups but a higher percentage of other restaurant meals (15%). Less than six percent of those in the Getting Best Results cluster were overweight; every member read food labels and 75% of members' meals were "made from scratch." Eighteen percent of those in the Doing OK cluster were overweight; members watched the least television and reported eating 78% of their meals "made from scratch." 
This study demonstrated that five distinct market segments can be identified for social marketing efforts aimed at addressing the obesity epidemic. Through the identification of these five segments, social marketing campaigns can utilize selected channels and messages that communicate the most relevant and important information. The results of this study offer insight into how segmentation strategies and social marketing messages may improve public health.

  12. Interposition of a reversed jejunal segment enhances intestinal adaptation in short bowel syndrome: an experimental study on pigs.

    PubMed

    Digalakis, Michail; Papamichail, Michail; Glava, Chryssoula; Grammatoglou, Xanthippi; Sergentanis, Theodoros N; Papalois, Apostolos; Bramis, John

    2011-12-01

    Interposition of a reversed intestinal segment as a factor facilitating intestinal adaptation has been experimentally investigated. Controversy exists about its efficacy in terms of body weight improvement, direction of luminal changes, and underlying mechanisms. This study aims to provide a comprehensive approach. The pigs were randomly allocated to two groups: (1) short bowel (SB) group (n=8) and (2) short bowel reverse jejunal segment (SB-RS) group (n=8). On postoperative d 3, 30, and 60, intestinal transit time was measured; body weight and serum albumin were measured on baseline, as well as on postoperative d 30 and 60. After sacrifice, histopathologic and immunohistochemical (PCNA, activated caspase-3) evaluation followed. Transit time was numerically longer in SB-RS group at all time points; the difference reached statistical significance on d 60. No statistically significant differences were observed concerning body weight or serum albumin. In the SB-RS group, a statistically significant increase in muscle thickness, crypt depth, villus height, and PCNA immunostaining, and a decrease in caspase-3 positive (+) cell count were documented both at the jejunal and ileal level. The reversed jejunal segment seemed able to enhance intestinal adaptation at a histopathologic level, as well as to favorably modify transit time. These putatively beneficial actions were not reflected upon body weight. The decrease in apoptosis was caspase-3-dependent. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  13. Low-thrust trajectory optimization of asteroid sample return mission with multiple revolutions and moon gravity assists

    NASA Astrophysics Data System (ADS)

    Tang, Gao; Jiang, FanHuag; Li, JunFeng

    2015-11-01

    Near-Earth asteroids have attracted substantial interest, and developments in low-thrust propulsion technology make complex deep-space exploration missions possible. A mission that departs from low-Earth orbit, uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid, and brings a sample back is investigated. By dividing the mission into five segments, the complex problem is solved piecewise, with different methods used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumed in escaping from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching moments and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques, including both direct and indirect methods, are investigated to optimize trajectories for the different segments, and they can easily be extended to other missions and more precise dynamic models.

  14. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
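
    A minimal real-valued genetic algorithm of the kind the method builds on might look as follows; the toy "energy" stands in for an image-based segmentation cost and is not the paper's fitness function:

```python
import random

def genetic_search(fitness, dim, pop_size=40, gens=60, sigma=0.2, seed=1):
    """Minimal real-valued genetic algorithm of the kind used to evolve
    deformable-model parameters: tournament selection, blend crossover,
    Gaussian mutation, and elitism. A sketch, not the paper's method."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:4]  # elitism: carry the best models over unchanged
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            child = [(x + y) / 2 + rng.gauss(0, sigma) for x, y in zip(a, b)]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy "energy": distance of shape coefficients from a target pose.
target = [0.5, -1.0, 0.25]
energy = lambda v: sum((x - t) ** 2 for x, t in zip(v, target))
best = genetic_search(energy, dim=3)
print(round(energy(best), 3))
```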

  15. Image segmentation using local shape and gray-level appearance models

    NASA Astrophysics Data System (ADS)

    Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul

    2006-03-01

    A new generic model-based segmentation scheme is presented, which can be trained from examples akin to the Active Shape Model (ASM) approach in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. Because in the ASM approach the intensity and shape models are typically applied alternately during optimizing as first an optimal target location is selected for each landmark separately based on local gray-level appearance information only to which the shape model is fitted subsequently, the ASM may be misled in case of wrongly selected landmark locations. Instead, the proposed approach optimizes for shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points extracted from feature images is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming which allows to find the optimal landmark positions using combined shape and intensity information, without the need for initialization.

  16. An objective approach to determining the weight ranges of prey preferred by and accessible to the five large African carnivores.

    PubMed

    Clements, Hayley S; Tambling, Craig J; Hayward, Matt W; Kerley, Graham I H

    2014-01-01

    Broad-scale models describing predator-prey preferences serve as useful departure points for understanding predator-prey interactions at finer scales. Previous analyses used a subjective approach to identify the prey weight preferences of the five large African carnivores, so their accuracy is questionable. This study uses a segmented model of prey weight versus prey preference to objectively quantify the prey weight preferences of the five large African carnivores. Based on simulations of known predator-prey preference, for prey species sample sizes above 32 the segmented model approach detects up to four known changes in prey weight preference (represented by model break-points) with high rates of detection (75% to 100% of simulations, depending on the number of break-points) and accuracy (within 1.3±4.0 to 2.7±4.4 of the known break-point). When applied to the five large African carnivores, using carnivore diet information from across Africa, the model detected weight ranges of prey that are preferred, killed relative to their abundance, and avoided by each carnivore. Prey in the weight ranges preferred and killed relative to their abundance are together termed "accessible prey". Accessible prey weight ranges were found to be 14-135 kg for cheetah Acinonyx jubatus, 1-45 kg for leopard Panthera pardus, 32-632 kg for lion Panthera leo, 15-1600 kg for spotted hyaena Crocuta crocuta and 10-289 kg for wild dog Lycaon pictus. An assessment of carnivore diets throughout Africa found these accessible prey weight ranges include 88±2% (cheetah), 82±3% (leopard), 81±2% (lion), 97±2% (spotted hyaena) and 96±2% (wild dog) of kills. These descriptions of prey weight preferences therefore contribute to our understanding of the diet spectrum of the five large African carnivores. Where datasets meet the minimum sample size requirements, the segmented model approach provides a means of determining, and comparing, the prey weight range preferences of any carnivore species.
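
    The break-point detection at the heart of the segmented model can be sketched as a grid search over a single break-point with two least-squares line fits; the data below are hypothetical, and the published analysis fits multiple break-points:

```python
def one_breakpoint_fit(xs, ys):
    """Fit a one-break-point segmented (piecewise-linear) model by grid
    search over candidate break-points, in the spirit of the
    prey-weight-preference analysis. Returns the break-point x value
    minimising the total squared error of the two line fits."""
    def sse(px, py):
        # sum of squared errors of an ordinary least-squares line fit
        n = len(px)
        if n < 2:
            return 0.0
        mx, my = sum(px) / n, sum(py) / n
        sxx = sum((x - mx) ** 2 for x in px)
        sxy = sum((x - mx) * (y - my) for x, y in zip(px, py))
        b = sxy / sxx if sxx else 0.0
        a = my - b * mx
        return sum((y - (a + b * x)) ** 2 for x, y in zip(px, py))

    best_bp, best_err = None, float("inf")
    for i in range(2, len(xs) - 2):  # candidate break before point i
        err = sse(xs[:i], ys[:i]) + sse(xs[i:], ys[i:])
        if err < best_err:
            best_bp, best_err = xs[i], err
    return best_bp

# Hypothetical data: preference rises to a peak, then falls (x could be
# log prey weight, y a preference index).
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8]
ys = [0, 1, 2, 3, 4, 3, 2, 1, 0]
print(one_breakpoint_fit(xs, ys))  # 4 (the kink at the preference peak)
```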

  17. Using on-site liver 3-D reconstruction and volumetric calculations in split liver transplantation.

    PubMed

    Reichman, Trevor W; Fiorello, Brittany; Carmody, Ian; Bohorquez, Humberto; Cohen, Ari; Seal, John; Bruce, David; Loss, George E

    2016-12-01

    Split liver transplantation increases the number of grafts available for transplantation. Pre-recovery assessment of liver graft volume is essential for selecting suitable recipients. The purpose of this study was to determine the ability and feasibility of constructing a 3-D model to aid in surgical planning and to predict graft weight prior to an in situ division of the donor liver. Over 11 months, 3-D volumetric reconstruction of 4 deceased donors was performed using Pathfinder Scout© liver volumetric software. Demographic, laboratory, operative, perioperative and survival data for these patients along with donor demographic data were collected prospectively and analyzed retrospectively. The average predicted weights of the grafts from the adult donors obtained from an in situ split procedure were 1130 g (930-1458 g) for the extended right lobe grafts and 312 g (222-396 g) for the left lateral segment grafts. Actual adult graft weight was 92% of the predicted weight for both the extended right grafts and the left lateral segment grafts. The predicted and actual graft weights for the pediatric donors were 176 g and 210 g for the left lateral segment grafts and 308 g and 280 g for the extended right lobe grafts, respectively. All grafts were transplanted except for the right lobe from the pediatric donors due to the small graft weight. On-site volumetric assessment of donors provides useful information for the planning of an in situ split and for selection of recipients. This information may expand the donor pool to recipients previously felt to be unsuitable due to donor and/or recipient weight.

  18. A model-based approach for estimation of changes in lumbar segmental kinematics associated with alterations in trunk muscle forces.

    PubMed

    Shojaei, Iman; Arjmand, Navid; Meakin, Judith R; Bazrgari, Babak

    2018-03-21

    The kinematics information from imaging, if combined with optimization-based biomechanical models, may provide a unique platform for personalized assessment of trunk muscle forces (TMFs). Such a method, however, is feasible only if differences in lumbar spine kinematics due to differences in TMFs can be captured by the current imaging techniques. A finite element model of the spine within an optimization procedure was used to estimate segmental kinematics of lumbar spine associated with five different sets of TMFs. Each set of TMFs was associated with a hypothetical trunk neuromuscular strategy that optimized one aspect of lower back biomechanics. For each set of TMFs, the segmental kinematics of lumbar spine was estimated for a single static trunk flexed posture involving, respectively, 40° and 10° of thoracic and pelvic rotations. Minimum changes in the angular and translational deformations of a motion segment with alterations in TMFs ranged from 0° to 0.7° and 0 mm to 0.04 mm, respectively. Maximum changes in the angular and translational deformations of a motion segment with alterations in TMFs ranged from 2.4° to 7.6° and 0.11 mm to 0.39 mm, respectively. The differences in kinematics of lumbar segments between each combination of two sets of TMFs in 97% of cases for angular deformation and 55% of cases for translational deformation were within the reported accuracy of current imaging techniques. Therefore, it might be possible to use image-based kinematics of lumbar segments along with computational modeling for personalized assessment of TMFs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.

    2017-02-01

    Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control the outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed, which introduces a spatial prior to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved a Pearson correlation of 0.9713 with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
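
    The SIMPLE-style iterative atlas selection can be sketched as follows, assuming majority-vote fusion and a fixed Dice drop-out threshold (both simplifications of the published method):

```python
def dice(a, b):
    """Dice overlap of two binary masks given as sets of voxel indices."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

def majority_fuse(masks):
    """Majority-vote label fusion of propagated atlas masks."""
    votes = {}
    for m in masks:
        for v in m:
            votes[v] = votes.get(v, 0) + 1
    return {v for v, c in votes.items() if c > len(masks) / 2}

def simple_select(masks, drop_below=0.7, max_iter=10):
    """SIMPLE-style iterative atlas selection (a sketch, not the authors'
    implementation): fuse the current atlas set, score each atlas by
    Dice against the fused estimate, drop poor performers, repeat."""
    selected = list(masks)
    for _ in range(max_iter):
        fused = majority_fuse(selected)
        keep = [m for m in selected if dice(m, fused) >= drop_below]
        if len(keep) == len(selected) or len(keep) < 2:
            break
        selected = keep
    return majority_fuse(selected)

# Three agreeing atlases and one outlier (voxels as integer indices):
good = [set(range(0, 100)), set(range(2, 102)), set(range(1, 99))]
outlier = set(range(200, 260))
fused = simple_select(good + [outlier])
print(len(fused & set(range(0, 100))) > 90)  # True: outlier rejected
```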

  20. Multi-atlas Segmentation Enables Robust Multi-contrast MRI Spleen Segmentation for Splenomegaly.

    PubMed

    Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L; Assad, Albert; Abramson, Richard G; Landman, Bennett A

    2017-02-11

    Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control the outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed, which introduces a spatial prior to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved a Pearson correlation of 0.9713 with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.

  1. Image segmentation using fuzzy LVQ clustering networks

    NASA Technical Reports Server (NTRS)

    Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.

    1992-01-01

    In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
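
    The FCM ingredient of the network is the fuzzy membership update u_ik = 1 / Σ_j (d_ik/d_jk)^(2/(m-1)); a minimal sketch on hypothetical pixel features:

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-Means membership update, the FCM ingredient of the fuzzy
    LVQ network: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    points and centers are tuples of feature values."""
    def d2(p, c):
        # squared Euclidean distance, floored to avoid division by zero
        return sum((a - b) ** 2 for a, b in zip(p, c)) or 1e-12
    u = []
    for p in points:
        dists = [d2(p, c) for c in centers]
        # with squared distances, the exponent 2/(m-1) becomes 1/(m-1)
        row = [1.0 / sum((dk / dj) ** (1.0 / (m - 1)) for dj in dists)
               for dk in dists]
        u.append(row)
    return u

# One pixel feature vector near each of two cluster centers:
centers = [(0.0,), (1.0,)]
u = fcm_memberships([(0.1,), (0.9,)], centers)
print(round(u[0][0], 2), round(u[1][1], 2))  # 0.99 0.99
```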

  2. Computational wing optimization and comparisons with experiment for a semi-span wing model

    NASA Technical Reports Server (NTRS)

    Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.

    1978-01-01

    A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14 foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had as good or better lift to drag ratios at the design points as the best designs previously tested during an extensive parametric study.

  3. Automated segmentation of multifocal basal ganglia T2*-weighted MRI hypointensities

    PubMed Central

    Glatz, Andreas; Bastin, Mark E.; Kiker, Alexander J.; Deary, Ian J.; Wardlaw, Joanna M.; Valdés Hernández, Maria C.

    2015-01-01

    Multifocal basal ganglia T2*-weighted (T2*w) hypointensities, which are believed to arise mainly from vascular mineralization, were recently proposed as a novel MRI biomarker for small vessel disease and ageing. These T2*w hypointensities are typically segmented semi-automatically, which is time consuming, associated with a high intra-rater variability and low inter-rater agreement. To address these limitations, we developed a fully automated, unsupervised segmentation method for basal ganglia T2*w hypointensities. This method requires conventional, co-registered T2*w and T1-weighted (T1w) volumes, as well as region-of-interest (ROI) masks for the basal ganglia and adjacent internal capsule generated automatically from T1w MRI. The basal ganglia T2*w hypointensities were then segmented with thresholds derived with an adaptive outlier detection method from respective bivariate T2*w/T1w intensity distributions in each ROI. Artefacts were reduced by filtering connected components in the initial masks based on their standardised T2*w intensity variance. The segmentation method was validated using a custom-built phantom containing mineral deposit models, i.e. gel beads doped with 3 different contrast agents in 7 different concentrations, as well as with MRI data from 98 community-dwelling older subjects in their seventies with a wide range of basal ganglia T2*w hypointensities. The method produced basal ganglia T2*w hypointensity masks that were in substantial volumetric and spatial agreement with those generated by an experienced rater (Jaccard index = 0.62 ± 0.40). These promising results suggest that this method may have use in automatic segmentation of basal ganglia T2*w hypointensities in studies of small vessel disease and ageing. PMID:25451469
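
    The adaptive outlier-detection thresholding can be sketched in one dimension with a median/MAD rule; the published method uses a bivariate T2*w/T1w criterion per ROI, so this conveys only the general idea:

```python
def hypointensity_mask(t2s, roi, n_mad=3.0):
    """Flag ROI voxels whose T2*w intensity is an extreme low outlier,
    using a median - n*MAD threshold. A 1-D sketch of the adaptive
    outlier-detection idea; the paper derives thresholds from bivariate
    T2*w/T1w intensity distributions per ROI.
    t2s maps voxel index -> intensity; roi is a set of voxel indices."""
    vals = sorted(t2s[v] for v in roi)
    med = vals[len(vals) // 2]
    mad = sorted(abs(x - med) for x in vals)[len(vals) // 2]
    thr = med - n_mad * max(mad, 1e-9)
    return {v for v in roi if t2s[v] < thr}

# 97 normal voxels near intensity 100 and 3 hypointense (mineralised) ones:
t2s = {i: 100 + (i % 5) for i in range(97)}
t2s.update({97: 20, 98: 25, 99: 30})
mask = hypointensity_mask(t2s, set(range(100)))
print(sorted(mask))  # [97, 98, 99]
```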

  4. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing curved planar reformation (CPR) technique combined with optimal path finding method (MHES-CROP). The MHES segmented vessels straightened in the CPR volume was refined using adaptive gray level thresholding where the local threshold was obtained from least-square estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. 
Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
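The final tracing step above relies on Dijkstra's algorithm over a cost image. As a minimal, hypothetical sketch (the toy 3x3 cost grid and 4-connected neighborhood are illustrative assumptions, not the paper's actual CPR geometry), a low-cost path through a 2D grid can be traced as:

```python
import heapq

def dijkstra_path(cost, start, goal):
    """Lowest-cost 4-connected path on a 2D grid; cost is paid on entering a cell."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Bright vessel centerline = low cost down the middle column.
grid = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
path, total = dijkstra_path(grid, (0, 1), (2, 1))  # total cost 3
```

In a vessel-tracing setting the cost would typically be derived from inverted vessel-enhancement responses, so the cheapest path follows the vessel of interest.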

  5. Pass the Popcorn: “Obesogenic” Behaviors and Stigma in Children’s Movies

    PubMed Central

    Throop, Elizabeth M.; Skinner, Asheley Cockrell; Perrin, Andrew J.; Steiner, Michael J.; Odulana, Adebowale; Perrin, Eliana M.

    2014-01-01

Objective To determine the prevalence of obesity-related behaviors and attitudes in children’s movies. Design and Methods We performed a mixed-methods study of the top-grossing G- and PG-rated movies, 2006–2010 (4 per year). For each 10-minute movie segment the following were assessed: 1) prevalence of key nutrition and physical activity behaviors corresponding to the American Academy of Pediatrics obesity prevention recommendations for families; 2) prevalence of weight stigma; 3) assessment as healthy, unhealthy, or neutral; 4) free-text interpretations of stigma. Results Agreement between coders was greater than 85% (Cohen’s kappa=0.7), good for binary responses. Segments with food depicted: exaggerated portion size (26%); unhealthy snacks (51%); sugar-sweetened beverages (19%). Screen time was also prevalent (40% of movies showed television; 35% computer; 20% video games). Unhealthy segments outnumbered healthy segments 2:1. Most (70%) of the movies included weight-related stigmatizing content (e.g. “That fat butt! Flabby arms! And this ridiculous belly!”). Conclusions These popular children’s movies had significant “obesogenic” content, and most contained weight-based stigma. They present a mixed message to children: promoting unhealthy behaviors while stigmatizing the behaviors’ possible effects. Further research is needed to determine the effects of such messages on children. PMID:24311390
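The inter-coder agreement reported above (Cohen's kappa for binary codes) can be computed directly from the two coders' segment labels. A minimal sketch with made-up ratings:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary codes (1 = behavior present)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa = sum(a) / n                              # rater A's rate of 1s
    pb = sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)

coder_a = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # hypothetical segment codes
coder_b = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(coder_a, coder_b)     # 80% raw agreement -> kappa 0.6
```

Kappa discounts the agreement two coders would reach by chance, which is why 80% raw agreement here yields a kappa of only 0.6.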

  6. A validation framework for brain tumor segmentation.

    PubMed

    Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K

    2007-10-01

We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentation of brain tumors performed by four independent experts, 3) segmentation of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existing segmentations. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentations that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.
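A performance-estimation tool of the kind described above typically relies on overlap measures between expert and algorithm masks. An illustrative sketch using the Dice coefficient, one common choice (the framework's exact metrics are not specified in the abstract):

```python
def dice(a, b):
    """Dice similarity between two voxel sets (expert vs. algorithm masks)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

expert = {(0, 0), (0, 1), (1, 0), (1, 1)}   # toy 2D "tumor" voxels
auto   = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(expert, auto)                  # 2*3 / (4+4) = 0.75
```

With four independent expert segmentations, such a score would be computed against each expert (or against a consensus such as STAPLE) to characterize both algorithm accuracy and inter-expert variability.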

  7. Reconstructing liver shape and position from MR image slices using an active shape model

    NASA Astrophysics Data System (ADS)

    Fenchel, Matthias; Thesen, Stefan; Schilling, Andreas

    2008-03-01

We present an algorithm for fully automatic reconstruction of 3D position, orientation and shape of the human liver from a sparsely covering set of n 2D MR slice images. Reconstructing the shape of an organ from slice images can be used for scan planning, surgical planning or other purposes where 3D anatomical knowledge has to be inferred from sparse slices. The algorithm is based on adapting an active shape model of the liver surface to a given set of slice images. The active shape model is created from a training set of liver segmentations from a group of volunteers. The training set is set up with semi-manual segmentations of T1-weighted volumetric MR images. Searching for the optimal shape model that best fits the image data is done by maximizing a similarity measure based on local appearance at the surface. Two different algorithms for the active shape model search are proposed and compared: both algorithms seek to maximize the a-posteriori probability of the grey level appearance around the surface while constraining the surface to the space of valid shapes. The first algorithm works by using grey value profile statistics in the normal direction. The second algorithm uses average and variance images to calculate the local surface appearance on the fly. Both algorithms are validated by fitting the active shape model to abdominal 2D slice images and comparing the reconstructed shapes to the manual segmentations and to the results of active shape model searches from 3D image data. The results turn out to be promising and competitive with active shape model segmentations from 3D data.

  8. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  9. Semi-automated segmentation of a glioblastoma multiforme on brain MR images for radiotherapy planning.

    PubMed

    Hori, Daisuke; Katsuragawa, Shigehiko; Murakami, Ryuuji; Hirai, Toshinori

    2010-04-20

We propose a computerized method for semi-automated segmentation of the gross tumor volume (GTV) of a glioblastoma multiforme (GBM) on brain MR images for radiotherapy planning (RTP). Three-dimensional (3D) MR images of 28 cases with a GBM were used in this study. First, a sphere volume of interest (VOI) including the GBM was selected by clicking a part of the GBM region in the 3D image. Then, the sphere VOI was transformed to a two-dimensional (2D) image by use of a spiral-scanning technique. We employed active contour models (ACM) to delineate an optimal outline of the GBM in the transformed 2D image. After inverse transform of the optimal outline to the 3D space, a morphological filter was applied to smooth the shape of the 3D segmented region. For evaluation of our computerized method, we compared the computer output with manually segmented regions, which were obtained by a therapeutic radiologist using a manual tracking method. In evaluating our segmentation method, we employed the Jaccard similarity coefficient (JSC) and the true segmentation coefficient (TSC) in volumes between the computer output and the manually segmented region. The mean and standard deviation of JSC and TSC were 74.2±9.8% and 84.1±7.1%, respectively. Our segmentation method provided a relatively accurate outline for the GBM and would be useful for radiotherapy planning.
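The Jaccard similarity coefficient (JSC) used above compares the computer output and the manual region as voxel sets. A minimal sketch with hypothetical voxel IDs (the TSC is omitted here, as its exact definition is not given in the abstract):

```python
def jaccard(a, b):
    """Jaccard similarity coefficient between two voxel sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

manual   = set(range(1, 9))      # hypothetical voxel IDs in the manual GTV
computer = set(range(3, 11))     # hypothetical voxel IDs in the computer output
jsc = jaccard(manual, computer)  # overlap 6 of 10 -> 0.6
```

In practice the sets would be the voxel coordinates of the two binary 3D masks rather than integer IDs.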

10. [Object-oriented segmentation and classification of forest gap based on QuickBird remote sensing image].

    PubMed

    Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu

    2018-01-01

Traditional field investigation and artificial interpretation could not satisfy the need for forest gap extraction at the regional scale. High spatial resolution remote sensing imagery provides the possibility for regional forest gap extraction. In this study, we used an object-oriented classification method to segment and classify forest gaps based on a QuickBird high resolution optical remote sensing image of the Jiangle National Forestry Farm in Fujian Province. In the process of object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird remote sensing image; the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40, where RA_or was equal to RA_os. The accuracy difference between the maximum and minimum at different segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on the SVM classifier. Combining high resolution remote sensing image data with the object-oriented classification method could replace the traditional field investigation and artificial interpretation method to identify and classify forest gaps at the regional scale.
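The optimal segmentation scale above is the one where the intersection-area measures of the reference and segmented objects are equal. A minimal sketch of that crossing-point search, with entirely hypothetical per-scale values (one curve falls as segments grow, the other rises):

```python
# Hypothetical agreement values at scales 10..100; the optimal scale is
# where the two curves meet (smallest absolute difference).
scales = list(range(10, 110, 10))
ra_ref = [0.95, 0.90, 0.85, 0.78, 0.70, 0.63, 0.55, 0.48, 0.40, 0.33]
ra_seg = [0.30, 0.42, 0.55, 0.78, 0.84, 0.88, 0.91, 0.93, 0.95, 0.96]

best_scale = min(
    scales,
    key=lambda s: abs(ra_ref[scales.index(s)] - ra_seg[scales.index(s)]),
)
```

With these toy numbers the curves cross exactly at scale 40, mirroring the study's reported optimum.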

  11. Classification of Alzheimer's disease patients with hippocampal shape wrapper-based feature selection and support vector machine

    NASA Astrophysics Data System (ADS)

    Young, Jonathan; Ridgway, Gerard; Leung, Kelvin; Ourselin, Sebastien

    2012-02-01

It is well known that hippocampal atrophy is a marker of the onset of Alzheimer's disease (AD) and as a result hippocampal volumetry has been used in a number of studies to provide early diagnosis of AD and predict conversion of mild cognitive impairment patients to AD. However, rates of atrophy are not uniform across the hippocampus, making shape analysis a potentially more accurate biomarker. This study examines the hippocampi from 226 healthy controls, 148 AD patients and 330 MCI patients obtained from T1-weighted structural MRI images from the ADNI database. The hippocampi are anatomically segmented using the MAPS multi-atlas segmentation method, and the resulting binary images are then processed with SPHARM software to decompose their shapes as a weighted sum of spherical harmonic basis functions. The resulting parameterizations are then used as feature vectors in Support Vector Machine (SVM) classification. A wrapper-based feature selection method was used as this considers the utility of features in discriminating classes in combination, fully exploiting the multivariate nature of the data and optimizing the selected set of features for the type of classifier that is used. The leave-one-out cross validated accuracy obtained on training data is 88.6% for classifying AD vs controls and 74% for classifying MCI-converters vs MCI-stable with very compact feature sets, showing that this is a highly promising method. There is currently a considerable fall in accuracy on unseen data, indicating that the feature selection is sensitive to the data used; however, feature ensemble methods may overcome this.
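Wrapper-based feature selection scores candidate feature subsets by the cross-validated accuracy of the classifier itself, rather than by per-feature statistics. A minimal, hypothetical sketch using greedy forward selection with leave-one-out validation and a nearest-centroid stand-in classifier (the study itself used an SVM on SPHARM coefficients):

```python
def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier on chosen features."""
    correct = 0
    for i in range(len(X)):
        train = [(x, lab) for j, (x, lab) in enumerate(zip(X, y)) if j != i]
        centroids = {}
        for lab in set(y):
            rows = [x for x, l in train if l == lab]
            centroids[lab] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        def sqdist(x, c):
            return sum((x[f] - cf) ** 2 for f, cf in zip(feats, c))
        pred = min(centroids, key=lambda lab: sqdist(X[i], centroids[lab]))
        correct += pred == y[i]
    return correct / len(X)

def forward_select(X, y, n_feats):
    """Greedy wrapper: add the feature that most improves LOO accuracy; stop when none helps."""
    selected, remaining, best_acc = [], list(range(n_feats)), 0.0
    while remaining:
        cand = max(remaining, key=lambda f: loo_accuracy(X, y, selected + [f]))
        acc = loo_accuracy(X, y, selected + [cand])
        if acc <= best_acc:
            break
        selected.append(cand)
        remaining.remove(cand)
        best_acc = acc
    return selected, best_acc

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5], [0.1, 1], [0.2, 4], [1.0, 2], [1.1, 5], [0.9, 0]]
y = [0, 0, 0, 1, 1, 1]
selected, acc = forward_select(X, y, n_feats=2)   # keeps only feature 0
```

Because the wrapper evaluates subsets through the classifier, it naturally discards the noise feature here, which is also why (as the abstract notes) such selection can overfit the particular data used.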

  12. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on globally optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  13. Bio-Inspired Sensing and Display of Polarization Imagery

    DTIC Science & Technology

    2005-07-17

and weighting coefficients in this example. Panel 4D clearly shows a better visibility, feature extraction, and lesser effect from the background...of linear polarization. Panel E represents the segmentation of the degree of linear polarization, and then Panel F shows the extracted segment with...polarization, and Panel F shows the segment extraction with the fingerprint selected. Panel G illustrates the application of Canny edge detection to

  14. In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.

    2009-04-01

    Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. 
The cardiac output derived from this automatic segmentation was validated quantitatively by comparing it with the CO values measured from the volume flow in the pulmonary artery. Relative bias varied between 0 and -17%, where the nominal accuracy of the flow meter is in the order of 10%. Assuming the CO measurements from the flow probe as a gold standard, excellent correlation (r = 0.99) was observed with the CO estimates obtained from image segmentation.

  15. Automated segmentation of the actively stained mouse brain using multi-spectral MR microscopy.

    PubMed

    Sharief, Anjum A; Badea, Alexandra; Dale, Anders M; Johnson, G Allan

    2008-01-01

Magnetic resonance microscopy (MRM) has created new approaches for high-throughput morphological phenotyping of mouse models of diseases. Transgenic and knockout mice serve as a test bed for validating hypotheses that link genotype to the phenotype of diseases, as well as developing and tracking treatments. We describe here a Markov random fields based segmentation of the actively stained mouse brain, as a prerequisite for morphological phenotyping. Active staining achieves a higher signal-to-noise ratio (SNR), thereby enabling higher resolution imaging per unit time than obtained in previous formalin-fixed mouse brain studies. The segmentation algorithm was trained on isotropic 43-μm T1- and T2-weighted MRM images. The mouse brain was segmented into 33 structures, including the hippocampus, amygdala, hypothalamus, thalamus, as well as fiber tracts and ventricles. Probabilistic information used in the segmentation consisted of (a) intensity distributions in the T1- and T2-weighted data, (b) location, and (c) contextual priors for incorporating spatial information. Validation using standard morphometric indices showed excellent consistency between automatically and manually segmented data. The algorithm has been tested on the widely used C57BL/6J strain, as well as on a selection of six recombinant inbred BXD strains, chosen especially for their largely variant hippocampus.

  16. Optimal graph based segmentation using flow lines with application to airway wall segmentation.

    PubMed

    Petersen, Jens; Nielsen, Mads; Lo, Pechin; Saghir, Zaigham; Dirksen, Asger; de Bruijne, Marleen

    2011-01-01

    This paper introduces a novel optimal graph construction method that is applicable to multi-dimensional, multi-surface segmentation problems. Such problems are often solved by refining an initial coarse surface within the space given by graph columns. Conventional columns are not well suited for surfaces with high curvature or complex shapes but the proposed columns, based on properly generated flow lines, which are non-intersecting, guarantee solutions that do not self-intersect and are better able to handle such surfaces. The method is applied to segment human airway walls in computed tomography images. Comparison with manual annotations on 649 cross-sectional images from 15 different subjects shows significantly smaller contour distances and larger area of overlap than are obtained with recently published graph based methods. Airway abnormality measurements obtained with the method on 480 scan pairs from a lung cancer screening trial are reproducible and correlate significantly with lung function.

  17. HIPS: A new hippocampus subfield segmentation method.

    PubMed

    Romero, José E; Coupé, Pierrick; Manjón, José V

    2017-12-01

    The importance of the hippocampus in the study of several neurodegenerative diseases such as Alzheimer's disease makes it a structure of great interest in neuroimaging. However, few segmentation methods have been proposed to measure its subfields due to its complex structure and the lack of high resolution magnetic resonance (MR) data. In this work, we present a new pipeline for automatic hippocampus subfield segmentation using two available hippocampus subfield delineation protocols that can work with both high and standard resolution data. The proposed method is based on multi-atlas label fusion technology that benefits from a novel multi-contrast patch match search process (using high resolution T1-weighted and T2-weighted images). The proposed method also includes as post-processing a new neural network-based error correction step to minimize systematic segmentation errors. The method has been evaluated on both high and standard resolution images and compared to other state-of-the-art methods showing better results in terms of accuracy and execution time. Copyright © 2017 Elsevier Inc. All rights reserved.
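The multi-atlas label fusion that this pipeline builds on combines candidate labels propagated from several atlases at each voxel. A minimal majority-vote sketch with made-up label maps (the paper's patch-match search and neural-network error correction are not shown):

```python
from collections import Counter

def majority_vote(labelmaps):
    """Voxel-wise majority vote over label maps propagated from several atlases."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*labelmaps)]

atlas_labels = [          # toy 4-voxel label maps (0 = background; 1, 2 = subfields)
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 2],
]
fused = majority_vote(atlas_labels)   # [0, 1, 1, 2]
```

Patch-based methods replace the flat vote with weights derived from local patch similarity between the target image and each atlas, so better-matching atlases contribute more at each voxel.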

  18. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey level distribution of the local image. The means in this objective function contain a multiplicative factor that estimates the bias field in the transformed domain. In this way, the bias field prior is fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
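The local-entropy weight above is the Shannon entropy of the grey-level distribution in a local window: homogeneous regions get low weight, textured ones high. A minimal sketch with toy patches (window shape and grey-level quantization are illustrative assumptions):

```python
import math

def local_entropy(window):
    """Shannon entropy (bits) of the grey-level histogram of a local window."""
    n = len(window)
    counts = {}
    for g in window:
        counts[g] = counts.get(g, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat     = [7, 7, 7, 7]    # homogeneous patch -> entropy 0, low weight
textured = [1, 2, 3, 4]    # four distinct grey levels -> 2 bits, high weight
```

In the full model this scalar would be computed in a sliding window over the image and used to weight the LGDF energy at each pixel.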

  19. Dual-energy-based metal segmentation for metal artifact reduction in dental computed tomography.

    PubMed

    Hegazy, Mohamed A A; Eldib, Mohamed Elsayed; Hernandez, Daniel; Cho, Myung Hye; Cho, Min Hyoung; Lee, Soo Yeol

    2018-02-01

In a dental CT scan, the presence of dental fillings or dental implants generates severe metal artifacts that often compromise readability of the CT images. Many metal artifact reduction (MAR) techniques have been introduced, but dental CT scans still suffer from severe metal artifacts particularly when multiple dental fillings or implants exist around the region of interest. The high attenuation coefficient of teeth often causes erroneous metal segmentation, compromising the MAR performance. We propose a metal segmentation method for a dental CT that is based on dual-energy imaging with a narrow energy gap. Unlike a conventional dual-energy CT, we acquire two projection data sets at two close tube voltages (80 and 90 kVp), and then, we compute the difference image between the two projection images with an optimized weighting factor so as to maximize the contrast of the metal regions. We reconstruct CT images from the weighted difference image to identify the metal region with global thresholding. We forward project the identified metal region to designate metal trace on the projection image. We substitute the pixel values on the metal trace with the ones computed by the region filling method. The region filling in the metal trace removes high-intensity data made by the metallic objects from the projection image. We reconstruct final CT images from the region-filled projection image with the fusion-based approach. We have done imaging experiments on a dental phantom and a human skull phantom using a lab-built micro-CT and a commercial dental CT system. We have corrected the projection images of a dental phantom and a human skull phantom using the single-energy and dual-energy-based metal segmentation methods. The single-energy-based method often failed in correcting the metal artifacts on the slices on which tooth enamel exists.
The dual-energy-based method showed better MAR performances in all cases regardless of the presence of tooth enamel on the slice of interest. We have compared the MAR performances between both methods in terms of the relative error (REL), the sum of squared difference (SSD) and the normalized absolute difference (NAD). For the dental phantom images corrected by the single-energy-based method, the metric values were 95.3%, 94.5%, and 90.6%, respectively, while they were 90.1%, 90.05%, and 86.4%, respectively, for the images corrected by the dual-energy-based method. For the human skull phantom images, the metric values were improved from 95.6%, 91.5%, and 89.6%, respectively, to 88.2%, 82.5%, and 81.3%, respectively. The proposed dual-energy-based method has shown better performance in metal segmentation leading to better MAR performance in dental imaging. We expect the proposed metal segmentation method can be used to improve the MAR performance of existing MAR techniques that have metal segmentation steps in their correction procedures. © 2017 American Association of Physicists in Medicine.
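The weighting factor for the dual-energy difference image is chosen to maximize the contrast of metal regions. A minimal, hypothetical sketch (toy pixel values, a coarse grid search, and a contrast score normalized by the spread of non-metal pixels; the paper's actual optimization criterion may differ):

```python
from statistics import pstdev

def metal_contrast(w, low, high, metal_idx):
    """Contrast of metal pixels in the weighted difference image d = low - w*high."""
    diff = [l - w * h for l, h in zip(low, high)]
    metal = [diff[i] for i in metal_idx]
    other = [diff[i] for i in range(len(diff)) if i not in metal_idx]
    m = sum(metal) / len(metal)
    o = sum(other) / len(other)
    return abs(m - o) / (pstdev(other) + 1e-6)   # separation vs. non-metal spread

low  = [200, 150, 50]    # toy 80 kVp projection values: metal, bone, soft tissue
high = [120, 140, 47]    # toy 90 kVp projection of the same pixels
w_opt = max([0.5, 0.75, 1.0, 1.25],
            key=lambda w: metal_contrast(w, low, high, {0}))
```

With these toy numbers the weight near 1.0 nearly cancels the bone and soft-tissue signals while leaving a large metal residue, which is the effect the weighted difference exploits before global thresholding.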

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J; Gu, X; Lu, W

Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more value on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, in performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences of DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV expansion voxel distance, lower MADs than those of OS were observed at the closer distances (1 to 8). The DE results showed that the segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 were decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS has shown improved dosimetric accuracy compared with OS.
The WS will provide a dosimetrically important label selection strategy in multi-atlas segmentation. CPRIT grant RP150485.

  1. Automated separation of merged Langerhans islets

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2016-03-01

This paper deals with the separation of merged Langerhans islets in segmentations in order to evaluate a correct histogram of islet diameters. The distribution of islet diameters is useful for determining the feasibility of islet transplantation in diabetes. First, the merged islets in the training segmentations are manually separated by medical experts. Based on the single islets, the merged islets are identified and an SVM classifier is trained on both classes (merged/single islets). The testing segmentations were over-segmented using the watershed transform, and the most probable merging-back of islet fragments was found using the trained SVM classifier. Finally, the optimized segmentation is compared with the ground truth segmentation (correctly separated islets).
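Once merged islets are separated, the diameter histogram can be built from equivalent-circle diameters of the segmented areas. A minimal sketch with made-up pixel areas (the 50-micrometer bin width and unit pixel size are illustrative assumptions):

```python
import math

def equivalent_diameter(area, px_um=1.0):
    """Diameter of the circle whose area equals the segmented islet area."""
    return 2.0 * math.sqrt(area / math.pi) * px_um

def diameter_histogram(areas, bin_um=50, px_um=1.0):
    """Count islets per diameter bin (bin label = lower bin edge in micrometers)."""
    hist = {}
    for a in areas:
        b = int(equivalent_diameter(a, px_um) // bin_um) * bin_um
        hist[b] = hist.get(b, 0) + 1
    return hist

areas = [2000, 8000, 18000, 7800]    # made-up pixel areas of separated islets
hist = diameter_histogram(areas)     # {50: 2, 100: 1, 150: 1}
```

Correct separation matters precisely because two touching islets counted as one would shift mass into the larger-diameter bins and distort the transplantation feasibility estimate.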

  2. Adult-child differences in acoustic cue weighting are influenced by segmental context: Children are not always perceptually biased toward transitions

    NASA Astrophysics Data System (ADS)

    Mayo, Catherine; Turk, Alice

    2004-06-01

It has been proposed that young children may have a perceptual preference for transitional cues [Nittrouer, S. (2002). J. Acoust. Soc. Am. 112, 711-719]. According to this proposal, this preference can manifest itself either as heavier weighting of transitional cues by children than by adults, or as heavier weighting of transitional cues than of other, more static, cues by children. This study tested this hypothesis by examining adults' and children's cue weighting for the contrasts /s/-/ʃ/, /de/-/be/, /ta/-/da/, and /ti/-/di/. Children were found to weight transitions more heavily than did adults for the fricative contrast /s/-/ʃ/, and were found to weight transitional cues more heavily than nontransitional cues for the voice-onset-time contrast /ta/-/da/. However, these two patterns of cue weighting were not found to hold for the contrasts /de/-/be/ and /ti/-/di/. Consistent with several studies in the literature, results suggest that children do not always show a bias towards vowel-formant transitions, but that cue weighting can differ according to segmental context, and possibly the physical distinctiveness of available acoustic cues.

  3. Joint segmentation of lumen and outer wall from femoral artery MR images: Towards 3D imaging measurements of peripheral arterial disease.

    PubMed

    Ukwatta, Eranga; Yuan, Jing; Qiu, Wu; Rajchl, Martin; Chiu, Bernard; Fenster, Aaron

    2015-12-01

Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have been shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a global optimization manner: the spatial consistency between the adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation to the convex relaxed optimization problem. In addition, the CCMF model directly derives an efficient duality-based algorithm based on the modern multiplier augmented optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in terms of computational time by a factor of ≈ 20. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Optimization of the Ussing chamber setup with excised rat intestinal segments for dissolution/permeation experiments of poorly soluble drugs.

    PubMed

    Forner, Kristin; Roos, Carl; Dahlgren, David; Kesisoglou, Filippos; Konerding, Moritz A; Mazur, Johanna; Lennernäs, Hans; Langguth, Peter

    2017-02-01

Prediction of the in vivo absorption of poorly soluble drugs may require simultaneous dissolution/permeation experiments. In vivo predictive media have been modified for permeation experiments with Caco-2 cells, but not for excised rat intestinal segments. The present study aimed at improving the setup of dissolution/permeation experiments with excised rat intestinal segments by assessing suitable donor and receiver media. The regional compatibility of rat intestine in Ussing chambers with modified Fasted and Fed State Simulated Intestinal Fluids (Fa/FeSSIFmod) as donor media was evaluated via several parameters that reflect the viability of the excised intestinal segments. Receiver media that establish sink conditions were investigated for their foaming potential and toxicity. Dissolution/permeation experiments with the optimized conditions were then tested for two particle sizes of the BCS class II drug aprepitant. Fa/FeSSIFmod were toxic for excised rat ileal sheets but not duodenal sheets; the compatibility with jejunal segments depended on the bile salt concentration. A non-foaming receiver medium containing bovine serum albumin (BSA) and Antifoam B was nontoxic. With these conditions, the permeation of nanosized aprepitant was higher than that of the unmilled drug formulations. The compatibility of Fa/FeSSIFmod depends on the excised intestinal region. The chosen conditions enable dissolution/permeation experiments with excised rat duodenal segments. The experiments correctly predicted the superior permeation of nanosized over unmilled aprepitant that is observed in vivo. The optimized setup uses FaSSIFmod as donor medium, excised rat duodenal sheets as permeation membrane and a receiver medium containing BSA and Antifoam B.

  5. Fast CSF MRI for brain segmentation; Cross-validation by comparison with 3D T1-based brain segmentation methods

    PubMed Central

    de Bresser, Jeroen; Hendrikse, Jeroen; Siero, Jeroen C. W.; Petersen, Esben T.; De Vis, Jill B.

    2018-01-01

    Objective In previous work we have developed a fast sequence that focuses on cerebrospinal fluid (CSF) based on the long T2 of CSF. By processing the data obtained with this CSF MRI sequence, brain parenchymal volume (BPV) and intracranial volume (ICV) can be automatically obtained. The aim of this study was to assess the precision of the BPV and ICV measurements of the CSF MRI sequence and to validate the CSF MRI sequence by comparison with 3D T1-based brain segmentation methods. Materials and methods Ten healthy volunteers (2 females; median age 28 years) were scanned (3T MRI) twice with repositioning in between. The scan protocol consisted of a low resolution (LR) CSF sequence (0:57min), a high resolution (HR) CSF sequence (3:21min) and a 3D T1-weighted sequence (6:47min). Data of the HR 3D T1-weighted images were downsampled to obtain LR T1-weighted images (reconstructed imaging time: 1:59min). Data of the CSF MRI sequences were automatically segmented using in-house software. The 3D T1-weighted images were segmented using FSL (5.0), SPM12 and FreeSurfer (5.3.0). Results The mean absolute differences for BPV and ICV between the first and second scan for CSF LR (BPV/ICV: 12±9/7±4cc) and CSF HR (5±5/4±2cc) were comparable to FSL HR (9±11/19±23cc), FSL LR (7±4/6±5cc), FreeSurfer HR (5±3/14±8cc), FreeSurfer LR (9±8/12±10cc), SPM HR (5±3/4±7cc), and SPM LR (5±4/5±3cc). The correlation between the volumes measured by the CSF sequences and those measured by FSL, FreeSurfer and SPM, both HR and LR, was very good (all Pearson’s correlation coefficients >0.83, R² = 0.67–0.97). The results from the downsampled data and the high-resolution data were similar. Conclusion Both CSF MRI sequences have a precision comparable to, and a very good correlation with, established 3D T1-based automated segmentation methods for the segmentation of BPV and ICV. However, the short imaging time of the fast CSF MRI sequence is superior to that of the 3D T1 sequence on which segmentation with established methods is performed. PMID:29672584

  6. Fast CSF MRI for brain segmentation; Cross-validation by comparison with 3D T1-based brain segmentation methods.

    PubMed

    van der Kleij, Lisa A; de Bresser, Jeroen; Hendrikse, Jeroen; Siero, Jeroen C W; Petersen, Esben T; De Vis, Jill B

    2018-01-01

    In previous work we have developed a fast sequence that focuses on cerebrospinal fluid (CSF) based on the long T2 of CSF. By processing the data obtained with this CSF MRI sequence, brain parenchymal volume (BPV) and intracranial volume (ICV) can be automatically obtained. The aim of this study was to assess the precision of the BPV and ICV measurements of the CSF MRI sequence and to validate the CSF MRI sequence by comparison with 3D T1-based brain segmentation methods. Ten healthy volunteers (2 females; median age 28 years) were scanned (3T MRI) twice with repositioning in between. The scan protocol consisted of a low resolution (LR) CSF sequence (0:57min), a high resolution (HR) CSF sequence (3:21min) and a 3D T1-weighted sequence (6:47min). Data of the HR 3D T1-weighted images were downsampled to obtain LR T1-weighted images (reconstructed imaging time: 1:59min). Data of the CSF MRI sequences were automatically segmented using in-house software. The 3D T1-weighted images were segmented using FSL (5.0), SPM12 and FreeSurfer (5.3.0). The mean absolute differences for BPV and ICV between the first and second scan for CSF LR (BPV/ICV: 12±9/7±4cc) and CSF HR (5±5/4±2cc) were comparable to FSL HR (9±11/19±23cc), FSL LR (7±4/6±5cc), FreeSurfer HR (5±3/14±8cc), FreeSurfer LR (9±8/12±10cc), SPM HR (5±3/4±7cc), and SPM LR (5±4/5±3cc). The correlation between the volumes measured by the CSF sequences and those measured by FSL, FreeSurfer and SPM, both HR and LR, was very good (all Pearson's correlation coefficients >0.83, R² = 0.67-0.97). The results from the downsampled data and the high-resolution data were similar. Both CSF MRI sequences have a precision comparable to, and a very good correlation with, established 3D T1-based automated segmentation methods for the segmentation of BPV and ICV. However, the short imaging time of the fast CSF MRI sequence is superior to that of the 3D T1 sequence on which segmentation with established methods is performed.

  7. Spectral Clustering Predicts Tumor Tissue Heterogeneity Using Dynamic 18F-FDG PET: A Complement to the Standard Compartmental Modeling Approach.

    PubMed

    Katiyar, Prateek; Divine, Mathew R; Kohlhofer, Ursula; Quintanilla-Martinez, Leticia; Schölkopf, Bernhard; Pichler, Bernd J; Disselhorst, Jonathan A

    2017-04-01

    In this study, we described and validated an unsupervised segmentation algorithm for the assessment of tumor heterogeneity using dynamic 18F-FDG PET. The aim of our study was to objectively evaluate the proposed method and make comparisons with compartmental modeling parametric maps and SUV segmentations using simulations of clinically relevant tumor tissue types. Methods: An irreversible 2-tissue-compartmental model was implemented to simulate clinical and preclinical 18F-FDG PET time-activity curves using population-based arterial input functions (80 clinical and 12 preclinical) and the kinetic parameter values of 3 tumor tissue types. The simulated time-activity curves were corrupted with different levels of noise and used to calculate the tissue-type misclassification errors of spectral clustering (SC), parametric maps, and SUV segmentation. The utility of the inverse noise variance- and Laplacian score-derived frame weighting schemes before SC was also investigated. Finally, the SC scheme with the best results was tested on a dynamic 18F-FDG measurement of a mouse bearing subcutaneous colon cancer and validated using histology. Results: In the preclinical setup, the inverse noise variance-weighted SC exhibited the lowest misclassification errors (8.09%-28.53%) at all noise levels in contrast to the Laplacian score-weighted SC (16.12%-31.23%), unweighted SC (25.73%-40.03%), parametric maps (28.02%-61.45%), and SUV (45.49%-45.63%) segmentation. The classification efficacy of both weighted SC schemes in the clinical case was comparable to the unweighted SC. When applied to the dynamic 18F-FDG measurement of colon cancer, the proposed algorithm accurately identified densely vascularized regions from the rest of the tumor. In addition, the segmented regions and clusterwise average time-activity curves showed excellent correlation with the tumor histology. Conclusion: The promising results of SC mark its position as a robust tool for quantification of tumor heterogeneity using dynamic PET studies. Because SC tumor segmentation is based on the intrinsic structure of the underlying data, it can be easily applied to other cancer types as well. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  8. Design and fabrication of a boron reinforced intertank skirt

    NASA Technical Reports Server (NTRS)

    Henshaw, J.; Roy, P. A.; Pylypetz, P.

    1974-01-01

    Analytical and experimental studies were performed to evaluate the structural efficiency of a boron reinforced shell, where the medium of reinforcement consists of hollow aluminum extrusions infiltrated with boron epoxy. Studies were completed for the design of a one-half scale minimum weight shell using boron reinforced stringers and boron reinforced rings. Parametric and iterative studies were completed for the design of minimum weight stringers, rings, shells without rings and shells with rings. Computer studies were completed for the final evaluation of a minimum weight shell using highly buckled minimum gage skin. The detailed design of a practical minimum weight test shell is described, which demonstrates a weight savings of 30% compared to an all-aluminum longitudinally stiffened shell. Sub-element tests were conducted on representative segments of the compression surface at maximum stress and also on segments of the load transfer joint. A 10 foot long, 77 inch diameter shell was fabricated from the design and delivered for further testing.

  9. Autonomous Unmanned Aerial Vehicle Rendezvous for Automated Aerial Refueling

    DTIC Science & Technology

    2007-03-01

    represents a straight line segment. It can be seen that there are ten possible combinations of arcs and line segments (RSR, RSL, LSR, LSL, LRL, RLR, SLR, SRL, RLS, and LRS). However, L. E. Dubins proved that only these six sequences are possibly optimal: RSR, RSL, LSR, LSL, LRL, and RLR [Dubins 1957]. From Figure 2-5 and Figure 2-6, it can be seen that the last two cases, RLR and LRL, can only be optimal when the initial point and the terminal point are close together.
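    The Dubins result above can be made concrete: the length of one word, RSR (right turn, straight, right turn), follows directly from the geometry of the two right-turn circles. A minimal sketch, assuming the usual pose convention (heading in radians, zero along +x); the function name is illustrative, not from the report:

```python
import math

def rsr_length(start, goal, r):
    """Length of the Dubins RSR word (right turn, straight, right turn).

    Poses are (x, y, heading). The center of a right-turn circle lies
    90 degrees clockwise of the heading, at distance r.
    """
    x0, y0, t0 = start
    x1, y1, t1 = goal
    # Right-turn circle centers for the two poses.
    c0 = (x0 + r * math.sin(t0), y0 - r * math.cos(t0))
    c1 = (x1 + r * math.sin(t1), y1 - r * math.cos(t1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    s = math.hypot(dx, dy)          # straight segment length
    phi = math.atan2(dy, dx)        # heading along the straight segment
    # Right turns sweep the heading clockwise (decreasing angle).
    arc0 = (t0 - phi) % (2 * math.pi)
    arc1 = (phi - t1) % (2 * math.pi)
    return r * (arc0 + arc1) + s
```

    A full planner would evaluate all six candidate words this way and keep the shortest; the RSR formula degenerates when the two circle centers coincide.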

  10. Designing of skull defect implants using C1 rational cubic Bezier and offset curves

    NASA Astrophysics Data System (ADS)

    Mohamed, Najihah; Majid, Ahmad Abd; Piah, Abd Rahni Mt; Rajion, Zainul Ahmad

    2015-05-01

    Skull implants are constructed for reasons such as head trauma after an accident, an injury or an infection, tumor invasion, or when autogenous bone is not suitable for replacement after a decompressive craniectomy (DC). The main objective of our study is to develop a simple method to redesign missing parts of the skull. The procedure begins with segmentation, data approximation, and estimation of the outer wall by a C1 continuous curve. Its offset curve is used to generate the inner wall. Harmony search (HS) is a derivative-free real-parameter metaheuristic optimization algorithm inspired by the musical improvisation process of searching for a perfect state of harmony. In this study, data approximation by a rational cubic Bézier function uses HS to optimize the positions of the middle control points and the values of the weights. All the phases contribute significantly to making our proposed technique automated. Graphical examples of several postoperative skulls are displayed to show the effectiveness of our proposed method.
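    The harmony search step used above can be sketched in a few lines: each new harmony is built per dimension by recalling from memory (rate HMCR), optionally pitch-adjusting within a bandwidth (rate PAR), or improvising at random, and it replaces the worst memory entry if better. Parameter values and the test function below are illustrative assumptions, not the paper's settings:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.2,
                   iters=3000, seed=42):
    """Minimize f over box bounds with basic harmony search (HS)."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = mem[rng.randrange(hms)][d]       # recall from memory
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:
                v = rng.uniform(lo, hi)              # random improvisation
            new.append(min(max(v, lo), hi))          # clip to bounds
        c = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                         # replace worst harmony
            mem[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return mem[best], costs[best]

# Toy usage: minimize the 2-D sphere function over [-5, 5]^2.
x, c = harmony_search(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 2)
```

    In the paper's setting, the decision vector would hold the middle control point positions and rational weights, and f would measure the approximation error of the Bézier curve.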

  11. Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.

    PubMed

    Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane

    2017-07-01

    This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematics data only. The proposed method also allows identification of the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. It is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted and a sensitivity analysis is performed in order to robustly reproduce the human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS error lower than 20 N and 6 N·m). The simplicity and generalization abilities of the proposed method pave the way for future diagnosis solutions and rehabilitation applications, including in-home use.
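    The under-determined distribution step has a well-known closed form when the criterion is a weighted quadratic norm with only equality constraints, which is a much simpler stand-in for the paper's hybrid criterion. A minimal numpy sketch; the contact map, weights, and the 700 N total force are illustrative assumptions:

```python
import numpy as np

def distribute_wrench(A, b, w):
    """Minimize x^T diag(w) x subject to A @ x = b (closed form).

    A: (m, n) contact map, b: (m,) total external wrench,
    w: (n,) positive penalty weights. Classic weighted least-norm
    solution: x = W^-1 A^T (A W^-1 A^T)^-1 b.
    """
    Winv = np.diag(1.0 / np.asarray(w, dtype=float))
    return Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, b)

# Toy example: one scalar constraint -- a 700 N total vertical force
# shared among three contacts (buttocks, feet, hands); higher weight
# means that contact is penalized more and carries less load.
A = np.ones((1, 3))
forces = distribute_wrench(A, np.array([700.0]), w=[1.0, 2.0, 4.0])
```

    Each contact receives load proportional to the inverse of its weight while the equality constraint (total force) is satisfied exactly; inequality constraints (e.g., non-negative normal forces) would require an actual QP solver.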

  12. Preservation or Restoration of Segmental and Regional Spinal Lordosis Using Minimally Invasive Interbody Fusion Techniques in Degenerative Lumbar Conditions: A Literature Review.

    PubMed

    Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A

    2016-04-01

    A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (2), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, the weighted average of lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased 3.4° (9%) (range -2° to 7.4°) postoperatively (P < 0.001). Segmental lordosis increased, on average, by 4° from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°) (P < 0.001) in 1182 patients from 24 study cohorts. Simple linear regression revealed a significant relationship between preoperative lumbar lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively. These results suggest that MIS approaches are able to impact regional and local segmental alignment and that preoperative patient factors can impact the extent of correction gained (preserving vs. restoring alignment). Level of Evidence: 4.

  13. White matter lesion extension to automatic brain tissue segmentation on MRI.

    PubMed

    de Boer, Renske; Vrooman, Henri A; van der Lijn, Fedde; Vernooij, Meike W; Ikram, M Arfan; van der Lugt, Aad; Breteler, Monique M B; Niessen, Wiro J

    2009-05-01

    A fully automated brain tissue segmentation method is optimized and extended with white matter lesion segmentation. Cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) are segmented by an atlas-based k-nearest neighbor classifier on multi-modal magnetic resonance imaging data. This classifier is trained by registering brain atlases to the subject. The resulting GM segmentation is used to automatically find a white matter lesion (WML) threshold in a fluid-attenuated inversion recovery scan. False positive lesions are removed by ensuring that the lesions are within the white matter. The method was visually validated on a set of 209 subjects. No segmentation errors were found in 98% of the brain tissue segmentations and 97% of the WML segmentations. A quantitative evaluation using manual segmentations was performed on a subset of 6 subjects for CSF, GM and WM segmentation and an additional 14 for the WML segmentations. The results indicated that the automatic segmentation accuracy is close to the interobserver variability of manual segmentations.
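    The k-nearest-neighbor classification at the core of the method above can be illustrated with a toy voxelwise classifier; the 1-D intensity features and class means below are synthetic stand-ins, not values from the study:

```python
import numpy as np

def knn_classify(train_X, train_y, test_X, k=5):
    """Label each test voxel by majority vote of its k nearest
    training voxels in feature space (Euclidean distance)."""
    train_X = np.atleast_2d(train_X).astype(float)
    test_X = np.atleast_2d(test_X).astype(float)
    labels = np.empty(len(test_X), dtype=train_y.dtype)
    for i, x in enumerate(test_X):
        d = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        labels[i] = vals[np.argmax(counts)]
    return labels

# Synthetic training voxels: CSF ~ 20, GM ~ 100, WM ~ 180 (labels 0/1/2).
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(20, 5, 50), rng.normal(100, 5, 50),
                    rng.normal(180, 5, 50)])[:, None]
y = np.array([0] * 50 + [1] * 50 + [2] * 50)
pred = knn_classify(X, y, [[25], [95], [175]])
```

    In the actual method the training samples come from registered atlases and the features are multi-modal MR intensities rather than a single synthetic channel.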

  14. Delineation and geometric modeling of road networks

    NASA Astrophysics Data System (ADS)

    Poullis, Charalambos; You, Suya

    In this work we present a novel vision-based system for automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph-cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. Firstly, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Secondly, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping, and results in segmentations with better defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width and orientation). This information is then used for creating road segments and transforming them into their polygonal representations.

  15. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we have addressed the issue of over-segmented regions produced by watershed by merging the regions using a global feature. The global feature information is obtained from clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further to this, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, merging an average of 95.242% of regions, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
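    The FCM clustering step that provides the global feature information can be sketched as the standard alternating update of memberships and centers. The 1-D data and initial centers below are a deterministic toy, not the brain phantom data:

```python
import numpy as np

def fuzzy_c_means(X, centers, m=2.0, iters=100):
    """Basic Fuzzy C-Means: alternate membership and center updates.

    X: (n, f) samples; centers: (c, f) initial cluster centers;
    m: fuzzifier (> 1). Returns final centers and memberships u (n, c).
    """
    X = np.asarray(X, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        # Distances to each center, kept strictly positive.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

# Toy 1-D intensities with two clear clusters near 0 and 10.
X = np.array([[-0.3], [0.0], [0.3], [9.7], [10.0], [10.3]])
centers, u = fuzzy_c_means(X, centers=[[2.0], [8.0]], m=2.0)
```

    In the paper, the resulting cluster memberships (further refined by SA) serve as the global feature against which watershed regions are merged.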

  16. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

    The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
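    Of the three validation metrics, the Dice similarity coefficient and its maximization over decision thresholds are easy to sketch; the probability map and threshold grid below are toy values, not the study's data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def best_threshold(prob_map, reference, thresholds):
    """Pick the decision threshold maximizing Dice against a reference."""
    return max(thresholds, key=lambda t: dice(prob_map >= t, reference))

# Toy probabilistic segmentation of four pixels vs. a reference mask.
p = np.array([0.1, 0.4, 0.6, 0.9])
ref = np.array([0, 0, 1, 1], bool)
t = best_threshold(p, ref, [0.25, 0.5, 0.75])
```

    The same sweep structure applies to the ROC- and mutual-information-based criteria; only the objective inside the `max` changes.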

  17. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of engineering fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have made efforts toward image analysis systems. The main problems in these systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined together to form new engineering parameters. The colony analysis can be applied to different applications.

  18. Intensity inhomogeneity compensation and tissue segmentation for magnetic resonance imaging with noise-suppressed multiplicative intrinsic component optimization

    NASA Astrophysics Data System (ADS)

    Dong, Huaipeng; Zhang, Qi; Shi, Jun

    2017-12-01

    Magnetic resonance (MR) images suffer from intensity inhomogeneity. Segmentation-based approaches can simultaneously achieve both intensity inhomogeneity compensation (IIC) and tissue segmentation for MR images with little noise, but they often fail for images polluted by severe noise. Here, we propose a noise-robust algorithm named noise-suppressed multiplicative intrinsic component optimization (NSMICO) for simultaneous IIC and tissue segmentation. Considering the spatial characteristics in an image, an adaptive nonlocal means filtering term is incorporated into the objective function of NSMICO to decrease image deterioration due to noise. Then, a fuzzy local factor term utilizing the spatial and gray-level relationship among local pixels is embedded into the objective function to reach a balance between noise suppression and detail preservation. Experimental results on synthetic natural and MR images with various levels of intensity inhomogeneity and noise, as well as in vivo clinical MR images, have demonstrated the effectiveness of the NSMICO and its superiority to three competing approaches. The NSMICO could be potentially valuable for MR image IIC and tissue segmentation.

  19. Extended capture range for focus-diverse phase retrieval in segmented aperture systems using geometrical optics.

    PubMed

    Jurling, Alden S; Fienup, James R

    2014-03-01

    Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.

  20. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
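    The metric-learning step above can be sketched as classical multiclass LDA computed from scatter matrices. The synthetic two-class data below only illustrates how the learned metric downweights a noisy direction; it is not CRISM data, and the helper names are illustrative:

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Multiclass LDA: top eigenvectors of Sw^-1 Sb."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))    # within-class scatter
    Sb = np.zeros((d, d))    # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall_mean, mc - overall_mean)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

# Two classes separated along axis 0, with heavy noise along axis 1.
rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0], [0.3, 3.0], (200, 2))
B = rng.normal([2.0, 0.0], [0.3, 3.0], (200, 2))
X = np.vstack([A, B])
y = np.array([0] * 200 + [1] * 200)
W = lda_transform(X, y, 1)

def metric_dist(u, v):
    """Distance in the learned (projected) space."""
    return abs(((np.asarray(u, float) - np.asarray(v, float)) @ W)[0])
```

    Under the learned metric, a large displacement along the noisy axis costs little, while a small displacement along the discriminative axis dominates; this is exactly what lets graph-based segmentation emphasize class-relevant spectral differences.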

  1. Jansen-MIDAS: A multi-level photomicrograph segmentation software based on isotropic undecimated wavelets.

    PubMed

    de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo

    2018-01-01

    Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with reservations, since incorrect results can mislead the interpretation of regions of interest (ROI) and decrease the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed as an alternative to general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software package. A scientist can use it to obtain several segmentations of his or her photomicrographs. It is a reliable alternative for processing different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROI in photomicrographs of two different materials, with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.

  2. A Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor TLS Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and make a comparison of their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from each viewpoint based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA does not guarantee an optimal solution within limited iterations. Overall, GA is considered the best choice for this problem based on its capability of providing an optimal solution and fewer parameters to tune.
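    A toy version of the SA approach to this viewpoint-selection problem might look as follows. The visibility sets, the coverage penalty weight, and the geometric cooling schedule are illustrative assumptions, and the sketch scores coverage only (no incidence-angle term):

```python
import math
import random

def sa_viewpoints(vis, iters=5000, t0=1.0, t1=0.01, seed=7):
    """Simulated annealing for a toy viewpoint-selection instance.

    vis[j] is the set of wall segments visible from viewpoint j; the
    cost is the number of selected viewpoints plus a heavy penalty per
    uncovered segment. Returns the best selection found and its cost.
    """
    rng = random.Random(seed)
    segments = set().union(*vis)

    def cost(sel):
        covered = set()
        for j, on in enumerate(sel):
            if on:
                covered |= vis[j]
        return sum(sel) + 10 * len(segments - covered)

    state = [1] * len(vis)                       # start with every viewpoint
    c = cost(state)
    best, best_c = state[:], c
    for i in range(iters):
        temp = t0 * (t1 / t0) ** (i / iters)     # geometric cooling
        cand = state[:]
        cand[rng.randrange(len(vis))] ^= 1       # flip one viewpoint
        cc = cost(cand)
        if cc <= c or rng.random() < math.exp((c - cc) / temp):
            state, c = cand, cc
            if c < best_c:
                best, best_c = state[:], c
    return best, best_c

# Four candidate viewpoints covering four wall segments.
vis = [{0, 1}, {2, 3}, {1, 2}, {0}]
sel, total = sa_viewpoints(vis)
```

    As the record notes, SA offers no optimality guarantee within limited iterations; population-based GA/PSO explore many selections per generation instead of a single annealed trajectory.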

  3. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness is always a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and we regard it as an isolated label to give the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  4. Evaluation of the predictive capacity of vertical segmental tetrapolar bioimpedance for excess weight detection in adolescents.

    PubMed

    Neves, Felipe Silva; Leandro, Danielle Aparecida Barbosa; Silva, Fabiana Almeida da; Netto, Michele Pereira; Oliveira, Renata Maria Souza; Cândido, Ana Paula Carlos

    2015-01-01

    To analyze the predictive capacity of the vertical segmental tetrapolar bioimpedance apparatus in the detection of excess weight in adolescents, using tetrapolar bioelectrical impedance as a reference. This was a cross-sectional study conducted with 411 students aged between 10 and 14 years, of both genders, enrolled in public and private schools, selected by a simple and stratified random sampling process according to the gender, age, and proportion in each institution. The sample was evaluated by the anthropometric method and underwent a body composition analysis using vertical bipolar, horizontal tetrapolar, and vertical segmental tetrapolar assessment. The ROC curve was constructed based on calculations of sensitivity and specificity for each point of the different possible measurements of body fat. The statistical analysis used Student's t-test, Pearson's correlation coefficient, and McNemar's chi-squared test. Subsequently, the variables were interpreted using SPSS software, version 17.0. Of the total sample, 53.7% were girls and 46.3%, boys. Of the total, 20% and 12.5% had overweight and obesity, respectively. The body segment measurement charts showed high values of sensitivity and specificity and high areas under the ROC curve, ranging from 0.83 to 0.95 for girls and 0.92 to 0.98 for boys, suggesting a slightly higher performance for the male gender. Body fat percentage was the most efficient criterion to detect overweight, while the trunk segmental fat was the least accurate indicator. The apparatus demonstrated good performance to predict excess weight. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  5. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline

    PubMed Central

    Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin

    2014-01-01

    Automated segmentation and labeling of individual brain anatomical regions in MRI are challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification-based skull-stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. AutoSeg achieved mean Dice coefficients of 81.73% for the subcortical structures.
PMID:24567717
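The weighted majority voting step described above can be sketched in a few lines. This is a minimal illustration, not code from AutoSeg; the array shapes and the `weighted_majority_vote` helper are assumptions for the example.

```python
import numpy as np

def weighted_majority_vote(labelings, weights):
    """Fuse per-voxel labels from several atlases.

    labelings: (n_atlases, n_voxels) integer label maps warped to the subject.
    weights:   (n_atlases,) similarity-based weight for each atlas.
    Returns the label receiving the largest summed weight at each voxel
    (ties resolve to the lower label value).
    """
    labelings = np.asarray(labelings)
    weights = np.asarray(weights, dtype=float)
    labels = np.unique(labelings)
    # Accumulate the total weight each candidate label receives per voxel.
    scores = np.stack([
        np.where(labelings == lab, weights[:, None], 0.0).sum(axis=0)
        for lab in labels
    ])
    return labels[np.argmax(scores, axis=0)]

# Three atlases vote on four voxels; heavier atlases dominate the fusion.
fused = weighted_majority_vote(
    [[1, 1, 2, 0], [1, 2, 2, 0], [2, 2, 1, 0]],
    weights=[0.5, 0.3, 0.2],
)
print(fused.tolist())
```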

  6. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to poor image quality, including a very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, segmenting CBCT images is challenging. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: The authors propose a fully automated method that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset of 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods.
Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient dataset.

  7. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented for this task; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods, owing to its efficiency and its ability to handle the inherent ambiguity of images. However, the success of FCM cannot be guaranteed because it is easily trapped in locally optimal solutions. Cuckoo search (CS) is a novel evolutionary algorithm that has been tested on a range of optimization problems and shown to be highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in this paper. The proposed method has been evaluated on several images and compared, in terms of fitness value, with other existing FCM techniques such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
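For readers unfamiliar with the FCM baseline being improved here, its alternating membership/center updates can be sketched as follows. This is standard FCM on a 1-D feature, not the paper's CS hybrid; the quantile initialization is an illustrative choice.

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=50):
    """Plain fuzzy C-means on a 1-D feature (e.g. grey levels); this is the
    local-optimum-prone baseline that the CS hybrid aims to improve."""
    x = np.asarray(data, dtype=float)
    # Spread the initial centers over the data range via quantiles.
    centers = np.quantile(x, (np.arange(n_clusters) + 0.5) / n_clusters)
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)              # fuzzy membership update
        um = u ** m
        centers = um @ x / um.sum(axis=1)  # fuzzily weighted cluster means
    return centers, u

# Two well-separated groups of grey values; centers settle near 0.1 and 5.1.
centers, _ = fcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
print(np.round(np.sort(centers), 2))
```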

  8. Multi-organ segmentation from multi-phase abdominal CT via 4D graphs using enhancement, shape and location optimization.

    PubMed

    Linguraru, Marius George; Pura, John A; Chowdhury, Ananda S; Summers, Ronald M

    2010-01-01

    The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis (CAD) applications. Diagnosis also relies on the comprehensive analysis of multiple organs and quantitative measures of soft tissue. An automated method optimized for medical image data is presented for the simultaneous segmentation of four abdominal organs from 4D CT data using graph cuts. Contrast-enhanced CT scans were obtained at two phases: non-contrast and portal venous. Intra-patient data were spatially normalized by non-linear registration. Then 4D erosion using population-based historical information on contrast-enhanced liver, spleen, and kidneys was applied to the multi-phase data to initialize the 4D graph and adapt it to patient-specific data. CT enhancement information and constraints on shape, from Parzen windows, and on location, from a probabilistic atlas, were input into a new formulation of a 4D graph. Comparative results demonstrate the effects of appearance and enhancement, and of shape and location, on organ segmentation.

  9. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results in this setting. Based on grid search, however, the MKL-SVM algorithm requires a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which performs rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, namely constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and its average fitness value after convergence is much closer to the optimal fitness value, making it better than the linear inertia weight. In addition, an improved nonlinear inertia weight is verified. PMID:29853983
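The role of the inertia weight in PSO can be illustrated with a minimal sketch. The `w_schedule` callable, the swarm settings, and the sphere test function are illustrative assumptions for this example, not the paper's MKL-SVM setup.

```python
import numpy as np

def pso(obj, dim=2, n=20, iters=100, w_schedule=lambda t, T: 0.7, seed=1):
    """Minimal PSO; w_schedule returns the inertia weight at iteration t of T.

    The paper compares constant, linear, and nonlinear inertia weights; here
    they are simply different w_schedule callables.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    c1 = c2 = 1.5  # cognitive and social acceleration coefficients
    for t in range(iters):
        w = w_schedule(t, iters)
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([obj(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

sphere = lambda p: float(np.sum(p ** 2))
# Linearly decreasing inertia weight, 0.9 -> 0.4, a common dynamic choice.
linear_w = lambda t, T: 0.9 - 0.5 * t / T
_, best = pso(sphere, w_schedule=linear_w)
print(f"best sphere value: {best:.2e}")
```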

  10. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results in this setting. Based on grid search, however, the MKL-SVM algorithm requires a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which performs rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, namely constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid-search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and its average fitness value after convergence is much closer to the optimal fitness value, making it better than the linear inertia weight. In addition, an improved nonlinear inertia weight is verified.

  11. Singular-Arc Time-Optimal Trajectory of Aircraft in Two-Dimensional Wind Field

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents a minimum time-to-climb trajectory analysis for aircraft flying in a two-dimensional, altitude-dependent wind field. The time-optimal control problem possesses a singular control structure when the lift coefficient is taken as the control variable. A singular arc analysis is performed to obtain an optimal control solution on the singular arc. Using a time-scale separation with the flight path angle treated as a fast state, the dimensionality of the optimal control solution is reduced by eliminating the lift coefficient control. A further singular arc analysis is used to decompose the original optimal control solution into the flight path angle solution and a trajectory solution as a function of the airspeed and altitude. The optimal control solutions for the initial and final climb segments are computed using a shooting method with known starting values on the singular arc. The numerical results of the shooting method show that the optimal flight path angles on the initial and final climb segments are constant. The analytical approach provides a rapid means of analyzing time-optimal trajectories for aircraft performance.

  12. Slumping monitoring of glass and silicone foils for x-ray space telescopes

    NASA Astrophysics Data System (ADS)

    Mika, M.; Pina, L.; Landova, M.; Sveda, L.; Havlikova, R.; Semencova, V.; Hudec, R.; Inneman, A.

    2011-09-01

    We developed a non-contact method for in-situ monitoring of the thermal slumping of glass and silicon foils to optimize this technology for the production of high-quality mirrors for large-aperture x-ray space telescopes. The telescope's crucial part is a high-throughput, heavily nested mirror array with an angular resolution better than 5 arcsec. Its construction requires precise, light-weight segmented optics with surface micro-roughness on the order of 0.1 nm. Promising materials are glass or silicon foils shaped by thermal forming. The desired parameters can be achieved only by optimizing the slumping process. We monitored the slumping by taking snapshots of the foil shapes every five minutes at constant temperature, and we measured the final shapes with a Taylor Hobson profilometer. The shapes were parabolic, and the deviations from a circle had peak-to-valley values of 20-30 μm. The observed hot plastic deformation of the foils was controlled by viscous flow. We calculated and plotted the relations between the deflection of the middle part, viscosity, and heat-treatment time. These relations have been used to develop a numerical model enabling computer simulation, with which we verify the material's properties and generate new data for thorough optimization of the slumping process.

  13. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in image segmentation research. In this paper, we briefly introduce four existing swarm intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, the bacterial foraging algorithm, and particle swarm optimization. Several benchmark images are then tested in order to show how the four algorithms differ in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, the paper gives a qualitative analysis of the performance variation among the four algorithms. The conclusions in this paper provide useful guidance for practical image segmentation.

  14. Hybrid region merging method for segmentation of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo

    2014-12-01

    Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.

  15. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when available automated segmentation tools are utilized. Accurate placement of many high-density electrodes on an individual scalp is also a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in this modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, the difference between current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool segments not only the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  16. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique used in preprocessing for pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized-location modification. The proposed algorithm is used to find optimal threshold values that maximize Otsu's objective function on eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves robust and effective in numerical experiments, as measured by Otsu's objective values and standard deviations.
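The objective being optimized here is Otsu's between-class variance. A minimal sketch of that objective, together with the exhaustive search that metaheuristics such as the flower pollination algorithm are meant to avoid (the toy histogram and function names are illustrative):

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance that multilevel Otsu methods maximize.

    hist: histogram over grey levels 0..L-1; thresholds: sorted cut points.
    A metaheuristic would search the threshold vector that maximizes this.
    """
    p = hist / hist.sum()
    levels = np.arange(p.size)
    mu_total = (p * levels).sum()
    edges = [0, *thresholds, p.size]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                       # class probability mass
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2      # weighted class separation
    return var

# Bimodal toy histogram over 8 grey levels: mass at the low and high ends.
hist = np.array([8, 9, 1, 0, 0, 1, 9, 8], dtype=float)
# Exhaustive single-threshold search -- the cost metaheuristics sidestep
# when the number of thresholds (and hence the search space) grows.
best_t = max(range(1, 8), key=lambda t: otsu_objective(hist, [t]))
print(f"best threshold: {best_t}")
```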

  17. Automatic, accurate, and reproducible segmentation of the brain and cerebro-spinal fluid in T1-weighted volume MRI scans and its application to serial cerebral and intracranial volumetry

    NASA Astrophysics Data System (ADS)

    Lemieux, Louis

    2001-07-01

    A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was developed specifically in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM) and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%, intra-cranial (ICV): +0.1%, CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.

  18. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  19. A musculoskeletal foot model for clinical gait analysis.

    PubMed

    Saraswat, Prabhav; Andersen, Michael S; Macwilliams, Bruce A

    2010-06-18

    Several full body musculoskeletal models have been developed for research applications and these models may potentially be developed into useful clinical tools to assess gait pathologies. Existing full-body musculoskeletal models treat the foot as a single segment and ignore the motions of the intrinsic joints of the foot. This assumption limits the use of such models in clinical cases with significant foot deformities. Therefore, a three-segment musculoskeletal model of the foot was developed to match the segmentation of a recently developed multi-segment kinematic foot model. All the muscles and ligaments of the foot spanning the modeled joints were included. Muscle pathways were adjusted with an optimization routine to minimize the difference between the muscle flexion-extension moment arms from the model and moment arms reported in literature. The model was driven by walking data from five normal pediatric subjects (aged 10.6+/-1.57 years) and muscle forces and activation levels required to produce joint motions were calculated using an inverse dynamic analysis approach. Due to the close proximity of markers on the foot, small marker placement error during motion data collection may lead to significant differences in musculoskeletal model outcomes. Therefore, an optimization routine was developed to enforce joint constraints, optimally scale each segment length and adjust marker positions. To evaluate the model outcomes, the muscle activation patterns during walking were compared with electromyography (EMG) activation patterns reported in the literature. Model-generated muscle activation patterns were observed to be similar to the EMG activation patterns. Published by Elsevier Ltd.

  20. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that describes the weight gain arising from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. Applying mathematical tools from optimal control and the qualitative theory of differential equations, we obtain several results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories to the steady-state weight. Depending on the values of a few parameters, the steady state can be either a saddle point with a monotonic trajectory or a focus with damped oscillations.

  1. An Objective Approach to Determining the Weight Ranges of Prey Preferred by and Accessible to the Five Large African Carnivores

    PubMed Central

    Clements, Hayley S.; Tambling, Craig J.; Hayward, Matt W.; Kerley, Graham I. H.

    2014-01-01

    Broad-scale models describing the prey preferences of predators serve as useful departure points for understanding predator-prey interactions at finer scales. Previous analyses used a subjective approach to identify the prey weight preferences of the five large African carnivores, so their accuracy is questionable. This study uses a segmented model of prey weight versus prey preference to objectively quantify the prey weight preferences of the five large African carnivores. Based on simulations with known prey preferences, for prey-species sample sizes above 32 the segmented-model approach detects up to four known changes in prey weight preference (represented by model break-points) with high detection rates (75% to 100% of simulations, depending on the number of break-points) and accuracy (within 1.3±4.0 to 2.7±4.4 of the known break-point). When applied to the five large African carnivores, using carnivore diet information from across Africa, the model detected the weight ranges of prey that are preferred, killed relative to their abundance, and avoided by each carnivore. Prey in the weight ranges preferred and killed relative to their abundance are together termed “accessible prey”. Accessible prey weight ranges were found to be 14–135 kg for cheetah Acinonyx jubatus, 1–45 kg for leopard Panthera pardus, 32–632 kg for lion Panthera leo, 15–1600 kg for spotted hyaena Crocuta crocuta and 10–289 kg for wild dog Lycaon pictus. An assessment of carnivore diets throughout Africa found that these accessible prey weight ranges include 88±2% (cheetah), 82±3% (leopard), 81±2% (lion), 97±2% (spotted hyaena) and 96±2% (wild dog) of kills. These descriptions of prey weight preferences therefore contribute to our understanding of the diet spectrum of the five large African carnivores. Where datasets meet the minimum sample-size requirements, the segmented-model approach provides a means of determining, and comparing, the prey weight range preferences of any carnivore species. PMID:24988433
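The break-point idea behind a segmented model can be illustrated with a one-break-point, continuous piecewise-linear fit chosen by grid search. This sketch is an illustrative stand-in, not the authors' actual fitting procedure, and the synthetic data are invented for the example.

```python
import numpy as np

def fit_one_breakpoint(x, y):
    """Continuous piecewise-linear fit with a single break-point.

    The break-point is chosen by grid search over interior x values; for each
    candidate, an ordinary least-squares fit gives the two connected segments.
    """
    best = (np.inf, None)
    for bp in np.unique(x)[1:-1]:
        # Basis: intercept, slope, and extra slope active after the break-point.
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((X @ beta - y) ** 2).sum())
        if sse < best[0]:
            best = (sse, bp)
    return best[1]

# Synthetic preference curve: rises up to x = 5, then declines beyond it.
x = np.arange(0.0, 11.0)
y = np.where(x <= 5, x, 5.0 - 0.5 * (x - 5))
print(fit_one_breakpoint(x, y))
```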

  2. Diffraction on heavy samples at STRESS-SPEC using a robot system

    NASA Astrophysics Data System (ADS)

    Al-Hamdany, N.; Gan, W. M.; Randau, C.; Brokmeier, H.-G.; Hofmann, M.

    2015-04-01

    The materials science diffractometer STRESS-SPEC has a high flux and a highly flexible monochromator arrangement for optimizing the wavelength. Many dedicated sample-handling stages and sample environments are available. One of them is a Staubli RX 160 robot with a nominal load capacity of 20 kg, offering more freedom for texture mapping than the Huber 512 Eulerian-type cradle. Demonstration experiments of non-destructive pole-figure and strain measurements were carried out on Cu-tube segments weighing 12 kg, 250 mm in length and 140 mm in diameter. The residual strains measured by the robot and by the XYZ stage agree quite well, which means the robot is reliable for strain measurements. The texture of the Cu tube is dominated by recrystallization texture components, represented by the cube and rotated-cube components.

  3. Optimal frame-by-frame result combination strategy for OCR in video stream

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Lynchenko, Aleksander; Krivtsov, Valeriy

    2018-04-01

    This paper addresses the problem of combining the classification results of multiple observations of one object. The task can be regarded as a particular case of decision-making via a combination of expert votes with calculated weights. The accuracy of various methods of combining the classification results under different input-data models is investigated using the example of frame-by-frame character recognition in a video stream. It is shown experimentally that the strategy of choosing a single most competent expert has an advantage when the input data contain no irrelevant observations (here, irrelevant means affected by character localization and segmentation errors). At the same time, the work demonstrates the advantage of combining several of the most competent experts, according to the multiplication rule or by voting, when irrelevant samples are present in the input data.
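The two combination strategies compared here (a single most competent expert versus the multiplication rule over several experts) can be sketched as follows; the array shapes, weights, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def combine_frames(probs, weights, rule="product"):
    """Combine per-frame class probabilities from a video stream.

    probs:   (n_frames, n_classes) recognition scores for one character.
    weights: per-frame competence weights. 'product' applies a weighted
    multiplication rule; 'best' keeps only the most competent frame.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if rule == "best":
        return int(np.argmax(probs[np.argmax(weights)]))
    # Weighted product rule: sum of weight-scaled log-probabilities.
    log_score = (weights[:, None] * np.log(probs + 1e-12)).sum(axis=0)
    return int(np.argmax(log_score))

# Three frames vote on three character classes; frame 1 is most competent.
frames = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.5, 0.4, 0.1]]
w = [0.2, 0.6, 0.2]
print(combine_frames(frames, w, "product"), combine_frames(frames, w, "best"))
```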

  4. 3D prostate TRUS segmentation using globally optimized volume-preserving prior.

    PubMed

    Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing

    2014-01-01

    Efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS-guided prostate biopsy. However, meaningful segmentation of 3D TRUS images tends to suffer from ultrasound speckle, shadowing, and missing edges, which makes delineating the correct prostate boundaries a challenging task. In this paper, we propose a novel convex-optimization-based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with a new bounded flow-conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% +/- 2.4%, a MAD of 1.4 +/- 0.6 mm, a MAXD of 5.2 +/- 3.2 mm, and a VD of 7.5% +/- 6.2% in about 1 minute, demonstrating advantages in both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows the good reliability of the proposed approach.

  5. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers

    PubMed Central

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-01-01

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the “all parallel” shielding coils with a 45° starting position have the best shielding performance, whereas the “separated loop” shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same. PMID:28587137

  6. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers.

    PubMed

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-05-26

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the "all parallel" shielding coils with a 45° starting position have the best shielding performance, whereas the "separated loop" shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same.

  7. Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico

    USGS Publications Warehouse

    Knutilla, R.L.; Veenhuis, J.E.

    1994-01-01

    Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.

  8. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper to develop low-computational-cost transfer trajectories between libration orbits around the L1 and L2 points of the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established, which divides the transfer trajectory into several segments and assigns invariant manifolds and low-thrust control dominant roles in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while its computational workload is only approximately 10% of that of direct multi-objective optimization. Furthermore, the Pareto points are generated approximately 8 times more efficiently than with direct multi-objective optimization. The proposed adaptive surrogate-based multi-objective optimization therefore provides clear advantages over direct multi-objective optimization methods.
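    The Pareto points mentioned above are the non-dominated solutions of the multi-objective problem. A minimal, generic sketch of the dominance test and front extraction (assuming minimization of all objectives; not the paper's surrogate model or trajectory dynamics):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy two-objective values, e.g. (fuel cost, transfer time):
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

    Here (3.0, 3.0) is dominated by (2.0, 2.0) and is dropped; the surviving points are the trade-off curve a multi-objective optimizer seeks to populate.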

  9. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. Numerous approaches exist for automatic 3D liver segmentation on computed tomography data sets and have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and an additional thresholding technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to both normal and fat-accumulated liver tissue. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Optimal linear reconstruction of dark matter from halo catalogues

    DOE PAGES

    Cai, Yan -Chuan; Bernstein, Gary; Sheth, Ravi K.

    2011-04-01

    The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distribution. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of the halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15× noisier than the optimal. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.
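    As a toy illustration of the weighting idea (hypothetical positions and masses on a tiny grid; not the paper's derived w(M) or its stochasticity minimization), a weighted halo overdensity field can be formed as:

```python
import numpy as np

rng = np.random.default_rng(0)
n_halos, n_cells = 1000, 16
positions = rng.integers(0, n_cells, n_halos)      # toy halo positions on a 1D grid
masses = rng.pareto(2.0, n_halos) + 1.0            # toy halo masses (heavy-tailed)

def weighted_overdensity(weights):
    """Grid the weighted halo counts and convert to an overdensity field delta."""
    field = np.bincount(positions, weights=weights, minlength=n_cells)
    return field / field.mean() - 1.0

delta_uniform = weighted_overdensity(np.ones(n_halos))   # equal (Poisson) weights
delta_mass = weighted_overdensity(masses)                # mass weighting
```

    The paper's point is that neither pure mass-weighting nor pure bias-weighting is optimal: the w(M) that minimizes the noise of such a field relative to the true matter density interpolates between the two.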

  11. Anthropometric Relationships of Body and Body Segment Moments of Inertia

    DTIC Science & Technology

    1980-12-01

    platform balanced on a knife edge (Borelli, 1679). While weight, volume, center of mass and moments of inertia of the whole body can be measured in a...Segments, Paper No. 720964, SAE Transactions, Vol. 81, Sect. 4, pp. 2818-2833. Borelli, J. A., 1679, De Motu Animalium. Lugduni Batavorum. Braune, W

  12. Segmentation of mouse dynamic PET images using a multiphase level set method

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2010-11-01

    Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and the activity difference among organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partition of regions with high contrast. We validated the proposed method using computer-simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.
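    The frame-weighted absolute-difference data term can be sketched generically (illustrative arrays and weights; in the paper this term is coupled with the multiphase level set functional):

```python
import numpy as np

def weighted_abs_data_term(tacs, mean_tac, frame_weights):
    """Data-matching term: frame-weighted absolute difference between each
    voxel's time-activity curve (TAC) and a region's mean TAC.

    tacs: (n_voxels, n_frames); mean_tac, frame_weights: (n_frames,)."""
    return np.sum(frame_weights * np.abs(tacs - mean_tac), axis=1)

# Two toy voxels against a region mean, with noisier frames down-weighted.
tacs = np.array([[1.0, 2.0, 3.0],
                 [1.1, 2.2, 2.9]])
mean_tac = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, 0.3, 0.2])
print(weighted_abs_data_term(tacs, mean_tac, w))  # ≈ [0.0, 0.13]
```

    Using an absolute rather than squared difference, as the abstract notes, makes the fit less sensitive to high-contrast outlier frames.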

  13. KSC-08pd3245

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - Workers lift the Ares IX upper stage segments’ ballast assemblies off a truck in high bay 4 of the Vehicle Assembly Building at NASA’s Kennedy Space Center, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  14. KSC-08pd3247

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - Workers position Ares IX upper stage segments’ ballast assemblies along the floor of high bay 4 in the Vehicle Assembly Building at NASA’s Kennedy Space Center, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  15. KSC-08pd3243

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - One of five trucks transporting the Ares IX upper stage segments’ ballast assemblies arrives at the Vehicle Assembly Building at NASA’s Kennedy Space Center, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  16. KSC-08pd3244

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - The Ares IX upper stage segments’ ballast assemblies are offloaded from one of five trucks which delivered them to the Vehicle Assembly Building at NASA’s Kennedy Space Center, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  17. KSC-08pd3246

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - Workers lower an Ares IX upper stage segments’ ballast assembly onto the floor of high bay 4 in the Vehicle Assembly Building at NASA’s Kennedy Space Center, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  18. KSC-08pd3249

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - The Ares IX upper stage segments’ ballast assemblies have arrived at NASA’s Kennedy Space Center and are positioned along the floor of high bay 4 in the Vehicle Assembly Building, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  19. KSC-08pd3248

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - Ares IX upper stage segments’ ballast assemblies are positioned along the floor of high bay 4 in the Vehicle Assembly Building at NASA’s Kennedy Space Center, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  20. KSC-08pd3250

    NASA Image and Video Library

    2008-10-17

    CAPE CANAVERAL, Fla. - The Ares IX upper stage segments’ ballast assemblies have arrived at NASA’s Kennedy Space Center and are positioned along the floor of high bay 4 in the Vehicle Assembly Building, part of the preparations for the test of the Ares IX rocket. These ballast assemblies will be installed in the upper stage 1 and 7 segments and will mimic the mass of the fuel. Their total weight is approximately 160,000 pounds. The test launch of the Ares IX in 2009 will be the first designed to determine the flight-worthiness of the Ares I rocket. Ares I is an in-line, two-stage rocket that will transport the Orion crew exploration vehicle to low-Earth orbit. The Ares I first stage will be a five-segment solid rocket booster based on the four-segment design used for the space shuttle. Ares I’s fifth booster segment allows the launch vehicle to lift more weight and reach a higher altitude before the first stage separates from the upper stage, which ignites in midflight to propel the Orion spacecraft to Earth orbit. Photo credit: NASA/Kim Shiflett

  1. Quantitative mouse brain phenotyping based on single and multispectral MR protocols

    PubMed Central

    Badea, Alexandra; Gewalt, Sally; Avants, Brian B.; Cook, James J.; Johnson, G. Allan

    2013-01-01

    Sophisticated image analysis methods have been developed for the human brain, but such tools still need to be adapted and optimized for quantitative small animal imaging. We propose a framework for quantitative anatomical phenotyping in mouse models of neurological and psychiatric conditions. The framework encompasses an atlas space, image acquisition protocols, and software tools to register images into this space. We show that a suite of segmentation tools (Avants, Epstein et al., 2008) designed for human neuroimaging can be incorporated into a pipeline for segmenting mouse brain images acquired with multispectral magnetic resonance imaging (MR) protocols. We present a flexible approach for segmenting such hyperimages, optimizing registration, and identifying optimal combinations of image channels for particular structures. Brain imaging with T1, T2* and T2 contrasts yielded accuracy in the range of 83% for hippocampus and caudate putamen (Hc and CPu), but only 54% in white matter tracts, and 44% for the ventricles. The addition of diffusion tensor parameter images improved accuracy for large gray matter structures (by >5%), white matter (10%), and ventricles (15%). The use of Markov random field segmentation further improved overall accuracy in the C57BL/6 strain by 6%; so Dice coefficients for Hc and CPu reached 93%, for white matter 79%, for ventricles 68%, and for substantia nigra 80%. We demonstrate the segmentation pipeline for the widely used C57BL/6 strain, and two test strains (BXD29, APP/TTA). This approach appears promising for characterizing temporal changes in mouse models of human neurological and psychiatric conditions, and may provide anatomical constraints for other preclinical imaging, e.g. fMRI and molecular imaging. This is the first demonstration that multiple MR imaging modalities combined with multivariate segmentation methods lead to significant improvements in anatomical segmentation in the mouse brain. PMID:22836174

  2. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method for automatic threshold selection in image segmentation is presented. An optimal threshold is selected so as to best preserve image edges during segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is obtained by discretizing that integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
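    For reference, the gray-level-histogram baseline whose shortcoming this record analyzes, Otsu's between-class-variance criterion, can be sketched as follows (toy bimodal data, not the paper's edge-energy method):

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Classic Otsu threshold from the gray-level histogram:
    choose the level that maximizes the between-class variance."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # class-0 probability up to each bin
    mu = np.cumsum(p * centers)        # cumulative mean up to each bin
    mu_t = mu[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

# Bimodal toy "image": two Gaussian intensity modes around 50 and 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
t = otsu_threshold(img)
```

    On clean bimodal data Otsu lands between the modes; the paper's criticism is that maximizing histogram separability ignores edge structure, which its edge-energy-maximizing threshold is designed to preserve.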

  3. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

    The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions. There is limited scope for modifying the growth curve of Nellore cattle toward selecting for rapid growth at young ages while maintaining a constant adult weight.

  4. Utility of Diffusion-Weighted MRI to Detect Changes in Liver Diffusion in Benign and Malignant Distal Bile Duct Obstruction: The Influence of Choice of b-Values.

    PubMed

    Karan, Belgin; Erbay, Gurcan; Koc, Zafer; Pourbagher, Aysin; Yildirim, Sedat; Agildere, Ahmet Muhtesem

    2016-11-01

    The study sought to evaluate the potential of diffusion-weighted magnetic resonance imaging to detect changes in liver diffusion in benign and malignant distal bile duct obstruction and to investigate the effect of the choice of b-values on apparent diffusion coefficient (ADC). Diffusion-weighted imaging was acquired with b-values of 200, 600, 800, and 1000 s/mm². ADC values were obtained in 4 segments of the liver. The mean ADC values of 16 patients with malignant distal bile duct obstruction, 14 patients with benign distal bile duct obstruction, and a control group of 16 healthy patients were compared. Mean ADC values for 4 liver segments were lower in the malignant obstruction group than in the benign obstruction and control groups using b = 200 s/mm² (P < .05). Mean ADC values of the left lobe medial and lateral segments were lower in the malignant obstruction group than in the benign obstructive and control groups using b = 600 s/mm² (P < .05). Mean ADC values of the right lobe posterior segment were lower in the malignant and benign obstruction groups than in the control group using b = 1000 s/mm² (P < .05). Using b = 800 s/mm², ADC values of all 4 liver segments in each group were not significantly different (P > .05). There were no correlations between the ADC values of liver segments and liver function tests. Measurement of ADC shows good potential for detecting changes in liver diffusion in patients with distal bile duct obstruction. Calculated ADC values were affected by the choice of b-values. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
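    Under the standard monoexponential diffusion model, an ADC can be sketched from signals at two b-values (toy numbers, not the study's measurements), which also makes clear why the chosen b-values influence the result:

```python
import numpy as np

def adc_two_point(s_low, s_high, b_low, b_high):
    """Monoexponential ADC from signals at two b-values:
    S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S_low / S_high) / (b_high - b_low)."""
    return np.log(s_low / s_high) / (b_high - b_low)

# Toy signals consistent with ADC = 1.2e-3 mm^2/s between b = 0 and b = 600 s/mm^2.
s0, adc_true = 1000.0, 1.2e-3
s600 = s0 * np.exp(-600 * adc_true)
adc = adc_two_point(s0, s600, 0, 600)  # ≈ 1.2e-3
```

    In real tissue the decay is not purely monoexponential (perfusion dominates at low b), so, as the study reports, the ADC computed from b = 200 s/mm² differs systematically from one computed at b = 800 or 1000 s/mm².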

  5. Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.

    PubMed

    Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C

    2017-07-01

    To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6%, respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. © 2017 American Association of Physicists in Medicine.

  6. Analysis and Verification of HET 1 m Mirror Deflections Due to Edge Sensor Loading

    NASA Technical Reports Server (NTRS)

    Stallcup, Michael A.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The ninety-one 1 m mirror segments which comprise the McDonald Observatory Hobby Eberly Telescope (HET) primary mirror have been observed to drift out of alignment in an unpredictable manner in response to time variant temperature deviations. A Segment Alignment Maintenance System (SAMS) is being developed to detect and correct this segment-to-segment drift using sensors mounted at the edges of the mirror segments. However, the segments were not originally designed to carry the weight of edge sensors. Thus, analyses and tests were conducted as part of the SAMS design to estimate the magnitude and shape of the edge sensor induced deformations as well as the resultant optical performance. Interferometric testing of a 26 m radius of curvature HET mirror segment was performed at the Marshall Space Flight Center using several load conditions to verify the finite element analyses.

  7. Segmentation of breast ultrasound images based on active contours using neutrosophic theory.

    PubMed

    Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A

    2018-04-01

    Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis suggest that it can assist physicians in diagnosis, but definition of the region of interest before computer analysis is still needed. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper presents a novel method to segment breast ultrasound images. It utilizes a combination of a region-based active contour and neutrosophic theory to overcome the natural properties of ultrasound images, including speckle noise and tissue-related textures. First, owing to the inherent speckle noise and low contrast of these images, we utilized a non-local means filter and a fuzzy logic method for denoising and image enhancement, respectively. This paper presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. Applied to 36 breast ultrasound images, the method achieved a true-positive rate, false-positive rate, and similarity of 95%, 6%, and 90%, respectively. The proposed method shows clear advantages over other conventional active contour segmentation methods, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.

  8. Reproducible segmentation of white matter hyperintensities using a new statistical definition.

    PubMed

    Damangir, Soheil; Westman, Eric; Simmons, Andrew; Vrenken, Hugo; Wahlund, Lars-Olof; Spulber, Gabriela

    2017-06-01

    We present a method based on a proposed statistical definition of white matter hyperintensities (WMH), which can work with any combination of conventional magnetic resonance (MR) sequences without depending on manually delineated samples. T1-weighted, T2-weighted, FLAIR, and PD sequences acquired at 1.5 Tesla from 119 subjects from the Kings Health Partners-Dementia Case Register (healthy controls, mild cognitive impairment, Alzheimer's disease) were used. The segmentation was performed using a proposed definition for WMH based on the one-tailed Kolmogorov-Smirnov test. The presented method was verified, given all possible combinations of input sequences, against manual segmentations and a high similarity (Dice 0.85-0.91) was observed. Comparing segmentations with different input sequences to one another also yielded a high similarity (Dice 0.83-0.94) that exceeded intra-rater similarity (Dice 0.75-0.91). We compared the results with those of other available methods and showed that the segmentation based on the proposed definition has better accuracy and reproducibility in the test dataset used. Overall, the presented definition is shown to produce accurate results with higher reproducibility than manual delineation. This approach can be an alternative to other manual or automatic methods not only because of its accuracy, but also due to its good reproducibility.
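    A one-tailed KS statistic of the kind this definition rests on can be sketched with NumPy (illustrative reference distribution and samples; not the authors' implementation). Here large values of the one-sided statistic D⁻ flag a sample shifted toward higher intensities than the reference tissue model:

```python
import numpy as np
from math import erf

def ks_hyperintense(sample, ref_cdf):
    """One-tailed KS statistic D- = sup_x (F(x) - F_n(x)); large values mean
    the empirical CDF lags the reference, i.e. the sample is shifted brighter."""
    x = np.sort(sample)
    n = len(x)
    ecdf_lo = np.arange(0, n) / n          # F_n just below each order statistic
    return np.max(ref_cdf(x) - ecdf_lo)

# Hypothetical reference: standard-normal "normal tissue" intensity model.
ref = np.vectorize(lambda t: 0.5 * (1 + erf(t / np.sqrt(2))))

rng = np.random.default_rng(2)
normal_tissue = rng.normal(0.0, 1.0, 500)   # consistent with the reference
bright_lesion = rng.normal(2.5, 1.0, 500)   # hyperintense sample
d_normal = ks_hyperintense(normal_tissue, ref)
d_lesion = ks_hyperintense(bright_lesion, ref)
```

    The hyperintense sample yields a much larger statistic than the in-model sample, which is the basic mechanism that lets a KS-based definition separate WMH from normal-appearing tissue without manually delineated training samples.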

  9. Synthesis and characterization of segmented poly(esterurethane urea) elastomers for bone tissue engineering

    PubMed Central

    Kavlock, Katherine D.; Pechar, Todd W.; Hollinger, Jeffrey O.; Guelcher, Scott A.; Goldstein, Aaron S.

    2007-01-01

    Segmented polyurethanes have been used extensively in implantable medical devices, but their tunable mechanical properties make them attractive for examining the effect of biomaterial modulus on engineered musculoskeletal tissue development. In this study a family of segmented degradable poly(esterurethane urea)s (PEUURs) were synthesized from 1,4-diisocyanatobutane, a poly(ε-caprolactone) (PCL) macrodiol soft segment and a tyramine-1,4-diisocyanatobutane-tyramine chain extender. By systematically increasing the PCL macrodiol molecular weight from 1100 to 2700 Da, the storage modulus, crystallinity and melting point of the PCL segment were systematically varied. In particular, the melting temperature, Tm, increased from 21 to 61°C and the storage modulus at 37°C increased from 52 to 278 MPa with increasing PCL macrodiol molecular weight, suggesting that the crystallinity of the PCL macrodiol contributed significantly to the mechanical properties of the polymers. Bone marrow stromal cells were cultured on rigid polymer films under osteogenic conditions for up to 14 days. Cell density, alkaline phosphatase activity, and osteopontin and osteocalcin expression were similar among PEUURs and comparable to poly(D,L-lactic-coglycolic acid). This study demonstrates the suitability of this family of PEUURs for tissue engineering applications, and establishes a foundation for determining the effect of biomaterial modulus on bone tissue development. PMID:17418651

  10. The algorithm study for using the back propagation neural network in CT image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi

    2017-01-01

    Back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Since a BP network can learn and store mappings between large numbers of inputs and outputs without complex mathematical equations to describe the mapping relationship, it is widely used. BP iteratively computes the weight coefficients and thresholds of the network based on training samples and back propagation of errors, minimizing the error sum of squares of the network. Since the boundary of computed tomography (CT) heart images is usually discontinuous, and the volume and boundary of heart images vary greatly, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Meanwhile, there are large differences between diastolic and systolic images, which conventional methods cannot accurately distinguish. In this paper, we introduce a BP network to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain training samples, and the BP network was trained on these samples. To obtain an appropriate BP network for heart image segmentation, we normalized the heart images and extracted the gray-level information of the heart. The boundary of the images was then input into the network to compare the differences between the theoretical and actual outputs, and the errors were fed back into the BP network to modify the weight coefficients of the layers. Through extensive training, the BP network becomes stable and the weight coefficients of the layers can be determined, capturing the relationship between the CT images and the heart boundary.
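    The training loop the abstract describes (forward pass, squared-error loss, errors propagated backward to update weights and thresholds) can be sketched in a few lines. This is a generic one-hidden-layer example; the architecture, learning rate, and epoch count are arbitrary choices for illustration, not those of the paper's CT network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
    """Minimal back-propagation sketch: forward pass, squared-error loss,
    backward pass updating weights (W1, W2) and thresholds/biases (b1, b2)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)        # forward: hidden activations
        out = sigmoid(h @ W2 + b2)      # forward: network output
        err = out - y                   # error, propagated backward below
        d2 = err * out * (1 - out)      # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)  # hidden-layer delta
        W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
    return W1, b1, W2, b2, float((err ** 2).sum())
```

    The returned value is the final error sum of squares, the quantity BP training minimizes.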

  11. Agrobacterium-mediated transformation of Mexican lime (Citrus aurantifolia Swingle) using optimized systems for epicotyls and cotyledons

    USDA-ARS?s Scientific Manuscript database

    Epicotyl and internodal stem segments provide the predominantly used explants for regeneration of transgenic citrus plants following co-cultivation with Agrobacterium. Previous reports using epicotyl segments from Mexican lime have shown low affinity for Agrobacterium tumefaciens infection, which re...

  12. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  13. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
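    The paper's main finding, that neighborhood-aware features matter more than the classifier, is easy to illustrate. The sketch below builds a two-column feature matrix (voxel intensity plus local mean via `scipy.ndimage.uniform_filter`) and fits a plain gradient-descent logistic regression; all names and parameters are our illustrative choices, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_features(img, size=3):
    """Stack each voxel's own intensity with its local mean, turning one
    channel into two features per voxel (illustrative feature extraction)."""
    return np.stack([img.ravel(), uniform_filter(img, size).ravel()], axis=1)

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression, standing in for the
    simple, fast classifiers the paper recommends."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict(w, X):
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```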

  14. Correlation-based discrimination between cardiac tissue and blood for segmentation of 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Saris, Anne E. C. M.; Nillesen, Maartje M.; Lopata, Richard G. P.; de Korte, Chris L.

    2013-03-01

    Automated segmentation of 3D echocardiographic images in patients with congenital heart disease is challenging, because the boundary between blood and cardiac tissue is poorly defined in some regions. Cardiologists mentally incorporate movement of the heart, using temporal coherence of structures to resolve ambiguities. Therefore, we investigated the merit of temporal cross-correlation for automated segmentation over the entire cardiac cycle. Optimal settings for maximum cross-correlation (MCC) calculation, based on a 3D cross-correlation based displacement estimation algorithm, were determined to obtain the best contrast between blood and myocardial tissue over the entire cardiac cycle. The resulting envelope-based as well as RF-based MCC values were used as an additional external force in a deformable model approach, to segment the left-ventricular cavity over the entire systolic phase. MCC values were tested against, and combined with, adaptively filtered, demodulated RF data. Segmentation results were compared with manually segmented volumes using a 3D Dice Similarity Index (3DSI). Results in 3D pediatric echocardiographic image sequences (n = 4) demonstrate that incorporation of temporal information improves segmentation. The use of MCC values, either alone or in combination with adaptively filtered, demodulated RF data, resulted in an increase of the 3DSI in 75% of the cases (average 3DSI increase: 0.71 to 0.82). Results might be further improved by optimizing MCC contrast locally, in regions with low blood-tissue contrast. Reducing the underestimation of the endocardial volume due to the MCC processing scheme (choice of window size), and the consequent border misalignment, could also lead to more accurate segmentations. Furthermore, increasing the frame rate will also increase MCC contrast and thus improve segmentation.
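    The MCC idea can be sketched in 2-D: correlate a window from one frame against nearby shifted windows in the next frame and keep the maximum. Coherent tissue speckle tracks well (MCC near 1) while decorrelating blood does not. This is a simplified illustration; the window size and search range are arbitrary, not the paper's optimized settings.

```python
import numpy as np

def _ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized windows."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def mcc(frame_a, frame_b, y0, x0, h, w, search=2):
    """Maximum cross-correlation of window (y0, x0, h, w) of frame_a over
    shifted windows of frame_b within +/- `search` pixels."""
    a = frame_a[y0:y0 + h, x0:x0 + w]
    best = -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame_b.shape[0] or x + w > frame_b.shape[1]:
                continue  # shifted window falls outside the frame
            best = max(best, _ncc(a, frame_b[y:y + h, x:x + w]))
    return best
```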

  15. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software.
Results with Landsat TM data are included comparing RHSEG with classic region growing.

  16. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
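    The core SFT idea, computing statistics of small image segments and deriving the signal threshold from segments identified as background, can be sketched roughly as follows. The tiling scheme, the median-variance background rule, and the `k`-sigma threshold here are our simplifications, not the best-fit-trend procedure of the paper.

```python
import numpy as np

def segment_and_fit_threshold(img, tile=8, k=4.0):
    """Rough SFT-style sketch: split the image into tiles, compute per-tile
    statistics, take the flattest (low-variance) tiles as background, and set
    the signal threshold from the background statistics."""
    h, w = img.shape
    tiles = (img[:h - h % tile, :w - w % tile]
             .reshape(h // tile, tile, w // tile, tile)
             .swapaxes(1, 2).reshape(-1, tile, tile))
    stds = tiles.std(axis=(1, 2))
    bg = tiles[stds <= np.median(stds)]   # flattest half ~ background segments
    thr = bg.mean() + k * bg.std()        # threshold from background stats
    return thr, img > thr
```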

  17. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  18. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
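    The trace-segment connection step above reduces to two geometric checks: the angle between segment direction vectors and the gap between their nearest endpoints. A minimal sketch, with segments as endpoint pairs and thresholds chosen inside the ranges the paper reports (the function and data layout are our assumptions):

```python
import numpy as np

def can_connect(seg_a, seg_b, angle_thresh_deg=60.0, dist_thresh=1.0):
    """Connect two trace segments when the angle between their direction
    vectors is below the threshold and their nearest endpoints are close.
    Each segment is a 2x3 array of endpoint coordinates."""
    da = seg_a[1] - seg_a[0]
    db = seg_b[1] - seg_b[0]
    cosang = abs(da @ db) / (np.linalg.norm(da) * np.linalg.norm(db))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    gap = min(np.linalg.norm(p - q) for p in seg_a for q in seg_b)
    return angle <= angle_thresh_deg and gap <= dist_thresh
```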

  19. Segmentation of a Vibro-Shock Cantilever-Type Piezoelectric Energy Harvester Operating in Higher Transverse Vibration Modes

    PubMed Central

    Zizys, Darius; Gaidys, Rimvydas; Dauksevicius, Rolanas; Ostasevicius, Vytautas; Daniulaitis, Vytautas

    2015-01-01

    The piezoelectric transduction mechanism is a common vibration-to-electric energy harvesting approach. Piezoelectric energy harvesters are typically mounted on a vibrating host structure, whereby alternating voltage output is generated by a dynamic strain field. A design target in this case is to match the natural frequency of the harvester to the ambient excitation frequency for the device to operate in resonance mode, thus significantly increasing vibration amplitudes and, as a result, energy output. Other fundamental vibration modes have strain nodes, where the dynamic strain field changes sign in the direction of the cantilever length. The paper reports on a dimensionless numerical transient analysis of a cantilever of a constant cross-section and an optimally-shaped cantilever with the objective to accurately predict the position of a strain node. Total effective strain produced by both cantilevers segmented at the strain node is calculated via transient analysis and compared to the strain output produced by the cantilevers segmented at strain nodes obtained from modal analysis, demonstrating a 7% increase in energy output. Theoretical results were experimentally verified by using open-circuit voltage values measured for the cantilevers segmented at optimal and suboptimal segmentation lines. PMID:26703623
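    The strain node the segmentation targets is where the mode shape's second derivative (and hence the bending strain) changes sign. For a uniform Euler-Bernoulli cantilever this can be located numerically from the standard mode-shape formula; the sketch below does this for the second transverse mode and is a textbook calculation, not the paper's optimally-shaped cantilever model.

```python
import numpy as np

# Standard Euler-Bernoulli constants for the second bending mode of a
# uniform cantilever: second root of 1 + cos(bL)cosh(bL) = 0.
BETA_L = 4.69409
SIGMA = (np.sinh(BETA_L) - np.sin(BETA_L)) / (np.cosh(BETA_L) + np.cos(BETA_L))

def mode2_strain(x):
    """Bending strain shape (proportional to the mode shape's second
    derivative) of the second mode; x is the normalized position in [0, 1]."""
    bx = BETA_L * x
    return np.cosh(bx) + np.cos(bx) - SIGMA * (np.sinh(bx) + np.sin(bx))

def strain_node():
    """Find where the second-mode strain first changes sign along the beam."""
    xs = np.linspace(0.0, 1.0, 100001)
    s = mode2_strain(xs)
    idx = np.flatnonzero(np.sign(s[:-1]) != np.sign(s[1:]))[0]
    return float(xs[idx])
```

    For a uniform cantilever the second-mode strain node sits near x/L of 0.22, which is where segmenting the electrode avoids charge cancellation.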

  20. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions

    PubMed Central

    Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas

    2012-01-01

    We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
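    The sparse linear algebra at the heart of Random Walker segmentation is worth making concrete: given a weighted graph Laplacian L partitioned into seeded and unseeded nodes, one solves L_u x = -B m for the probability that each unseeded node's walker first reaches a foreground seed. The 1-D chain below is our minimal illustration (the paper works on images and maps the solve to GPUs via CUDA).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def random_walker_1d(weights, seeds):
    """Random walker on a 1-D chain. `weights[i]` is the edge weight between
    nodes i and i+1; `seeds` maps node index -> 1 (foreground) or 0
    (background). Returns per-node foreground probabilities."""
    n = len(weights) + 1
    L = lil_matrix((n, n))
    for i, w in enumerate(weights):   # assemble the graph Laplacian
        L[i, i] += w; L[i + 1, i + 1] += w
        L[i, i + 1] -= w; L[i + 1, i] -= w
    L = L.tocsr()
    unseeded = [i for i in range(n) if i not in seeds]
    seeded = sorted(seeds)
    Lu = L[unseeded][:, unseeded]     # unseeded-unseeded block
    B = L[unseeded][:, seeded]        # unseeded-seeded coupling block
    m = np.array([seeds[j] for j in seeded], float)
    x = spsolve(Lu, -B @ m)           # solve L_u x = -B m
    probs = np.empty(n)
    probs[unseeded] = x
    for j in seeded:
        probs[j] = seeds[j]
    return probs
```

    On a uniform chain the solution is the harmonic (linear) interpolation between the seeds, which is a handy sanity check.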

  1. Segmentation of a Vibro-Shock Cantilever-Type Piezoelectric Energy Harvester Operating in Higher Transverse Vibration Modes.

    PubMed

    Zizys, Darius; Gaidys, Rimvydas; Dauksevicius, Rolanas; Ostasevicius, Vytautas; Daniulaitis, Vytautas

    2015-12-23

    The piezoelectric transduction mechanism is a common vibration-to-electric energy harvesting approach. Piezoelectric energy harvesters are typically mounted on a vibrating host structure, whereby alternating voltage output is generated by a dynamic strain field. A design target in this case is to match the natural frequency of the harvester to the ambient excitation frequency for the device to operate in resonance mode, thus significantly increasing vibration amplitudes and, as a result, energy output. Other fundamental vibration modes have strain nodes, where the dynamic strain field changes sign in the direction of the cantilever length. The paper reports on a dimensionless numerical transient analysis of a cantilever of a constant cross-section and an optimally-shaped cantilever with the objective to accurately predict the position of a strain node. Total effective strain produced by both cantilevers segmented at the strain node is calculated via transient analysis and compared to the strain output produced by the cantilevers segmented at strain nodes obtained from modal analysis, demonstrating a 7% increase in energy output. Theoretical results were experimentally verified by using open-circuit voltage values measured for the cantilevers segmented at optimal and suboptimal segmentation lines.

  2. Graph-Cut Methods for Grain Boundary Segmentation (Preprint)

    DTIC Science & Technology

    2011-06-01

    metals and metal alloys) are among the strongest determinants of many material properties, such as mechanical strength or fracture resistance. In materials...cropped) Ni-based alloy image (a) using normalized cut (b) and ratio cut (c). Similar to normalized cut is the average-cut approach [11], where the...framework [2]. (a) (b) (c) Figure 3: Segmentation of a (cropped) Ni-based alloy image by optimal labeling. (a) Segmented grain boundaries in a template

  3. Weighted optimization of irradiance for photodynamic therapy of port wine stains

    NASA Astrophysics Data System (ADS)

    He, Linhuan; Zhou, Ya; Hu, Xiaoming

    2016-10-01

    Planning of irradiance distribution (PID) is one of the foremost factors in on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID was proposed according to the grading of PWS with a three-dimensional digital illumination instrument. Firstly, the point clouds of lesions were filtered to remove erroneous or redundant points, triangulation was carried out, and the lesion was divided into small triangular patches. Secondly, the parameters of each triangular patch needed for optimization, such as area, normal vector and orthocenter, were calculated, and the weighted coefficients were determined by the erythema indexes and areas of the patches. Then, the optimization initial point was calculated based on the normal vectors and orthocenters to optimize the light direction. Finally, the irradiance can be optimized according to the cosine values of the irradiance angles and the weighted coefficients. Comparing the irradiance distribution before and after optimization shows that the proposed weighted optimization method makes the irradiance distribution match the characteristics of the lesions better, and it has the potential to improve therapeutic efficacy.
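    The objective sketched above, each triangular patch contributing its erythema-derived weight times area times the cosine of the incidence angle, is simple to write down. The function below is an illustrative scoring of one candidate light direction (the names and the clamping of back-facing patches to zero are our assumptions, not the paper's formulation).

```python
import numpy as np

def weighted_irradiance_score(normals, areas, weights, light_dir):
    """Weighted cosine-incidence objective over triangular patches: each
    patch contributes weight * area * cos(angle between its unit normal and
    the light direction); back-facing patches contribute zero."""
    d = light_dir / np.linalg.norm(light_dir)
    cos = np.clip(normals @ d, 0.0, None)
    return float((weights * areas * cos).sum())
```

    An optimizer would then search over `light_dir` (and source position) to maximize this score.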

  4. Adopting epidemic model to optimize medication and surgical intervention of excess weight

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2017-01-01

    We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
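    A compartmental model of this kind can be sketched with `scipy.integrate.solve_ivp`. The equations below are our hypothetical illustration, not the paper's model: people progress normal -> overweight -> obese at rates beta and gamma, and an intervention with participation rate u moves people back down one group.

```python
import numpy as np
from scipy.integrate import solve_ivp

def weight_model(t, y, beta, gamma, u):
    """Hypothetical three-compartment flow: n (normal), s (overweight),
    o (obese). Total population is conserved (derivatives sum to zero)."""
    n, s, o = y
    dn = -beta * n + u * s
    ds = beta * n - gamma * s - u * s + u * o
    do = gamma * s - u * o
    return [dn, ds, do]

def excess_weight_at(t_end, u, beta=0.1, gamma=0.05):
    """Fraction of the population with excess weight (s + o) at time t_end
    under a fixed participation rate u."""
    sol = solve_ivp(weight_model, (0.0, t_end), [0.6, 0.3, 0.1],
                    args=(beta, gamma, u), rtol=1e-8)
    n, s, o = sol.y[:, -1]
    return s + o
```

    Sweeping `u` against the combined objective (excess weight plus intervention cost) is then a one-dimensional search for u*.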

  5. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing.

    PubMed

    Hsieh, Thomas M; Liu, Yi-Min; Liao, Chun-Chih; Xiao, Furen; Chiang, I-Jen; Wong, Jau-Min

    2011-08-26

    In recent years, magnetic resonance imaging (MRI) has become important in brain tumor diagnosis. Using this modality, physicians can locate specific pathologies by analyzing differences in tissue character presented in different types of MR images. This paper uses an algorithm integrating fuzzy c-means (FCM) and region growing techniques for automated tumor image segmentation in patients with meningioma. Only non-contrasted T1- and T2-weighted MR images are included in the analysis. The study's aims are to correctly locate tumors in the images, and to detect those situated in the midline position of the brain. The study used non-contrasted T1- and T2-weighted MR images from 29 patients with meningioma. After FCM clustering, 32 groups of images from each patient group were put through the region-growing procedure for pixel aggregation. Later, using knowledge-based information, the system selected tumor-containing images from these groups and merged them into one tumor image. An alternative semi-supervised method was added at this stage for comparison with the automatic method. Finally, the tumor image was optimized by a morphology operator. Results from automatic segmentation were compared to the "ground truth" (GT) on a pixel level. Overall data were then evaluated using a quantified system. The quantified parameters, including the "percent match" (PM) and "correlation ratio" (CR), suggested a high match between GT and the present study's system, as well as a fair level of correspondence. The results were compatible with those from other related studies. The system successfully detected all of the tumors situated at the midline of the brain. Six cases failed in the automatic group. One also failed in the semi-supervised alternative. The remaining five cases presented noticeable edema inside the brain. In the 23 successful cases, the PM and CR values in the two groups were highly related.
Results indicated that, even when using only two sets of non-contrasted MR images, the system is a reliable and efficient method of brain-tumor detection. With further development the system demonstrates high potential for practical clinical use.
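    The pixel-aggregation step that follows FCM clustering is classic region growing: starting from a seed, absorb connected pixels whose intensity is close enough to the seed's. A minimal 4-connected sketch (the tolerance rule is our simplification of the knowledge-based criteria used in the paper):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.2):
    """Aggregate 4-connected pixels whose intensity is within `tol` of the
    seed value, via breadth-first growth from the seed."""
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```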

  6. WE-G-204-07: Automated Characterization of Perceptual Quality of Clinical Chest Radiographs: Improvements in Lung, Spine, and Hardware Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, J; Zhang, L; Samei, E

    Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities, which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel² ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. 
Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.

  7. Parameterization of Shape and Compactness in Object-based Image Classification Using Quickbird-2 Imagery

    NASA Astrophysics Data System (ADS)

    Tonbul, H.; Kavzoglu, T.

    2016-12-01

In recent years, object-based image analysis (OBIA) has become a widely accepted technique for the analysis of remotely sensed data. OBIA groups pixels into homogeneous objects based on the spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, image segmentation, is the most prominent part of object recognition. In this study, multiresolution segmentation, a region-based approach, was employed to construct image objects. In the application of multiresolution segmentation, three parameters, namely shape, compactness and scale, must be set by the analyst. Segmentation quality remarkably influences the fidelity of the thematic maps and accordingly the classification accuracy; it is therefore of great importance to search for optimal values of the segmentation parameters. In the literature, the main focus has been on the selection of the scale parameter, under the assumption that the effect of the shape and compactness parameters on classification accuracy is limited. The aim of this study is to analyze in depth the influence of the shape/compactness parameters by varying their values while using the optimal scale parameter determined with the Estimation of Scale Parameter (ESP-2) approach. A pansharpened Quickbird-2 image covering Trabzon, Turkey, was employed to investigate the objectives of the study. For this purpose, six different shape/compactness combinations were evaluated to characterize the behavior of the shape and compactness parameters and the optimal setting of all parameters as a whole. Objects were assigned to classes using a nearest-neighbor classifier for all segmentations, and equal numbers of pixels were randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3. The results of this study indicate that the shape/compactness parameters can have a significant effect on classification accuracy, with a 4% change in overall accuracy. The statistical significance of the differences in accuracy was tested using McNemar's test; the difference between poor and optimal settings of the shape/compactness parameters was statistically significant, suggesting a search for optimal parameterization instead of default settings.
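    The parameter search described in this record can be sketched as a simple grid search over candidate shape/compactness pairs. Everything below is illustrative, not the study's code: the candidate values and the toy accuracy function (standing in for a real segmentation-plus-classification pipeline) are assumptions.

    ```python
    from itertools import product

    def best_segmentation_params(evaluate, shapes=(0.1, 0.3, 0.5),
                                 compactness=(0.3, 0.5, 0.7)):
        """Exhaustively score each shape/compactness pair and return the best.

        `evaluate` is a user-supplied callable mapping (shape, compactness)
        to an overall classification accuracy in [0, 1].
        """
        return max(product(shapes, compactness), key=lambda p: evaluate(*p))

    # Toy stand-in for a real segmentation + classification pipeline;
    # it peaks at shape=0.3, compactness=0.3, echoing the study's optimum.
    def toy_accuracy(shape, compactness):
        return 0.923 - abs(shape - 0.3) - abs(compactness - 0.3)
    ```

    In practice `evaluate` would run the segmentation, classify the resulting objects, and return the overall accuracy for that parameter pair.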

  8. Flat-Top Sector Beams Using Only Array Element Phase Weighting: A Metaheuristic Optimization Approach

    DTIC Science & Technology

    2012-10-10

    Irwin D. Olin (Sotera Defense Solutions, Inc.; Naval...). Formal report; manuscript approved June 30, 2012.

  9. Nanostructure and Dynamics of Ionic and Non-Ionic PEO-Containing Polyureas

    NASA Astrophysics Data System (ADS)

    Chuayprakong, Sunanta; Runt, James

    2013-03-01

    A series of poly(ethylene oxide) (PEO)-based diamines with molecular weights ranging from 250 to 6000 g/mol were polymerized in solution with 4,4'-methylene diphenyl diisocyanate (MDI). In addition, PEO soft-segment diamines were modified to incorporate ionomeric species and likewise polymerized with MDI. The roles of PEO soft-segment molecular weight and of the presence of ionic species in nanoscale segregation and cation conductivity were explored. The former was investigated using small-angle X-ray scattering and atomic force microscopy. Dielectric relaxation spectroscopy was used to investigate polymer and ion dynamics. Local environment and hydrogen bonding were identified using FTIR spectroscopy.

  10. Optimal Symmetric Multimodal Templates and Concatenated Random Forests for Supervised Brain Tumor Segmentation (Simplified) with ANTsR.

    PubMed

    Tustison, Nicholas J; Shrinidhi, K L; Wintermark, Max; Durst, Christopher R; Kandel, Benjamin M; Gee, James C; Grossman, Murray C; Avants, Brian B

    2015-04-01

    Segmenting and quantifying gliomas from MRI is an important task for diagnosis, for planning intervention, and for tracking tumor changes over time. However, the task is complicated by the lack of prior knowledge concerning tumor location, spatial extent, shape, possible displacement of normal tissue, and intensity signature. To accommodate such complications, we introduce a framework for supervised segmentation based on multiple-modality intensity, geometry, and asymmetry feature sets. These features drive a supervised whole-brain and tumor segmentation approach based on random-forest-derived probabilities. The asymmetry-related features (based on optimal symmetric multimodal templates) demonstrate excellent discriminative properties within this framework. We gain further performance by generating probability maps from the random forest models and using these maps in a refining Markov random field regularized probabilistic segmentation. This strategy allows us to interface the supervised learning capabilities of the random forest model with regularized probabilistic segmentation using the recently developed ANTsR package, a comprehensive statistical and visualization interface between the popular Advanced Normalization Tools (ANTs) and the R statistical project. The reported algorithmic framework was the top-performing entry in the MICCAI 2013 Multimodal Brain Tumor Segmentation challenge. The challenge data varied widely, consisting of four-modality MRI of both high-grade and low-grade gliomas from five different institutions. Average Dice overlap measures for the final algorithmic assessment were 0.87, 0.78, and 0.74 for the "complete", "core", and "enhanced" tumor components, respectively.
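    The Dice overlap measure reported in this record can be computed from a pair of binary masks. This is a minimal numpy sketch; the function name is illustrative.

    ```python
    import numpy as np

    def dice(seg, truth):
        """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).

        Returns 1.0 for two empty masks by convention.
        """
        seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
        denom = seg.sum() + truth.sum()
        return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0
    ```

    A Dice value of 0.87 for the "complete" tumor component means the automatic and reference masks overlap substantially relative to their combined size.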

  11. Patterns of Emphysema Heterogeneity

    PubMed Central

    Valipour, Arschang; Shah, Pallav L.; Gesierich, Wolfgang; Eberhardt, Ralf; Snell, Greg; Strange, Charlie; Barry, Robert; Gupta, Avina; Henne, Erik; Bandyopadhyay, Sourish; Raffy, Philippe; Yin, Youbing; Tschirren, Juerg; Herth, Felix J.F.

    2016-01-01

    Background: Although lobar patterns of emphysema heterogeneity are indicative of optimal target sites for lung volume reduction (LVR) strategies, the presence of segmental, or sublobar, heterogeneity is often underappreciated. Objective: The aim of this study was to understand lobar and segmental patterns of emphysema heterogeneity, which may more precisely indicate optimal target sites for LVR procedures. Methods: Patterns of emphysema heterogeneity were evaluated in a representative cohort of 150 severe (GOLD stage III/IV) chronic obstructive pulmonary disease (COPD) patients from the COPDGene study. High-resolution computerized tomography analysis software was used to measure tissue destruction throughout the lungs and to compute heterogeneity (≥ 15% difference in tissue destruction) between (inter-) and within (intra-) lobes for each patient. Emphysema tissue destruction was characterized segmentally to define patterns of heterogeneity. Results: Segmental tissue destruction revealed interlobar heterogeneity in the left lung (57%) and right lung (52%). Intralobar heterogeneity was observed in at least one lobe of every patient. No patient presented true homogeneity at a segmental level. There was true homogeneity across both lungs in 3% of the cohort when defining heterogeneity as ≥ 30% difference in tissue destruction. Conclusion: Many LVR technologies for the treatment of emphysema have focused on interlobar heterogeneity and target an entire lobe per procedure. Our observations suggest that a high proportion of patients with emphysema are affected by interlobar as well as intralobar heterogeneity. These findings prompt the need for a segmental approach to LVR in the majority of patients, to treat only the most diseased segments and preserve healthier ones. PMID:26430783
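    The heterogeneity criterion in this record (≥ 15% difference in tissue destruction between lobes) can be sketched as a simple check over per-lobe destruction percentages. The dictionary layout and lobe labels below are illustrative assumptions, not the study's software.

    ```python
    def interlobar_heterogeneity(destruction_by_lobe, threshold=15.0):
        """Flag a lung as heterogeneous when any two lobes differ in
        percent tissue destruction by at least `threshold` (15 in the study).
        """
        vals = list(destruction_by_lobe.values())
        return max(vals) - min(vals) >= threshold
    ```

    The same comparison applied within a lobe (segment vs. segment) would give the intralobar measure; raising `threshold` to 30 reproduces the stricter definition under which only 3% of the cohort was homogeneous.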

  12. Hippocampal subfield segmentation in temporal lobe epilepsy: Relation to outcomes.

    PubMed

    Kreilkamp, B A K; Weber, B; Elkommos, S B; Richardson, M P; Keller, S S

    2018-06-01

    To investigate the clinical and surgical outcome correlates of preoperative hippocampal subfield volumes in patients with refractory temporal lobe epilepsy (TLE) using a new magnetic resonance imaging (MRI) multisequence segmentation technique. We recruited 106 patients with TLE and hippocampal sclerosis (HS) who underwent conventional T1-weighted and T2 short TI inversion recovery MRI. An automated hippocampal segmentation algorithm was used to identify twelve subfields in each hippocampus. A total of 76 patients underwent amygdalohippocampectomy and postoperative seizure outcome assessment using the standardized ILAE classification. Semiquantitative hippocampal internal architecture (HIA) ratings were correlated with hippocampal subfield volumes. Patients with left TLE had smaller volumes of the contralateral presubiculum and hippocampus-amygdala transition area compared to those with right TLE. Patients with right TLE had reduced contralateral hippocampal tail volumes and improved outcomes. In all patients, there were no significant relationships between hippocampal subfield volumes and clinical variables such as duration and age at onset of epilepsy. There were no significant differences in any hippocampal subfield volumes between patients who were rendered seizure free and those with persistent postoperative seizure symptoms. Ipsilateral but not contralateral HIA ratings were significantly correlated with gross hippocampal and subfield volumes. Our results suggest that ipsilateral hippocampal subfield volumes are not related to the chronicity/severity of TLE. We did not find any hippocampal subfield volume or HIA rating differences in patients with optimal and unfavorable outcomes. In patients with TLE and HS, sophisticated analysis of hippocampal architecture on MRI may have limited value for prediction of postoperative outcome. © 2018 The Authors. Acta Neurologica Scandinavica Published by John Wiley & Sons Ltd.

  13. Optimization of coronagraph design for segmented aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jewell, Jeffrey; Ruane, Garreth; Shaklan, Stuart; Mawet, Dimitri; Redding, Dave

    2017-09-01

    The goal of directly imaging Earth-like planets in the habitable zone of other stars has motivated the design of coronagraphs for use with large segmented-aperture space telescopes. In order to achieve an optimal trade-off between planet-light throughput and diffracted-starlight suppression, we consider coronagraphs composed of a stage of phase control implemented with deformable mirrors (or other optical elements), pupil-plane apodization masks (grayscale or complex-valued), and focal-plane masks (either amplitude-only or complex-valued, including phase-only designs such as the vector vortex coronagraph). The optimization of these optical elements, with the goal of achieving 10 or more orders of magnitude of suppression of on-axis (starlight) diffracted light, represents a challenging non-convex optimization problem with a nonlinear dependence on the control degrees of freedom. We develop a new algorithmic approach to the design optimization problem, which we call the "Auxiliary Field Optimization" (AFO) algorithm. The central idea of the algorithm is to embed the original optimization problem, for either phase or amplitude (apodization) in various planes of the coronagraph, into a problem containing additional degrees of freedom, specifically fictitious "auxiliary" electric fields which serve as targets to inform the variation of the phase or amplitude parameters, leading to good feasible designs. We present the algorithm, discuss details of its numerical implementation, and prove convergence to local minima of the objective function (here taken to be the intensity of the on-axis source in a "dark hole" region of the science focal plane). Finally, we present results showing application of the algorithm to both unobscured off-axis and obscured on-axis segmented telescope aperture designs.
The application of the AFO algorithm to the coronagraph design problem has produced solutions which are capable of directly imaging planets in the habitable zone, provided end-to-end telescope system stability requirements can be met. Ongoing work includes advances of the AFO algorithm reported here to design in additional robustness to a resolved star, and other phase or amplitude aberrations to be encountered in a real segmented aperture space telescope.

  14. A segmental analysis of current and future scanning and optimizing technology in the hardwood sawmill industry

    Treesearch

    S.A. Bowe; R.L. Smith; D. Earl Kline; Philip A. Araman

    2002-01-01

    A nationwide survey of advanced scanning and optimizing technology in the hardwood sawmill industry was conducted in the fall of 1999. Three specific hardwood sawmill technologies were examined that included current edger-optimizer systems, future edger-optimizer systems, and future automated grading systems. The objectives of the research were to determine differences...

  15. Socioeconomic inequality in excessive body weight in Indonesia.

    PubMed

    Aizawa, Toshiaki; Helble, Matthias

    2017-11-01

    Exploiting the Indonesian Family Life Survey (IFLS), this paper studies the transition of socioeconomic-related excess weight disparity, including overweight and obesity, from 1993 to 2014. First, we show that the proportions of overweight and obese people in Indonesia increased rapidly during the time period covered and that poorer groups exhibited a larger annual excess weight growth rate than richer groups (7.49 percent vs. 3.01 percent). Second, by calculating the concentration index, we confirm that the prevalence of obesity affected increasingly poorer segments of Indonesian society. Consequently, the concentration index decreased during the study period, from 0.287 to 0.093. Finally, decomposing the change in the concentration index of excess weight from 2000 to 2014, we show that a large part of the change can be explained by a decrease in the elasticity of wealth and improved sanitary conditions in poorer households. Overall, obesity in Indonesia no longer affects purely the wealthier segments of the population but the entire socioeconomic spectrum. Copyright © 2017 Elsevier B.V. All rights reserved.
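    The concentration index used in this record can be computed from a health outcome and a wealth ranking. This is a minimal sketch of the standard fractional-rank formula, not the authors' code; negative values indicate the outcome is concentrated among the poor, positive values among the rich.

    ```python
    import numpy as np

    def concentration_index(health, wealth):
        """Concentration index C = 2*cov(y, R)/mean(y), where R is each
        individual's fractional rank in the wealth distribution.

        Equivalent form used here: C = 2*mean(y*R)/mean(y) - 1.
        """
        order = np.argsort(wealth)
        y = np.asarray(health, float)[order]
        n = len(y)
        rank = (np.arange(1, n + 1) - 0.5) / n  # fractional ranks in (0, 1)
        return 2.0 * np.mean(y * rank) / np.mean(y) - 1.0
    ```

    A fall from 0.287 to 0.093, as reported above, means excess weight became markedly less concentrated among the wealthy over the study period.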

  16. Whole-brain voxel-based morphometry in Kallmann syndrome associated with mirror movements.

    PubMed

    Koenigkam-Santos, M; Santos, A C; Borduqui, T; Versiani, B R; Hallak, J E C; Crippa, J A S; Castro, M

    2008-10-01

    There are 2 main hypotheses concerning the cause of mirror movements (MM) in Kallmann syndrome (KS): abnormal development of the primary motor system, involving the ipsilateral corticospinal tract; and lack of contralateral motor cortex inhibitory mechanisms, mainly through the corpus callosum. The purpose of our study was to determine white and gray matter volume changes in a KS population by using optimized voxel-based morphometry (VBM) and to investigate the relationship between the abnormalities and the presence of MM, addressing the 2 mentioned hypotheses. T1-weighted volumetric images from 21 patients with KS and 16 matched control subjects were analyzed with optimized VBM. Images were segmented and spatially normalized, and these deformation parameters were then applied to the original images before the second segmentation. Patients were divided into groups with and without MM, and a t test statistic was then applied on a voxel-by-voxel basis between the groups and controls to evaluate significant differences. When considering our hypothesis a priori, we found that 2 areas of increased gray matter volume, in the left primary motor and sensorimotor cortex, were demonstrated only in patients with MM, when compared with healthy controls. Regarding white matter alterations, no areas of altered volume involving the corpus callosum or the projection of the corticospinal tract were demonstrated. The VBM study did not show significant white matter changes in patients with KS but showed gray matter alterations in keeping with a hypertrophic response to a deficient pyramidal decussation in patients with MM. In addition, gray matter alterations were observed in patients without MM, which can represent more complex mechanisms determining the presence or absence of this symptom.

  17. [Gestational weight gain and optimal ranges in Chinese mothers giving singleton and full-term births in 2013].

    PubMed

    Wang, J; Duan, Y F; Pang, X H; Jiang, S; Yin, S A; Yang, Z Y; Lai, J Q

    2018-01-06

    Objective: To analyze the status of gestational weight gain (GWG) among Chinese mothers who gave singleton and full-term births, and to look at optimal GWG ranges. Methods: In 2013, using the multi-stage stratified and population proportional cluster sampling method, we investigated 8 323 mother-child pairs at their 0-24 months postpartum from 55 counties (cities/districts) of 30 provinces (except Tibet) in mainland China. Questionnaire was used to collect data on body weight before pregnancy and delivery, diseases during gestation, hemorrhage or not at postpartum, child birth weight and length, and other information about pregnant outcomes. We measured mother's body weight and height, and child's body weight and length. Based on 'Chinese Adult Body Weight Standard', we divided mothers into four groups according to their body weight before pregnancy: low weight (BMI<18.5 kg/m(2)), normal weight (BMI 18.5-23.9 kg/m(2)), overweight (BMI 24.0-27.9 kg/m(2)) and obesity (BMI≥28.0 kg/m(2)). The status of GWG was assessed by IOM optimal GWG guidelines. Chinese optimal GWG ranges were calculated according to the association of GWG with pregnant outcomes and anthropometry of mothers and children, and according to P25-P75 of GWG among mothers who had good pregnant outcomes and good anthropometry, and whose children had good anthropometry. The status of GWG was assessed by the new optimal ranges. Results: P50 (P25-P75) of GWG among the 8 323 mothers was 15.0 (10.0-19.0) kg. According to the proposed optimal GWG ranges of IOM, the proportions of inadequate, optimal and excessive GWG accounted for 27.2% (2 263 mothers), 36.2% (3 016 mothers) and 36.6% (3 044 mothers). The optimal GWG ranges for low weight, normal weight, overweight and obesity were 11.5-18.0, 10.0-15.0, 8.0-14.0 and 5.0-11.5 kg. 
Based on the optimal GWG ranges established in this study, the rates of inadequate, optimal and excessive GWG were 15.7% (1 303 mothers), 45.0% (3 744 mothers) and 39.3% (3 276 mothers); these rates differed significantly from those defined by the IOM standards (χ²=345.36, P<0.001). Conclusion: The median GWG among Chinese mothers is 15.0 kg, which is relatively high. This study suggests optimal GWG ranges for Chinese women giving singleton and full-term births, which appear to be lower than the IOM's.
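    The P25-P75 approach described in this record can be sketched as below. The function and group labels are illustrative assumptions, and the real study additionally conditioned on pregnancy outcomes and the anthropometry of mothers and children.

    ```python
    import numpy as np

    def optimal_gwg_ranges(gwg_kg, bmi_group, good_outcome):
        """P25-P75 of gestational weight gain among mothers with good
        outcomes, computed separately per pre-pregnancy BMI group.
        """
        gwg = np.asarray(gwg_kg, float)
        bmi_group = np.asarray(bmi_group)
        good = np.asarray(good_outcome, bool)
        ranges = {}
        for g in np.unique(bmi_group):
            sel = gwg[(bmi_group == g) & good]
            ranges[g] = (np.percentile(sel, 25), np.percentile(sel, 75))
        return ranges
    ```

    Applied per BMI group, this yields interval recommendations of the kind reported above (e.g. 10.0-15.0 kg for normal-weight mothers).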

  18. Optimizing SGLT inhibitor treatment for diabetes with chronic kidney diseases.

    PubMed

    Layton, Anita T

    2018-06-28

    Diabetes induces glomerular hyperfiltration, affects kidney function, and may lead to chronic kidney disease. A novel therapeutic treatment for diabetic patients targets the sodium-glucose cotransporter isoform 2 (SGLT2) in the kidney. SGLT2 inhibitors enhance urinary glucose, [Formula: see text] and fluid excretion and lower hyperglycemia in diabetes by inhibiting [Formula: see text] and glucose reabsorption along the proximal convoluted tubule. A goal of this study is to predict the effects of SGLT2 inhibitors in diabetic patients with and without chronic kidney disease. To that end, we applied computational rat kidney models to assess how SGLT2 inhibition affects renal solute transport and metabolism when the nephron population is normal or reduced (the latter simulates chronic kidney disease). The model predicts that SGLT2 inhibition induces glucosuria and natriuresis, with both effects enhanced in a remnant kidney. The model also predicts that the [Formula: see text] transport load, and thus the oxygen consumption, of the S3 segment is increased under SGLT2 inhibition, a consequence that may increase the risk of hypoxia for that segment. To protect the vulnerable S3 segment, we explore dual SGLT2/SGLT1 inhibition and seek to determine the optimal combination that would yield sufficient urinary glucose excretion while limiting the metabolic load on the S3 segment. The model predicts that the optimal combination of SGLT2/SGLT1 inhibition lowers the oxygen requirements of key tubular segments but decreases urine flow and [Formula: see text] excretion; the latter effect may limit the cardiovascular protection of the treatment.

  19. Weight optimization of plane truss using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay

    2017-11-01

    Optimization of a structure on the basis of weight has practical benefits in every engineering field, since efficiency is closely related to weight, and hence weight optimization is of prime importance. In civil engineering, weight-optimized structural elements are economical and easier to transport to site. In this study, a genetic optimization algorithm for the weight optimization of a steel truss, considering its shape, size and topology, was developed in MATLAB. Material strength and buckling stability requirements were adopted from the IS 800-2007 code for construction steel. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. A genetic algorithm is a natural-selection-based search technique that combines good solutions to a problem over many generations to improve the results; candidate solutions are generated randomly, each represented by a binary string analogous to a natural chromosome. The outcome of the study is a MATLAB program that can optimize a steel truss and display the optimized topology along with element shapes, deflections, and stress results.
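    A binary-string genetic algorithm of the kind described can be sketched as below. This is a generic minimal GA (tournament selection, one-point crossover, bit-flip mutation) with a hypothetical penalized truss-weight objective, not the authors' MATLAB program.

    ```python
    import random

    def ga_minimize(fitness, n_bits, pop=30, gens=60, pm=0.02, seed=1):
        """Minimal binary-string genetic algorithm minimizing `fitness`."""
        rng = random.Random(seed)
        P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
        best = min(P, key=fitness)
        for _ in range(gens):
            nxt = []
            while len(nxt) < pop:
                # Tournament selection of two parents, one-point crossover.
                a, b = (min(rng.sample(P, 3), key=fitness) for _ in range(2))
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
                # Bit-flip mutation with probability pm per bit.
                nxt.append([bit ^ (rng.random() < pm) for bit in child])
            P = nxt
            best = min(P + [best], key=fitness)
        return best

    # Hypothetical objective: decoded "cross-section area" drives weight,
    # with a quadratic penalty when the area violates a strength limit.
    def truss_weight(bits):
        area = 1 + sum(bits)
        penalty = max(0, 10 - area) ** 2
        return area + 5 * penalty
    ```

    A real truss objective would decode member areas from the string and evaluate weight plus stress/displacement constraint penalties per IS 800-2007.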

  20. The activation of segmental and tonal information in visual word recognition.

    PubMed

    Li, Chuchu; Lin, Candise Y; Wang, Min; Jiang, Nan

    2013-08-01

    Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, although phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., , hong2, "red"), homophones of the color characters (S+T+; e.g., , hong2, "flood"), different-tone homophones (S+T-; e.g., , hong1, "boom"), characters that shared the same tone but differed in segments with the color characters (S-T+; e.g., , ping2, "bottle"), and neutral characters (S-T-; e.g., , qian1, "leading through"). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T- than for S-T+ trials, and was similar between S+T+ and S+T- trials. These findings suggested that both tonal and segmental forms of information play roles in lexical constraints; however, segmental information has more weight than tonal information. We proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account.

  1. Study on light weight design of truss structures of spacecrafts

    NASA Astrophysics Data System (ADS)

    Zeng, Fuming; Yang, Jianzhong; Wang, Jian

    2015-08-01

    Truss structures are usually adopted as the main structural form for spacecraft due to their high efficiency in supporting concentrated loads, and light-weight design is now the primary concern during the conceptual design of spacecraft. Implementation of light-weight design for a truss structure generally involves three processes: topology optimization, size optimization and composites optimization. For each optimization process, an appropriate algorithm is selected, such as the traditional optimality criterion method, mathematical programming methods, or intelligent algorithms that simulate growth and evolution processes in nature. Based on these processes and algorithms, combined with engineering practice and commercial software, a summary is given of the implementation of light-weight design for spacecraft truss structures.

  2. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polan, D; Brady, S; Kaufman, R

    2016-06-15

    Purpose: To develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following TWS classifier feature filters were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2^n (n = 0-4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient's (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manually and auto-segmented images. Results: The optimized auto-segmentation algorithm used 16 features calculated with maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90-0.99) for the adult and 0.92 (range: 0.85-0.99) for the pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentations with an average DSC of 0.85 ± 0.04 (range: 0.81-1.00) for the adult patients and 0.86 ± 0.03 (range: 0.80-0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment and was able to segment seven material classes over a range of body habitus and CT protocol parameters with an average DSC of 0.86 ± 0.04 (range: 0.80-0.99).
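    A Random Forest pixel classifier with TWS-style local feature filters can be sketched as follows. The toy two-material image and scikit-learn's RandomForestClassifier stand in for the actual CT data and the Weka pipeline; the feature set (intensity plus local mean and variance) is a deliberately reduced version of the filter bank described above.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(img, radius=1):
        """Per-pixel features: raw intensity plus local mean and variance
        over a (2r+1)x(2r+1) neighborhood, mirroring TWS-style filters.
        """
        pad = np.pad(img.astype(float), radius, mode='edge')
        win = np.lib.stride_tricks.sliding_window_view(
            pad, (2 * radius + 1, 2 * radius + 1))
        flat = win.reshape(*img.shape, -1)
        feats = np.stack([img, flat.mean(-1), flat.var(-1)], axis=-1)
        return feats.reshape(-1, feats.shape[-1])

    # Toy two-material "CT": dark background vs. a bright blob.
    img = np.zeros((20, 20))
    img[5:15, 5:15] = 100.0
    labels = (img > 50).astype(int).ravel()

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(pixel_features(img), labels)
    pred = clf.predict(pixel_features(img))
    ```

    The real pipeline would train on manually segmented slices and predict on unseen patients, then score agreement with the Dice similarity coefficient.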

  3. Short-term vs. long-term heart rate variability in ischemic cardiomyopathy risk stratification.

    PubMed

    Voss, Andreas; Schroeder, Rico; Vallverdú, Montserrat; Schulz, Steffen; Cygankiewicz, Iwona; Vázquez, Rafael; Bayés de Luna, Antoni; Caminal, Pere

    2013-01-01

    In industrialized countries with aging populations, heart failure affects 0.3-2% of the general population. The investigation of 24 h ECG recordings revealed the potential of nonlinear indices of heart rate variability (HRV) for enhanced risk stratification in patients with ischemic heart failure (IHF). However, long-term analyses are time-consuming, expensive, and delay the initial diagnosis. The objective of this study was to investigate whether 30 min short-term HRV analysis is sufficient for comparable risk stratification in IHF in comparison to 24 h HRV analysis. From 256 IHF patients [221 at low risk (IHFLR) and 35 at high risk (IHFHR)], we investigated (a) the 24 h beat-to-beat time series, (b) the first 30 min segment, (c) the most stationary 30 min daytime segment, and (d) the most stationary 30 min nighttime segment. We calculated linear (time and frequency domain) and nonlinear HRV indices. Optimal parameter sets for risk stratification in IHF were determined for the 24 h series and for each 30 min segment by applying discriminant analysis to significant clinical and non-clinical indices. Long- and short-term HRV indices from the frequency domain, and particularly from nonlinear dynamics, revealed high univariate significance (p < 0.01) in discriminating between IHFLR and IHFHR. For multivariate risk stratification, optimal mixed parameter sets consisting of 5 indices (clinical and nonlinear) achieved 80.4% AUC (area under the receiver operating characteristic curve) from 24 h HRV analysis, 84.3% AUC from the first 30 min, 82.2% AUC from the daytime 30 min, and 81.7% AUC from the nighttime 30 min. The optimal parameter set obtained from the first 30 min showed nearly the same classification power as the optimal 24 h parameter set. Results from the stationary daytime and nighttime 30 min segments indicate that short-term analyses of 30 min may provide at least comparable risk stratification power in IHF relative to a 24 h analysis period.
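    The AUC figures in this record measure how well a score separates high-risk from low-risk patients. A minimal rank-based (Mann-Whitney) computation looks like this; the function name is illustrative.

    ```python
    import numpy as np

    def auc(scores, labels):
        """Area under the ROC curve via the Mann-Whitney statistic:
        the probability that a random positive (high-risk) case scores
        above a random negative one, with ties counted as one half.
        """
        s, y = np.asarray(scores, float), np.asarray(labels, bool)
        pos, neg = s[y], s[~y]
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))
    ```

    An AUC of 0.843, as reported for the first 30 min segment, means a randomly chosen high-risk patient outranks a randomly chosen low-risk patient about 84% of the time.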

  4. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  5. Topology optimization of reduced rare-earth permanent magnet arrays with finite coercivity

    NASA Astrophysics Data System (ADS)

    Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Rowe, A.

    2018-05-01

    The supply chain risk of rare-earth permanent magnets has motivated research efforts to improve both materials and magnetic circuits. While a number of magnet optimization techniques exist, the literature has not incorporated the permanent magnet failure process stemming from finite coercivity. To address this, a mixed-integer topology optimization is formulated to maximize the flux density of a segmented Halbach cylinder while avoiding permanent demagnetization. The numerical framework is used to assess the efficacy of low-cost (rare-earth-free ferrite C9), medium-cost (rare-earth-free MnBi), and higher-cost (Dy-free NdFeB) permanent magnet materials. Novel magnet designs are generated that produce flux densities 70% greater than the segmented Halbach array, albeit with increased magnet mass. Three optimization formulations are then explored using ferrite C9, demonstrating the trade-off between manufacturability and design sophistication and generating flux densities in the range of 0.366-0.483 T.

  6. An Introduction to System-Level, Steady-State and Transient Modeling and Optimization of High-Power-Density Thermoelectric Generator Devices Made of Segmented Thermoelectric Elements

    NASA Astrophysics Data System (ADS)

    Crane, D. T.

    2011-05-01

    High-power-density, segmented, thermoelectric (TE) elements have been intimately integrated into heat exchangers, eliminating many of the loss mechanisms of conventional TE assemblies, including the ceramic electrical isolation layer. Numerical models comprising simultaneously solved, nonlinear, energy balance equations have been created to simulate these novel architectures. Both steady-state and transient models have been created in a MATLAB/Simulink environment. The models predict data from experiments in various configurations and applications over a broad range of temperature, flow, and current conditions for power produced, efficiency, and a variety of other important outputs. Using the validated models, devices and systems are optimized using advanced multiparameter optimization techniques. Devices optimized for particular steady-state operating conditions can then be dynamically simulated in a transient operating model. The transient model can simulate a variety of operating conditions including automotive and truck drive cycles.

  7. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization, and the mean-variance model has been used in portfolio optimization to minimize investment risk. The objective of the mean-variance model is to minimize portfolio risk while achieving a target rate of return, with variance used as the risk measure. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model against an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio differ. Moreover, the mean-variance optimal portfolio performs better, giving a higher performance ratio than the equally weighted portfolio.
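    For a concrete contrast between the two portfolios, the global minimum-variance weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The two-asset covariance matrix below is an illustrative example, not data from the study.

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Global minimum-variance portfolio: w = inv(Σ) 1 / (1ᵀ inv(Σ) 1)."""
        w = np.linalg.inv(cov) @ np.ones(len(cov))
        return w / w.sum()

    # Illustrative two-asset covariance matrix (variances 0.04 and 0.09).
    cov = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
    w_opt = min_variance_weights(cov)
    w_eq = np.array([0.5, 0.5])
    var = lambda w: w @ cov @ w  # portfolio variance
    ```

    The optimizer overweights the low-variance asset, so its portfolio variance is below that of the equal-weight mix, which is the qualitative result the study reports.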

  8. Motion generation of peristaltic mobile robot with particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Homma, Takahiro; Kamamichi, Norihiro

    2015-03-01

    In the development of robots, bio-mimetics, the design of structures and functions inspired by biological systems, is attracting attention. There are many examples of bio-mimetics in robotics, such as legged robots, flapping robots, insect-type robots, and fish-type robots. In this study, we focus on the motion of the earthworm and aim to develop a peristaltic mobile robot. The earthworm is a slender animal that moves through soil. It has a segmented body, and each segment can be shortened and lengthened by muscular action; the worm moves forward by propagating the expansion of its segments backward along the body. By mimicking the structure and motion of the earthworm, we can construct a robot with high locomotive performance on irregular ground or in narrow spaces. In this paper, to investigate the motion analytically, a dynamical model consisting of a series-connected multi-mass system is introduced. Simple periodic patterns mimicking the motions of earthworms are applied in an open-loop fashion, and the resulting movement is verified through numerical simulations. Furthermore, to generate efficient motion of the robot, a particle swarm optimization algorithm, one of the meta-heuristic optimization methods, is applied. The optimized results are compared against the simple periodic patterns.
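
    The particle swarm optimization step can be sketched generically. The snippet below is a minimal global-best PSO applied to a hypothetical stand-in objective (per-segment phase offsets of a peristaltic wave spaced a quarter period apart); the robot's multi-mass dynamics and the authors' actual fitness function are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0)):
    """Plain global-best PSO (a minimal sketch, not the authors' exact variant)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social terms
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Hypothetical efficiency proxy: neighbouring segments should contract
# a quarter period (90 degrees) apart to form a travelling wave.
def wave_error(phases):
    gaps = np.diff(phases)
    return float(np.sum((gaps - np.pi / 2) ** 2))

best, best_f = pso(wave_error, dim=5, bounds=(0.0, 2 * np.pi))
print("phase offsets:", np.round(best, 2), "error:", round(best_f, 6))
```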

  9. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716

  10. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several approaches exist based on colour, but these lack both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with multiple segments covering a single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem, but this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because efficient motion estimators such as the 3DRS block matcher lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near the edges of homogeneously coloured areas.
    On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneity improves the chance that a block is unique and thus decreases the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods adopt this approach. One example is [meshrefine]. That method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, it produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.
    NEW METHOD: As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method. There are several reasons for this approach:
    + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. The colour segmentation therefore only has to be done at the edges of segments, confining it to a smaller part of the image, within which the colour of an object is more likely to be homogeneous.
    + This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
    + The motion cue alone is often enough to reliably distinguish objects from one another and the background.
    To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion.
    BLOCK-BASED MOTION SEGMENTATION: We start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape error, which introduces the additional difficulty of finding the correct weights for the shape parameters; these methods also often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation to such a degree that least squares can be used for the robust fitting of an affine motion model for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make the segmentation temporally consistent, the segmentation of the previous frame is used as the initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments.
    COLOUR-BASED INTRA-BLOCK SEGMENTATION: The block-resolution motion-based segmentation forms the starting point for the pixel-resolution segmentation, which is obtained by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to perform the pixel-resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas, and because no new segments are introduced in this step, it does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel-resolution segmentation itself, we reclassify pixels so as to optimize an error norm which favours similarly coloured regions and straight edges.
    SEGMENTATION MEASURE: To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we defined a ground-truth output which we find desirable for a given input, and we measure segmentation quality as the difference between the segmentation and this ground truth. The measure enables us to evaluate oversegmentation and undersegmentation separately, and to identify which parts of a frame suffer from either. The proposed algorithm has been tested on several typical sequences.
    CONCLUSIONS: We presented a new video segmentation method which performs well in segmenting multiple independently moving foreground objects from each other and the background. It combines the strong points of colour and motion segmentation in the way we expected. One weak point is that the method suffers from undersegmentation when adjacent objects display similar motion; in sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
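
    The K-regions idea, K-means-like clustering of block motion vectors in which only blocks on a segment edge may change label, can be sketched as below. For brevity each segment is modelled by its mean motion vector rather than the affine model fitted in the paper, and a synthetic two-object motion field stands in for 3DRS output.

```python
import numpy as np

def k_regions(vectors, labels, iters=10):
    """Boundary-constrained K-means-style segmentation of a block motion field.

    vectors: (H, W, 2) block motion vectors; labels: (H, W) initial segment ids.
    A block may only switch to a label held by one of its 4-neighbours, which
    keeps segments connected (interior blocks have no alternative candidates).
    """
    H, W, _ = vectors.shape
    for _ in range(iters):
        means = {k: vectors[labels == k].mean(axis=0) for k in np.unique(labels)}
        new = labels.copy()
        for i in range(H):
            for j in range(W):
                cands = {int(labels[i, j])}
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cands.add(int(labels[ni, nj]))
                # assign to the candidate segment whose motion model fits best
                new[i, j] = min(
                    cands,
                    key=lambda k: float(np.sum((vectors[i, j] - means[k]) ** 2)))
        labels = new
    return labels

# Synthetic field: left half pans right, right half pans down; the initial
# segment boundary is deliberately one column off.
vec = np.zeros((8, 8, 2))
vec[:, :4] = [1.0, 0.0]
vec[:, 4:] = [0.0, 1.0]
init = np.zeros((8, 8), dtype=int)
init[:, 5:] = 1
out = k_regions(vec, init)
print(out)
```

    The misplaced boundary column is recovered because its blocks sit on a segment edge and fit the neighbouring segment's motion model better.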

  11. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas taken from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and the DC is computed between the registered contour and the ground truth. Six strategies, including MI, were evaluated as measures of image similarity, with MI proving the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among the four proposed strategies. Conclusion: MI has the highest correlation with DC and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation.
    Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
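
    The post-registration atlas selection step, ranking deformed atlases by mutual information with the target and fusing the three best, can be sketched as follows. The images are synthetic stand-ins, a standard joint-histogram MI estimator replaces whatever implementation the authors used, and the MI-proportional fusion weights are an assumption.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (a common estimator)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)   # marginal distributions
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(1)
target = rng.normal(size=(64, 64))
# Hypothetical deformed atlases: noisier copies are "worse" registrations.
atlases = [target + rng.normal(scale=s, size=target.shape)
           for s in (0.3, 3.0, 0.5, 5.0)]

mi = np.array([mutual_information(target, a) for a in atlases])
top3 = np.argsort(mi)[::-1][:3]
weights = mi[top3] / mi[top3].sum()         # MI-weighted fusion coefficients
print("MI:", np.round(mi, 3), "fuse atlases", top3.tolist(),
      "with weights", np.round(weights, 2))
```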

  12. Comparison of status variables among accident and non-accident airmen from the active airman population.

    DOT National Transportation Integrated Search

    1970-12-01

    The distributions of age, weight, height, body weight/body surface area and ponderal index for the accident versus non-accident segments of the active airman population were compared for years 1966-1967. : The differences in the distributions of thes...

  13. Distribution path robust optimization of electric vehicle with multiple distribution centers

    PubMed Central

    Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi

    2018-01-01

    To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, it is necessary to optimize the distribution path problem of EVs with multiple distribution centers while considering charging facilities. With minimum transport time as the goal, a robust optimization model of the EV distribution path with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme initially contains all road-by-road path data using the three-segment mixed coding and decoding method. During genetic manipulation, different crossover and mutation operations are carried out on different chromosomes, and infeasible solutions are naturally avoided during population evolution. Part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm, and concrete transportation paths are obtained in the final distribution scheme. The robust optimization model thus yields more robust EV distribution paths with multiple distribution centers. PMID:29518169

  14. Modern Optimization Methods in Minimum Weight Design of Elastic Annular Rotating Disk with Variable Thickness

    NASA Astrophysics Data System (ADS)

    Jafari, S.; Hojjati, M. H.

    2011-12-01

    Rotating disks mostly operate at high angular velocity, which produces large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower cost. This paper aims at finding an optimal disk thickness profile for minimum-weight design using simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. In the semi-analytical approach used, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress always remains below the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weights obtained by the two methods are almost identical: the PSO method gives a profile with slightly less weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility compared with classical methods.
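
    The simulated annealing branch can be sketched on a heavily simplified stand-in problem: ring thicknesses are the design variables, weight is the objective, and a crude load/thickness stress proxy replaces the paper's von Mises analysis, enforced through a penalty term. All numbers below are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Illustrative setting (not the paper's elasticity model): the annulus is
# split into N rings; ring i has thickness t[i] and a crude stress proxy
# load[i] / t[i] that must stay below the yield strength.
R_INNER, R_OUTER, N = 0.05, 0.30, 10
YIELD = 500.0
DR = (R_OUTER - R_INNER) / N
radii = [R_INNER + DR * (i + 0.5) for i in range(N)]
load = [50.0 * r / R_OUTER for r in radii]      # hypothetical radial load profile

def weight(t):
    # weight proportional to thickness x annular ring area (density in the units)
    return sum(ti * 2 * math.pi * r * DR for ti, r in zip(t, radii))

def penalized(t):
    violation = sum(max(0.0, load[i] / t[i] - YIELD) for i in range(N))
    return weight(t) + 1e4 * violation          # penalty enforces the constraint

def anneal(iters=20000, t0=1.0):
    t = [0.01] * N
    cur = penalized(t)
    best, best_f = t[:], cur
    for k in range(iters):
        temp = t0 * (1 - k / iters) + 1e-9      # linear cooling schedule
        cand = t[:]
        i = random.randrange(N)
        cand[i] = max(1e-4, cand[i] + random.gauss(0.0, 0.01))
        f = penalized(cand)
        if f < cur or random.random() < math.exp((cur - f) / temp):
            t, cur = cand, f
            if f < best_f:
                best, best_f = cand[:], f
    return best

t_opt = anneal()
print(f"optimised weight: {weight(t_opt):.4f}")
print(f"max stress proxy: {max(load[i] / t_opt[i] for i in range(N)):.1f} (yield {YIELD})")
```

    The optimizer thins each ring down toward the stress limit, which is the qualitative behaviour a minimum-weight design with a yield constraint should show.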

  15. Three-dimensional data-tracking dynamic optimization simulations of human locomotion generated by direct collocation.

    PubMed

    Lin, Yi-Chung; Pandy, Marcus G

    2017-07-05

    The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7 ± 1.0 h and 2.2 ± 1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ≈ c_0 2^(-c_1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
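
    The prune half of the R-D prune-and-join strategy can be sketched with a Lagrangian cost D + λR on a binary segmentation tree. The sketch below approximates each segment by a constant rather than a polynomial, uses assumed per-leaf and per-split rates, and omits the joining of similar neighbours.

```python
import numpy as np

def rd_prune(signal, lam, lo=0, hi=None):
    """Bottom-up Lagrangian prune of a binary segmentation tree.

    Each node approximates its segment by a constant (the paper uses
    polynomials); cost = distortion + lam * rate, with an assumed fixed
    rate per stored coefficient. Returns (cost, list_of_segments).
    """
    if hi is None:
        hi = len(signal)
    seg = signal[lo:hi]
    leaf_rate = 8.0                            # assumed bits per coefficient
    dist = float(np.sum((seg - seg.mean()) ** 2))
    leaf_cost = dist + lam * leaf_rate
    if hi - lo <= 1:
        return leaf_cost, [(lo, hi)]
    mid = (lo + hi) // 2
    cl, sl = rd_prune(signal, lam, lo, mid)
    cr, sr = rd_prune(signal, lam, mid, hi)
    split_cost = cl + cr + lam * 1.0           # one bit to signal the split
    if leaf_cost <= split_cost:                # prune: keep the parent node
        return leaf_cost, [(lo, hi)]
    return split_cost, sl + sr

# Piecewise-constant toy signal with a breakpoint at sample 32.
x = np.concatenate([np.full(32, 1.0), np.full(32, 5.0)])
cost, segments = rd_prune(x, lam=0.1)
print("segments:", segments)                   # the tree collapses to 2 leaves
```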

  17. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy thresholding method for medical images, especially cervical cytology images containing blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures and compared with various existing algorithms, over which it proves superior.
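
    Fuzzy threshold selection of this general kind can be sketched as below. The paper's Fuzzy Gaussian Index is not reproduced; a common minimum-fuzziness criterion (Huang-Wang-style membership with fuzzy entropy) stands in for it on a synthetic bimodal image.

```python
import numpy as np

def fuzzy_threshold(img):
    """Pick the threshold minimising a measure of fuzziness.

    A stand-in criterion for the paper's Fuzzy Gaussian Index: membership
    of each pixel to its class mean, scored by fuzzy (Shannon) entropy.
    """
    x = img.ravel().astype(float)
    C = x.max() - x.min()
    best_t, best_e = None, np.inf
    for t in np.unique(x)[:-1]:
        m0, m1 = x[x <= t].mean(), x[x > t].mean()
        m = np.where(x <= t, m0, m1)
        u = 1.0 / (1.0 + np.abs(x - m) / C)    # membership in (0.5, 1]
        u = np.clip(u, 1e-9, 1 - 1e-9)
        e = -np.mean(u * np.log(u) + (1 - u) * np.log(1 - u))
        if e < best_e:
            best_t, best_e = t, e
    return best_t

# Synthetic bimodal "image": dark blobs around 60, bright mosaic around 180.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 5, 500), rng.normal(180, 5, 500)])
t = fuzzy_threshold(img)
print("threshold:", round(float(t), 1))
```

    A threshold between the two modes makes both classes tightly clustered around their means, so the memberships approach 1 and the fuzziness is minimal.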

  18. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  19. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  20. Do 3D Printing Models Improve Anatomical Teaching About Hepatic Segments to Medical Students? A Randomized Controlled Study.

    PubMed

    Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Huang, Wenhua; Li, Jianyi

    2016-08-01

    It is a difficult and frustrating task for young surgeons and medical students to understand the anatomy of hepatic segments. We tried to develop an optimal 3D-printed model of hepatic segments as a teaching aid to improve the teaching of hepatic segments. A fresh human cadaveric liver without hepatic disease was CT scanned. After 3D reconstruction, three types of 3D computer models of hepatic structures were designed and 3D printed as models of hepatic segments without parenchyma (type 1), with transparent parenchyma (type 2), and as hepatic ducts with segmental partitions (type 3). These models were evaluated by six experts using a five-point Likert scale. Ninety-two medical freshmen were randomized into four groups to learn hepatic segments with the aid of the three types of models and a traditional anatomic atlas (TAA). Their results on two quizzes were compared to evaluate the teaching effects of the four methods. The three types of models were successfully produced and displayed the structures of hepatic segments. In the experts' evaluation, the type 3 model was better than the type 1 and 2 models in anatomical condition, the type 2 and 3 models were better than the type 1 model in tactility, and the type 3 model was better than the type 1 model in overall satisfaction (P < 0.05). The first quiz revealed that the type 1 model was better than the type 2 model and TAA, while the type 3 model was better than type 2 and TAA in teaching effects (P < 0.05). The second quiz found that the type 1 model was better than TAA, while the type 3 model was better than the type 2 model and TAA regarding teaching effects (P < 0.05). Only the TAA group showed a significant decline between the two quizzes (P < 0.05). The model with segmental partitions proves to be optimal, because it best improves anatomical teaching about hepatic segments.

  1. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
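
    The optimal weights have a closed form: minimizing the error variance w'Cw of the weighted average subject to the weights summing to one gives w proportional to C⁻¹1, where C is the forecast-error covariance across lead times. The sketch below reproduces both regimes discussed above with assumed covariance matrices.

```python
import numpy as np

def optimal_lag_weights(err_cov):
    """Sum-to-one weights minimising the MSE of a weighted-average
    lagged ensemble: w proportional to C^{-1} 1."""
    ones = np.ones(err_cov.shape[0])
    w = np.linalg.solve(err_cov, ones)
    return w / w.sum()

# Uncorrelated errors growing with lead time: positive, decaying weights.
C_uncorr = np.diag([1.0, 2.0, 4.0])
w_u = optimal_lag_weights(C_uncorr)
print(np.round(w_u, 3))            # ~[0.571 0.286 0.143]

# Rapid error growth plus strong correlation across lead time: the
# second forecast receives a negative weight.
C_corr = np.array([[1.0, 1.9, 0.0],
                   [1.9, 4.0, 0.0],
                   [0.0, 0.0, 9.0]])
w_c = optimal_lag_weights(C_corr)
print(np.round(w_c, 3))
```

    In the correlated case the short-lead forecast is used to cancel the shared error component of the longer-lead forecast, which is exactly the mechanism behind negative weights described in the abstract.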

  2. Automated registration of multispectral MR vessel wall images of the carotid artery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klooster, R. van 't; Staring, M.; Reiber, J. H. C.

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. 
    Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and moving image after registration. Results: The average required manual translation per image slice was 1.33 mm. Translations were larger the longer the patient had been inside the scanner. Manual alignment took 187.5 s per patient, resulting in a mean surface distance of 0.271 ± 0.127 mm. After minimal user interaction to generate the mask in the fixed image, the remaining sequences are automatically registered with a computation time of 52.0 s per patient. The optimal registration strategy used a circular mask with a diameter of 10 mm, a 3D B-spline transformation model with a control point spacing of 15 mm, mutual information as image similarity metric, and the precontrast T1W TSE as fixed image. A mean surface distance of 0.288 ± 0.128 mm was obtained with these settings, which is very close to the accuracy of the manual alignment procedure. The exact registration parameters and software were made publicly available. Conclusions: An automated registration method was developed and optimized, needing only two mouse clicks to mark the start and end point of the artery. Validation on a large group of patients showed that automated image registration has similar accuracy as the manual alignment procedure, substantially reduces the amount of user interaction needed, and is multiple times faster. In conclusion, the authors believe that the proposed automated method can replace the current manual procedure, thereby reducing the time to analyze the images.

  3. Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.

    PubMed

    Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen

    2016-07-01

    This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish an optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared; the combined model reduces the prediction risk of any single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was then mapped precisely using kriging methods. Once the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated with the optimal weighted combinatorial model, and the reference value anywhere in China can be read from the geographical distribution map.

  4. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have had a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important and are currently often achieved with ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and designed weights optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
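    A minimal sketch of the underlying TF-IDF scoring step, using the classic formula on a toy corpus of tokenised documents (the paper's optimized weighting on top of TF-IDF is not specified in the abstract):

```python
import math

def tf_idf(term, doc, corpus):
    """Score a candidate knowledge-point term in one tokenised document
    against a corpus of tokenised documents (lists of words)."""
    tf = doc.count(term) / len(doc)              # term frequency in this document
    df = sum(1 for d in corpus if term in d)     # number of documents containing the term
    idf = math.log(len(corpus) / df)             # inverse document frequency
    return tf * idf

# toy corpus of tokenised course documents (illustrative only)
corpus = [["pointer", "loop"],
          ["loop", "array"],
          ["pointer", "pointer", "struct"]]
```

    Terms that are frequent in one document but rare across the corpus score highest, which is what makes TF-IDF a reasonable base ranking for knowledge-point candidates.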

  5. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., graph cut, random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian mixture model was employed for image modelling, and iteration of the expectation-maximization algorithm learns the parameters of the Gaussian mixture model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform using the sparse representation algorithm, showing that segmentation of hand gesture images helps to improve recognition accuracy. PMID:28134818
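    The expectation-maximization iteration used to learn Gaussian mixture model parameters can be sketched in one dimension (an illustrative toy for two components on synthetic data, not the paper's colour-pixel implementation):

```python
import math, random

def em_gmm_1d(xs, iters=60):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns the means, variances and mixing weights."""
    mu = [min(xs), max(xs)]          # crude initialisation at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate mixing weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return mu, var, pi
```

    In the segmentation setting, one mixture models foreground (hand) pixels and another the background, and the fitted densities supply the data terms of the Gibbs energy that the min-cut step minimizes.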

  6. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year design as parent.

  7. Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.

    PubMed

    Scrimgeour, Jan; Curtis, Jennifer E

    2012-06-18

    We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.

  8. Probabilistic atlas-based segmentation of combined T1-weighted and DUTE MRI for calculation of head attenuation maps in integrated PET/MRI scanners.

    PubMed

    Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian

    2014-01-01

    We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest, and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, segmented CT μ-maps were considered the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for the DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
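    The Dice similarity coefficient used here to score segmentation agreement is simply twice the overlap divided by the total size of the two masks; a minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    # convention: two empty masks agree perfectly
    return 2.0 * inter / size if size else 1.0
```

    DSC ranges from 0 (no overlap) to 1 (identical masks), which is why higher mean DSCs indicate μ-maps closer to the CT silver standard.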

  9. Random walks with shape prior for cochlea segmentation in ex vivo μCT.

    PubMed

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel

    2016-09-01

    Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks segmentation is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach on ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term in random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.

  10. Can we predict body height from segmental bone length measurements? A study of 3,647 children.

    PubMed

    Cheng, J C; Leung, S S; Chiu, B S; Tse, P W; Lee, C W; Chan, A K; Xia, G; Leung, A K; Xu, Y Y

    1998-01-01

    It is well known that significant differences exist in the anthropometric data of different races and ethnic groups. This is a cross-sectional study of segmental bone length based on 3,647 Chinese children of equal sex distribution aged 3-18 years. The measurements included standing height, weight, arm span, foot length, and segmental bone lengths of the humerus, radius, ulna, and tibia. A normal growth chart of all the measured parameters was constructed. Statistical analysis of the results showed a very high linear correlation of height with arm span, foot length, and segmental bone lengths, with correlation coefficients of 0.96-0.99 for both sexes. No differences were found between the right and left sides for any of the segmental bone lengths. These Chinese children were found to have proportional limb segmental lengths relative to the trunk.
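    Correlations this high suggest that a simple least-squares line predicts height from a segmental length quite well. The sketch below fits such a line; the tibia lengths and heights are hypothetical illustrative numbers, not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# hypothetical example: predicting standing height (cm) from tibia length (cm)
tibia = [24.0, 26.5, 29.0, 31.5, 34.0]
height = [118.0, 128.0, 138.0, 148.0, 158.0]
a, b = fit_line(tibia, height)
predicted = a + b * 30.0   # predicted height for a 30 cm tibia
```

    In practice such regression equations, fitted per population, are what allow body height to be estimated from a single segmental bone measurement.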

  11. Prototype Development of the GMT Fast Steering Mirror

    NASA Astrophysics Data System (ADS)

    Kim, Young-Soo; Koh, J.; Jung, H.; Jung, H.; Cho, M. K.; Park, W.; Yang, H.; Kim, H.; Lee, K.; Ahn, H.; Park, B.

    2013-06-01

    A Fast Steering Mirror (FSM) is going to be produced as the secondary mirror of the Giant Magellan Telescope (GMT). The FSM is 3.2 m in diameter, and its focal ratio is 0.65. It is composed of seven circular segments which match the primary mirror segments. Each segment contains a light-weighted mirror 1.1 m in diameter, together with tip-tilt actuators that compensate for wind effects and structural jitter. An FSM prototype (FSMP) has been developed, which consists of a full-size off-axis mirror segment and a tip-tilt test-bed. The main purpose of the FSMP development is to establish key technologies, such as fabrication of a highly aspheric off-axis mirror and tip-tilt actuation. The development has been conducted by a consortium of five institutions in Korea and the USA, led by the Korea Astronomy and Space Science Institute. The mirror was light-weighted, and grinding of the front surface has been finished. Polishing is in progress with computer-generated hologram tests. The tip-tilt test-bed has been manufactured and assembled; frequency tests are being performed, and an optical tilt set-up has been arranged for visual demonstration. In this paper, we present the progress of the prototype development and future work.

  12. Bokeh mirror alignment for Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Ahnen, M. L.; Baack, D.; Balbo, M.; Bergmann, M.; Biland, A.; Blank, M.; Bretz, T.; Bruegge, K. A.; Buss, J.; Domke, M.; Dorner, D.; Einecke, S.; Hempfling, C.; Hildebrand, D.; Hughes, G.; Lustermann, W.; Mannheim, K.; Mueller, S. A.; Neise, D.; Neronov, A.; Noethe, M.; Overkemping, A.-K.; Paravac, A.; Pauss, F.; Rhode, W.; Shukla, A.; Temme, F.; Thaele, J.; Toscano, S.; Vogler, P.; Walter, R.; Wilbert, A.

    2016-09-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) need imaging optics with large apertures and high image intensities to map the faint Cherenkov light emitted from cosmic ray air showers onto their image sensors. Segmented reflectors fulfill these needs and, composed of mass-produced mirror facets, are inexpensive and lightweight. However, as the overall image is a superposition of the individual facet images, alignment remains a challenge. Here we present a simple yet extendable method to align a segmented reflector using its Bokeh. Bokeh alignment does not need a star or a clear night and can even be done during daytime. Bokeh alignment optimizes the facet orientations by comparing the segmented reflector's Bokeh to a predefined template. The optimal Bokeh template is highly constrained by the reflector's aperture and is easily accessible. The Bokeh is observed using the out-of-focus image of a nearby point-like light source at a distance of about 10 focal lengths. We introduce Bokeh alignment on segmented reflectors and demonstrate it on the First Geiger-mode Avalanche Cherenkov Telescope (FACT) on La Palma, Spain.

  13. 3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.

    2014-12-01

    Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians in finding the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs), as a prior step. This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected into a statistical shape model space at first and then into a B-spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice similarity index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, the best results relative to image resolution in the current literature.

  14. Modeling the relaxation of internal DNA segments during genome mapping in nanochannels.

    PubMed

    Jain, Aashish; Sheats, Julian; Reifenberger, Jeffrey G; Cao, Han; Dorfman, Kevin D

    2016-09-01

    We have developed a multi-scale model describing the dynamics of internal segments of DNA in nanochannels used for genome mapping. In addition to the channel geometry, the model takes as its inputs the DNA properties in free solution (persistence length, effective width, molecular weight, and segmental hydrodynamic radius) and buffer properties (temperature and viscosity). Using pruned-enriched Rosenbluth simulations of a discrete wormlike chain model with circa 10 base pair resolution and a numerical solution for the hydrodynamic interactions in confinement, we convert these experimentally available inputs into the necessary parameters for a one-dimensional, Rouse-like model of the confined chain. The resulting coarse-grained model resolves the DNA at a length scale of approximately 6 kilobase pairs in the absence of any global hairpin folds, and is readily studied using a normal-mode analysis or Brownian dynamics simulations. The Rouse-like model successfully reproduces both the trends and order of magnitude of the relaxation time of the distance between labeled segments of DNA obtained in experiments. The model also provides insights that are not readily accessible from experiments, such as the role of the molecular weight of the DNA and location of the labeled segments that impact the statistical models used to construct genome maps from data acquired in nanochannels. The multi-scale approach used here, while focused towards a technologically relevant scenario, is readily adapted to other channel sizes and polymers.

  15. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  16. Localized Statistics for DW-MRI Fiber Bundle Segmentation

    PubMed Central

    Lankton, Shawn; Melonakos, John; Malcolm, James; Dambreville, Samuel; Tannenbaum, Allen

    2013-01-01

    We describe a method for segmenting neural fiber bundles in diffusion-weighted magnetic resonance images (DWMRI). As these bundles traverse the brain to connect regions, their local orientation of diffusion changes drastically, hence a constant global model is inaccurate. We propose a method to compute localized statistics on orientation information and use it to drive a variational active contour segmentation that accurately models the non-homogeneous orientation information present along the bundle. Initialized from a single fiber path, the proposed method proceeds to capture the entire bundle. We demonstrate results using the technique to segment the cingulum bundle and describe several extensions making the technique applicable to a wide range of tissues. PMID:23652079

  17. Epidermal segmentation in high-definition optical coherence tomography.

    PubMed

    Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang

    2015-01-01

    Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method using HD-OCT is proposed in which the epidermis is segmented in three steps: weighted least squares-based pre-processing, graph-based skin surface detection, and local integral projection-based dermal-epidermal junction detection. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for the study and clinical management of skin diseases.

  18. On the role of the optimization algorithm of RapidArc(®) volumetric modulated arc therapy on plan quality and efficiency.

    PubMed

    Vanetti, Eugenio; Nicolini, Giorgia; Nord, Janne; Peltola, Jarkko; Clivio, Alessandro; Fogliata, Antonella; Cozzi, Luca

    2011-11-01

    The RapidArc volumetric modulated arc therapy (VMAT) planning process is based on a core engine, the so-called progressive resolution optimizer (PRO). This is the optimization algorithm used to determine the combination of field shapes and segment weights (with dose rate and gantry speed variations) which best approximates the desired dose distribution in the inverse planning problem. A study was performed to assess the behavior of two versions of PRO. These two versions differ mostly in the way continuous variables describing the modulated arc are sampled into discrete control points, in planning efficiency, and in the presence of some new features. The analysis aimed to assess (i) plan quality, (ii) technical delivery aspects, (iii) agreement between delivery and calculation, and (iv) planning efficiency of the two versions. RapidArc plans were generated for four groups of patients (five patients each): anal canal, advanced lung, head and neck, and multiple brain metastases, designed to test different levels of planning complexity and anatomical features. Plans from optimization with PRO2 (the first generation of the RapidArc optimizer) were compared against PRO3 (the second generation of the algorithm). Additional plans were optimized with PRO3 using new features: the jaw tracking, intermediate dose, and air cavity correction options. Results showed that (i) plan quality was generally improved with PRO3 and, although not for all parameters, some of the scored indices showed a macroscopic improvement with PRO3; (ii) PRO3 optimization leads to simpler patterns of the dynamic parameters, particularly for dose rate; (iii) no differences were observed between the two algorithms in terms of pretreatment quality assurance measurements; and (iv) PRO3 optimization was generally faster, with a time reduction by a factor of approximately 3.5 with respect to PRO2. These results indicate that PRO3 is either clinically beneficial or neutral in terms of dosimetric quality, while showing significant advantages in speed and technical aspects.

  19. Pregnancy Is a Risk Factor for Secondary Focal Segmental Glomerulosclerosis in Women with a History of Very Low Birth Weight

    PubMed Central

    Tanaka, Mari; Iwanari, Sachio; Tsujimoto, Yasushi; Taniguchi, Keisuke; Hagihara, Koichiro; Fumihara, Daiki; Miki, Syo; Shimoda, Saeko; Ikeda, Masaki; Takeoka, Hiroya

    2017-01-01

    Low birth weight (LBW) has been known to increase the susceptibility to renal injury in adulthood. A 26-year-old woman developed proteinuria in early pregnancy; she had been born with very LBW. The clinical course was progressive, and an emergency Caesarean section was performed at 36 weeks due to acute kidney injury. A renal biopsy provided a diagnosis of post-adaptive focal segmental glomerulosclerosis. Increased demand for glomerular filtration during early pregnancy appeared to have initiated the renal injury. This report highlights the fact that pregnancy might be a risk factor for renal injury in women born with LBW. PMID:28626180

  1. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool.

    PubMed

    Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R

    2015-11-21

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data-driven template, resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice = 0.929 ± 0.003 on ADNI and 0.869 ± 0.002 on OASIS). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.

  2. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is the special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included, comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.

  3. Adhesion promotion at a homopolymer-solid interface using random heteropolymers

    NASA Astrophysics Data System (ADS)

    Simmons, Edward Read; Chakraborty, Arup K.

    1998-11-01

    We investigate the potential uses for random heteropolymers (RHPs) as adhesion promoters between a homopolymer melt and a solid surface. We consider homopolymers of monomer (segment) type A which are naturally repelled from a solid surface. To this system we add RHPs with both A and B (attractive to the surface) type monomers to promote adhesion between the two incompatible substrates. We employ Monte Carlo simulations to investigate the effects of variations in the sequence statistics of the RHPs, amount of promoter added, and strength of the segment-segment and segment-surface interaction parameters. Clearly, the parameter space in such a system is quite large, but we are able to describe, in a qualitative manner, the optimal parameters for adhesion promotion. The optimal set of parameters yield interfacial conformational statistics for the RHPs which have a relatively high adsorbed fraction and also long loops extending away from the surface that promote entanglements with the bulk homopolymer melt. In addition, we present qualitative evidence that the concentration of RHP segments per surface site plays an important role in determining the mechanism of failure (cohesive versus adhesive) at such an interface. Our results also provide the necessary input for future simulations in which the system may be strained to the limit of fracture.

  4. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    PubMed

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is the major cause of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after initialization by the pillar algorithm, which positions the initial centroids the way pillars are placed to best withstand pressure. The improved pillar algorithm optimizes K-means clustering for image segmentation in terms of precision and computation time. The proposed approach is evaluated by comparing it with K-means and fuzzy C-means segmentation on a medical image. Using this method, identification of dark spots in the retina becomes easier; the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors to identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
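    The K-means step itself is standard Lloyd iteration; a minimal one-dimensional sketch on pixel intensities, with the pillar initialization replaced here by hand-picked seed centers for illustration:

```python
def kmeans_1d(values, centers, iters=20):
    """Plain Lloyd's algorithm on scalar intensities.
    `centers` is the initial guess (the pillar algorithm's role in the paper)."""
    for _ in range(iters):
        # assignment step: each value joins its nearest center
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            clusters[i].append(v)
        # update step: each center moves to its cluster mean
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# dark-spot vs. bright-exudate intensities (hypothetical 8-bit values)
centers = kmeans_1d([10, 12, 11, 200, 205, 198], [0.0, 255.0])
```

    The quality and speed of the final clustering depend strongly on the initial centers, which is exactly the part the pillar algorithm is designed to improve.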

  5. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool

    NASA Astrophysics Data System (ADS)

    Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative

    2015-11-01

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
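    The Dice figures quoted above measure overlap between an automatic segmentation and a reference label set: twice the intersection divided by the total size of the two sets. A minimal sketch:

    ```python
    def dice(a, b):
        # Dice overlap between two segmentations given as collections of
        # voxel indices: 2|A ∩ B| / (|A| + |B|); 1.0 is perfect agreement.
        a, b = set(a), set(b)
        return 2 * len(a & b) / (len(a) + len(b))
    ```

    For example, two segmentations that share half of their voxels score 0.5.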

  6. Optimal Multiple Surface Segmentation With Shape and Context Priors

    PubMed Central

    Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong

    2014-01-01

    Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that was not utilizing shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309
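    The max-flow computation that yields the globally optimal surfaces can be illustrated with a toy Edmonds-Karp solver. This sketches only the underlying min-cut primitive on a small dense graph; it does not reproduce the paper's arc-weighted graph construction.

    ```python
    from collections import deque

    def max_flow(capacity, s, t):
        # Edmonds-Karp maximum flow on a dense capacity matrix. The dual
        # minimum cut is what graph-search segmentation solves to obtain a
        # globally optimal surface labeling.
        n = len(capacity)
        flow = [[0] * n for _ in range(n)]
        total = 0
        while True:
            # BFS for a shortest augmenting path in the residual graph.
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q and parent[t] == -1:
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:
                return total
            # Find the bottleneck along the path, then augment.
            path, v = [], t
            while v != s:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
            for u, v in path:
                flow[u][v] += bottleneck
                flow[v][u] -= bottleneck
            total += bottleneck
    ```

    In the actual method, terminal capacities encode unary surface costs and inter-column arcs encode the shape- and context-prior penalties; the cut then separates "above surface" from "below surface" nodes.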

  7. [Tumor segmentation of brain MRI with adaptive bandwidth mean shift].

    PubMed

    Hou, Xiaowen; Liu, Qi

    2014-10-01

    In order to obtain an adaptive bandwidth for mean shift and thereby make tumor segmentation of brain magnetic resonance imaging (MRI) more accurate, we present an improved mean shift method in this paper. Firstly, we made use of the spatial characteristics of the brain image to eliminate the skull's impact on segmentation; then, based on the spatial agglomeration characteristics of the different brain tissues (including tumor), we applied edge points to obtain the optimal initial mean value and the corresponding adaptive bandwidth, in order to improve the accuracy of tumor segmentation. Experimental results showed that, compared with the fixed-bandwidth mean shift method, the proposed method segments the tumor more accurately.
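    The core mean shift iteration, moving a point to the mean of the samples inside its bandwidth until it settles on a density mode, can be sketched in one dimension. This is a generic flat-kernel sketch; in the paper the bandwidth is derived per tissue from edge points, whereas here it is a fixed input, and the bandwidth must cover at least one sample.

    ```python
    def mean_shift_mode(points, start, bandwidth, tol=1e-6):
        # Flat-kernel mean shift: repeatedly replace the current estimate by
        # the mean of all samples within `bandwidth`; converges to a local
        # density mode (e.g. the intensity mode of one tissue class).
        x = start
        while True:
            window = [p for p in points if abs(p - x) <= bandwidth]
            new_x = sum(window) / len(window)
            if abs(new_x - x) < tol:
                return new_x
            x = new_x
    ```

    Choosing the bandwidth per region (as the paper does via edge points) controls which samples fall in the window and hence which mode each start point converges to.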

  8. Use of C-Arm Cone Beam CT During Hepatic Radioembolization: Protocol Optimization for Extrahepatic Shunting and Parenchymal Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoven, Andor F. van den, E-mail: a.f.vandenhoven@umcutrecht.nl; Prince, Jip F.; Keizer, Bart de

    Purpose: To optimize a C-arm computed tomography (CT) protocol for radioembolization (RE), specifically for extrahepatic shunting and parenchymal enhancement. Materials and Methods: A prospective development study was performed per IDEAL recommendations. A literature-based protocol was applied in patients with unresectable and chemorefractory liver malignancies undergoing an angiography before radioembolization. Contrast and scan settings were adjusted stepwise and repeatedly reviewed in a consensus meeting. Afterwards, two independent raters analyzed all scans. A third rater evaluated the SPECT/CT scans as a reference standard for extrahepatic shunting and lack of target segment perfusion. Results: Fifty scans were obtained in 29 procedures. The first protocol, using a 6 s delay and 10 s scan, showed insufficient parenchymal enhancement. In the second protocol, the delay was determined by timing parenchymal enhancement on DSA power injection (median 8 s, range 4–10 s): enhancement improved, but breathing artifacts increased (from 0 to 27 %). Since the third protocol with a 5 s scan decreased subjective image quality, the second protocol was deemed optimal. Median CNR (range) was 1.7 (0.6–3.2), 2.2 (−1.4–4.0), and 2.1 (−0.3–3.0) for protocols 1, 2, and 3 (p = 0.80). Delineation of perfused segments was possible in 57, 73, and 44 % of scans (p = 0.13). In all C-arm CTs combined, the negative predictive value was 95 % for extrahepatic shunting and 83 % for lack of target segment perfusion. Conclusion: An optimized C-arm CT protocol was developed that can be used to detect extrahepatic shunts and non-perfusion of target segments during RE.

  9. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.

    PubMed

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-21

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ± 40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design.
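    The two figures of merit reported above, sensitivity (slope of the tilt-capacitance line) and non-linearity (largest deviation from that line as a fraction of full scale), can be recomputed from calibration data with a short least-squares sketch. It assumes Python 3.10+ for `statistics.linear_regression`; the data in the test are synthetic, not the paper's measurements.

    ```python
    from statistics import linear_regression

    def sensitivity_and_nonlinearity(angles_deg, caps_pf):
        # Fit a least-squares line through tilt-vs-capacitance calibration
        # data. The slope is the sensitivity in pF/deg; the non-linearity is
        # the largest residual expressed as a fraction of full-scale output.
        slope, intercept = linear_regression(angles_deg, caps_pf)
        residuals = [c - (slope * a + intercept)
                     for a, c in zip(angles_deg, caps_pf)]
        full_scale = max(caps_pf) - min(caps_pf)
        return slope, max(abs(r) for r in residuals) / full_scale
    ```

    Applied to real calibration sweeps over the ±40° full scale, this reproduces numbers in the form quoted above (pF/° and %FS).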

  10. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.

  11. Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron

    2013-01-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details. PMID:17427745

  12. Denoising and segmentation of retinal layers in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Dash, Puspita; Sigappi, A. N.

    2018-04-01

    Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Owing to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult; a method for retinal layer segmentation from OCT images is therefore presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting retinal layers in OCT images. These techniques are used for segmentation of retinal layers for normal patients as well as patients with Diabetic Macular Edema. An algorithm based on gradient information and shortest-path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied on a database of OCT images of ten normal and twenty DME-affected patients, and the results are found to be promising.

  13. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    PubMed

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence-microscopy-based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required that can adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation methods' parameters, which is time consuming and challenging for biologists with no knowledge of image processing. To avoid this, the parameters of the presented methods adapt automatically to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set routine that incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporating multimodal information improves segmentation quality for the presented fluorescent datasets.
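    The parameter-adaptation step, choosing the setup whose segmentation best matches the user-drawn ground truth, reduces to a search over candidate settings. A generic sketch, where `segment`, `ground_truth` and `score` are caller-supplied stand-ins (not the paper's API):

    ```python
    def tune_parameters(param_grid, segment, ground_truth, score):
        # Exhaustive search over candidate parameter settings: run the
        # segmentation with each setting and keep the one whose result best
        # matches the user-generated ground truth under `score`.
        return max(param_grid, key=lambda p: score(segment(p), ground_truth))
    ```

    With a Dice-style overlap as `score`, the winning setting is then reused on the remaining images, which is exactly the workflow the abstract describes.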

  14. Paroxysmal atrial fibrillation prediction method with shorter HRV sequences.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B; Sia, C W

    2016-10-01

    This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF), using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant subset, the Mann-Whitney U test is used to filter out features that are not statistically significant at the 20% significance level. The final stage of our predictor is a classifier based on a support vector machine (SVM). A 10-fold cross-validation is applied in performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracy of around 80%. More importantly, our method significantly outperforms those that applied segments shorter than 30 minutes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
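    The 20%-level Mann-Whitney filter can be sketched with the usual normal approximation to the U statistic. This is an illustration of the filtering step only (ties ignored, no continuity correction), not the authors' code; feature names are hypothetical.

    ```python
    import math

    def mann_whitney_p(x, y):
        # Two-sided Mann-Whitney U p-value via the normal approximation
        # (assumes no tied values).
        n1, n2 = len(x), len(y)
        rank = {v: i + 1 for i, v in enumerate(sorted(x + y))}
        u = sum(rank[v] for v in x) - n1 * (n1 + 1) / 2
        mu = n1 * n2 / 2
        sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        z = abs(u - mu) / sigma
        return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

    def filter_features(feature_table, labels, alpha=0.20):
        # Keep only features whose PAF (label 1) and non-PAF (label 0)
        # value distributions differ at the `alpha` significance level.
        kept = []
        for name, values in feature_table.items():
            x = [v for v, lab in zip(values, labels) if lab == 1]
            y = [v for v, lab in zip(values, labels) if lab == 0]
            if mann_whitney_p(x, y) < alpha:
                kept.append(name)
        return kept
    ```

    The surviving subset is what the GA then optimizes jointly with the SVM parameters.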

  15. Need for denser geodetic network to get real constrain on the fault behavior along the Main Marmara Sea segments of the NAF, toward an optimized GPS network.

    NASA Astrophysics Data System (ADS)

    Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.

    2016-12-01

    Over the last two decades, the densification of GPS networks and the development of new radar satellites have offered an unprecedented opportunity to study crustal deformation due to faulting. Yet submarine strike-slip fault segments remain a major issue, especially when the landscape is unfavorable to the use of SAR measurements. This is the case for the North Anatolian fault segments located in the Main Marmara Sea, which have remained unbroken since the Mw 7.4 Izmit earthquake of 1999, which ended an eastward-migrating seismic sequence of Mw > 7 earthquakes. Since these segments lie directly offshore Istanbul, evaluating their seismic hazard is critical. But a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping; this controversy results both from the simplicity of current geodetic models and from the scarcity of geodetic data. We show that 2D infinite-fault models cannot account for the complexity of the Marmara fault segments, and that current geodetic data in the region west of Istanbul are insufficient to invert for the coupling using a 3D fault geometry. Therefore, we implement a global optimization procedure aimed at identifying the most favorable distribution of GPS stations to explore the strain accumulation. We present here the results of this procedure, which determines both the optimal number and the locations of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution on the shallower part of the fault, even more efficiently with permanent stations. But data closer to the fault, obtainable only by submarine measurements, remain necessary to properly constrain the fault behavior and its potential along-strike coupling variations.
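    A simple stand-in for such a network-design optimization is greedy sequential placement: add one station at a time, each time the candidate site that most improves the objective. The study uses a global optimization procedure; this sketch only illustrates the idea, with `gain` a caller-supplied resolution metric and all names hypothetical.

    ```python
    def greedy_network(candidates, k, gain):
        # Greedy sequential design: at each step add the candidate site that
        # maximizes gain(network, site), e.g. the improvement in resolution
        # on the shallow fault patches from adding that site.
        network, pool = [], list(candidates)
        for _ in range(k):
            best = max(pool, key=lambda site: gain(network, site))
            network.append(best)
            pool.remove(best)
        return network
    ```

    Running this for increasing k, and comparing the marginal gain of each added station, yields both an optimal count and optimal locations, mirroring the two outputs described above.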

  16. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  17. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  18. Resource atlases for multi-atlas brain segmentations with multiple ontology levels based on T1-weighted MRI.

    PubMed

    Wu, Dan; Ma, Ting; Ceritoglu, Can; Li, Yue; Chotiyanonta, Jill; Hou, Zhipeng; Hsu, John; Xu, Xin; Brown, Timothy; Miller, Michael I; Mori, Susumu

    2016-01-15

    Technologies for multi-atlas brain segmentation of T1-weighted MRI images have rapidly progressed in recent years, with highly promising results. This approach, however, relies on a large number of atlases with accurate and consistent structural identifications. Here, we introduce our atlas inventories (n=90), which cover ages 4-82 years with unique hierarchical structural definitions (286 structures at the finest level). This multi-atlas library resource provides the flexibility to choose appropriate atlases for various studies with different age ranges and structure-definition criteria. In this paper, we describe the details of the atlas resources and demonstrate the improved accuracy achievable with a dynamic age-matching approach, in which atlases that most closely match the subject's age are dynamically selected. The advanced atlas creation strategy, together with atlas pre-selection principles, is expected to support the further development of multi-atlas image segmentation. Copyright © 2015 Elsevier Inc. All rights reserved.
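    The dynamic age-matching selection amounts to ranking the atlas library by age distance to the subject. A minimal sketch (atlas identifiers and the selection count are hypothetical, not from the paper):

    ```python
    def select_atlases(atlas_ages, subject_age, n=5):
        # Dynamic age matching: rank atlases by |atlas age - subject age|
        # and keep the n closest library entries for label propagation.
        ranked = sorted(atlas_ages.items(), key=lambda kv: abs(kv[1] - subject_age))
        return [atlas_id for atlas_id, _ in ranked[:n]]
    ```

    For a 41-year-old subject, the 40- and 42-year atlases in the library would be chosen before a pediatric or elderly atlas, which is the pre-selection principle the abstract describes.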

  19. Partitioned-Interval Quantum Optical Communications Receiver

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.

    2013-01-01

    The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.
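    The pre-measurement arithmetic can be illustrated with a short calculation. The formula below is an assumed textbook error rate for ideal homodyne (classical coherent) detection of binary coherent states; the article does not specify the pre-measurement scheme or signal energy, so both are illustrative.

    ```python
    import math

    def homodyne_correct_prob(mean_photons):
        # Probability of a correct binary decision for ideal homodyne
        # detection of coherent states: P_err = 0.5 * erfc(sqrt(2 * n_bar)).
        # Assumed formula, used only to illustrate the pre-measurement idea.
        return 1.0 - 0.5 * math.erfc(math.sqrt(2.0 * mean_photons))

    # Spending 10% of a one-photon-average signal on the pre-measurement
    # segment already lifts the initial guess well above 50/50:
    prior = homodyne_correct_prob(0.10 * 1.0)
    ```

    Under these assumptions the pre-measurement prior comes out at roughly 0.74, consistent with the approximately 70/30 initial guess quoted above, leaving 90% of the signal energy for the high-sensitivity field-cancellation measurement.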

  20. Integration of safety engineering into a cost optimized development program.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  1. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark,Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.

  2. Magnetic resonance enterography has good inter-rater agreement and diagnostic accuracy for detecting inflammation in pediatric Crohn disease.

    PubMed

    Church, Peter C; Greer, Mary-Louise C; Cytter-Kuint, Ruth; Doria, Andrea S; Griffiths, Anne M; Turner, Dan; Walters, Thomas D; Feldman, Brian M

    2017-05-01

    Magnetic resonance enterography (MRE) is increasingly relied upon for noninvasive assessment of intestinal inflammation in Crohn disease. However, very few studies have examined the diagnostic accuracy of individual MRE signs in children. We have created an MR-based multi-item measure of intestinal inflammation in children with Crohn disease - the Pediatric Inflammatory Crohn's MRE Index (PICMI). To inform item selection for this instrument, we explored the inter-rater agreement and diagnostic accuracy of individual MRE signs of inflammation in pediatric Crohn disease and compared our findings with the reference standards of the weighted Pediatric Crohn's Disease Activity Index (wPCDAI) and C-reactive protein (CRP). In this cross-sectional single-center study, MRE studies in 48 children with diagnosed Crohn disease (66% male, median age 15.5 years) were reviewed by two independent radiologists for the presence of 15 MRE signs of inflammation. Using kappa statistics we explored inter-rater agreement for each MRE sign across 10 anatomical segments of the gastrointestinal tract. We correlated MRE signs with the reference standards using correlation coefficients. Radiologists measured the length of inflamed bowel in each segment of the gastrointestinal tract. In each segment, MRE signs were scored as either binary (0-absent, 1-present), or ordinal (0-absent, 1-mild, 2-marked). These segmental scores were weighted by the length of involved bowel and were summed to produce a weighted score per patient for each MRE sign. Using a combination of wPCDAI≥12.5 and CRP≥5 to define active inflammation, we calculated area under the receiver operating characteristic curve (AUC) for each weighted MRE sign. Bowel wall enhancement, wall T2 hyperintensity, wall thickening and wall diffusion-weighted imaging (DWI) hyperintensity were most commonly identified. Inter-rater agreement was best for decreased motility and wall DWI hyperintensity (kappa≥0.64). 
Correlation between MRE signs and wPCDAI was higher than with CRP. AUC was highest (≥0.75) for ulcers, wall enhancement, wall thickening, wall T2 hyperintensity and wall DWI hyperintensity. Some MRE signs had good inter-rater agreement and AUC for detection of inflammation in children with Crohn disease.
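    The per-patient weighting scheme described above, each segmental score multiplied by the length of involved bowel in that segment and summed over the segments, can be sketched directly (the length unit is not stated in the abstract and is illustrative here):

    ```python
    def weighted_sign_score(segments):
        # Per-patient score for one MRE sign: each segmental score (binary
        # 0/1 or ordinal 0/1/2) is weighted by the measured length of
        # involved bowel in that segment, then summed over all segments.
        return sum(score * length for score, length in segments)
    ```

    For example, a marked finding over a long segment contributes more than a mild finding over a short one, which is what makes the resulting score sensitive to disease extent as well as severity.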

  3. Contribution of calcaneal and leg segment rotations to ankle joint dorsiflexion in a weight-bearing task.

    PubMed

    Chizewski, Michael G; Chiu, Loren Z F

    2012-05-01

    Joint angle is the relative rotation between two segments where one is a reference and assumed to be non-moving. However, rotation of the reference segment will influence the system's spatial orientation and joint angle. The purpose of this investigation was to determine the contribution of leg and calcaneal rotations to ankle rotation in a weight-bearing task. Forty-eight individuals performed partial squats recorded using a 3D motion capture system. Markers on the calcaneus and leg were used to model leg and calcaneal segment, and ankle joint rotations. Multiple linear regression was used to determine the contribution of leg and calcaneal segment rotations to ankle joint dorsiflexion. Regression models for left (R^2 = 0.97) and right (R^2 = 0.97) ankle dorsiflexion were significant. Sagittal plane leg rotation had a positive influence (left: β=1.411; right: β=1.418) while sagittal plane calcaneal rotation had a negative influence (left: β=-0.573; right: β=-0.650) on ankle dorsiflexion. Sagittal plane rotations of the leg and calcaneus were positively correlated (left: r=0.84, P<0.001; right: r=0.80, P<0.001). During a partial squat, the calcaneus rotates forward. Simultaneous forward calcaneal rotation with ankle dorsiflexion reduces total ankle dorsiflexion angle. Rear foot posture is reoriented during a partial squat, allowing greater leg rotation in the sagittal plane. Segment rotations may provide greater insight into movement mechanics that cannot be explained via joint rotations alone. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Watershed-based segmentation of the corpus callosum in diffusion MRI

    NASA Astrophysics Data System (ADS)

    Freitas, Pedro; Rittner, Leticia; Appenzeller, Simone; Lapa, Aline; Lotufo, Roberto

    2012-02-01

    The corpus callosum (CC) is one of the most important white matter structures of the brain, interconnecting the two cerebral hemispheres, and is related to several neurodegenerative diseases. Since segmentation is usually the first step for studies in this structure, and manual volumetric segmentation is a very time-consuming task, it is important to have a robust automatic method for CC segmentation. We propose here an approach for fully automatic 3D segmentation of the CC in the magnetic resonance diffusion tensor images. The method uses the watershed transform and is performed on the fractional anisotropy (FA) map weighted by the projection of the principal eigenvector in the left-right direction. The section of the CC in the midsagittal slice is used as seed for the volumetric segmentation. Experiments with real diffusion MRI data showed that the proposed method is able to quickly segment the CC without any user intervention, with great results when compared to manual segmentation. Since it is simple, fast and does not require parameter settings, the proposed method is well suited for clinical applications.

  5. Segmentation of Hyperacute Cerebral Infarcts Based on Sparse Representation of Diffusion Weighted Imaging.

    PubMed

    Zhang, Xiaodong; Jing, Shasha; Gao, Peiyi; Xue, Jing; Su, Lu; Li, Weiping; Ren, Lijie; Hu, Qingmao

    2016-01-01

    Segmentation of infarcts at hyperacute stage is challenging as they exhibit substantial variability which may even be hard for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items including three volumes of diffusion weighted imaging and a computed asymmetry map are employed to extract patch features which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation to stabilize the sparse code. To decrease computation cost and to reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. It is shown that the proposed method could handle well infarcts with intensity variability and ill-defined edges to yield significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a potential tool to quantify infarcts from diffusion weighted imaging at hyperacute stage with accuracy and speed to assist the decision making especially for thrombolytic therapy.
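    The elastic-net penalty that stabilizes the sparse codes has a simple closed-form proximal operator: soft-threshold by the L1 weight, then shrink by the L2 weight. A generic one-coefficient sketch (not the authors' dictionary-learning pipeline; parameter names are illustrative):

    ```python
    def elastic_net_prox(v, l1, l2):
        # argmin_x 0.5*(x - v)^2 + l1*|x| + (l2/2)*x^2
        # = sign(v) * max(|v| - l1, 0) / (1 + l2).
        # Small coefficients are zeroed (sparsity from the L1 term) and the
        # survivors are shrunk (stability from the L2 term).
        sign = 1.0 if v >= 0 else -1.0
        return sign * max(abs(v) - l1, 0.0) / (1.0 + l2)
    ```

    Compared with a pure L1 (or L0) constraint, the added quadratic term makes the code vary smoothly with the input patch, which is the stabilization effect the abstract refers to.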

  6. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system, focusing on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system and, more particularly, on the development of the optimized system using new techniques. The methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions from the model results are summarized here. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost than for the case minimizing Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case than for the cost-minimization case. Therefore, a 53% increase in Design, Development, Test and Evaluation cost buys a 67% reduction in Gross Liftoff Weight. Second, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, at the Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, compared to 3162 pounds per square inch for the Design, Development, Test and Evaluation cost optimization case. Both values are close to the roughly 3000 pounds per square inch of the Space Shuttle Main Engine.

  7. Birth weight and infant growth: optimal infant weight gain versus optimal infant weight.

    PubMed

    Xiong, Xu; Wightkin, Joan; Magnus, Jeanette H; Pridjian, Gabriella; Acuna, Juan M; Buekens, Pierre

    2007-01-01

    Infant growth assessment often focuses on "optimal" infant weights and lengths at specific ages, while de-emphasizing infant weight gain. The objective of this study was to examine infant growth patterns by measuring infant weight gain relative to birth weight. We conducted this study based on data collected in a prospective cohort study including 3,302 births with follow-up examinations of infants between the ages of 8 and 18 months. All infants were participants in the Louisiana State Women, Infant and Children Supplemental Food Program between 1999 and 2001. Growth was assessed by infant weight gain percentage (IWG%, defined as infant weight gain divided by birth weight) as well as by mean z-scores and percentiles for weight-for-age, length-for-age, and weight-for-length calculated from growth charts published by the U.S. Centers for Disease Control (CDC). An inverse relationship was noted between birth weight category and IWG% (from 613.9% for infants with birth weights <1500 g to 151.3% for infants with birth weights of 4000 g or more). In contrast, low birth weight infants had lower weight-for-age and weight-for-length z-scores and percentiles than normal birth weight infants according to CDC growth charts. Although low birth weight infants had lower anthropometric measures than a national reference population, they showed significant catch-up growth; high birth weight infants showed significant slow-down growth. We suggest that growth assessments should compare infants' anthropometric data to their own previous growth measures as well as to a reference population. Further studies are needed to identify optimal ranges of infant weight gain.
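    The IWG% index defined above is straightforward to compute. The weights below are invented for illustration and only echo the reported inverse trend; they are not study data.

```python
def iwg_percent(current_weight_g, birth_weight_g):
    """Infant weight gain percentage: weight gain relative to birth weight."""
    return (current_weight_g - birth_weight_g) / birth_weight_g * 100.0

# Invented example weights: a similar absolute gain is a far larger IWG%
# for a low birth weight infant than for a high birth weight one
low_bw_iwg = iwg_percent(9800, 1400)    # 600.0
high_bw_iwg = iwg_percent(10500, 4200)  # 150.0
```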

  8. Effects of Birth Weight on Anterior Segment Measurements in Full-Term Children Without Low Birth Weight by Dual-Scheimpflug Analyzer.

    PubMed

    Yeter, Volkan; Aritürk, Nurşen; Birinci, Hakki; Süllü, Yüksel; Güngör, İnci

    2015-10-01

    To evaluate the effects of birth weight on ocular anterior segment parameters in full-term children without low birth weight using the Galilei Dual-Scheimpflug Analyzer. Retrospective cohort study. The right eyes from 110 healthy children, 3-6 years of age, were scanned with the Galilei Dual-Scheimpflug Analyzer. A total of 78 eyes were measured in full-term children with birth weight of >2500 g. Central, paracentral, pericentral, and the thinnest corneal thicknesses; anterior and posterior keratometry (average, steep, flat); axial curvatures; asphericity of cornea; anterior chamber depth and volume; and iridocorneal angle values were measured. Axial length, lens thickness, and vitreous length were obtained by ultrasound biometry. The mean age of children was 55.86 ± 12.52 (mean ± SD) months. Mean birth weight and gestational age were 3426.3 ± 545 g and 39.4 ± 1.2 weeks, respectively. Although lens thickness, vitreous length, axial length, and anterior chamber volume were moderately correlated with birth weight (P < .05), there was no relationship between birth weight and anterior chamber depth. With the exception of pericentral corneal thickness, all regions of corneal thicknesses were correlated with birth weight (P < .05). Birth weight was negatively correlated with anterior curvature (P < .05) and had no relationship to posterior curvature. While central and paracentral axial curvatures correlated with birth weight (P < .05), pericentral axial curvature did not. Preschoolers who were born heavier had thicker cornea and lens, longer axial length, and flatter corneal curve. The thicknesses and axial curves of central cornea within 7 mm may be particularly associated with birth weight. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Optimizing the 3D-reconstruction technique for serial block-face scanning electron microscopy.

    PubMed

    Wernitznig, Stefan; Sele, Mariella; Urschler, Martin; Zankel, Armin; Pölt, Peter; Rind, F Claire; Leitinger, Gerd

    2016-05-01

    Elucidating the anatomy of neuronal circuits and localizing the synaptic connections between neurons, can give us important insights in how the neuronal circuits work. We are using serial block-face scanning electron microscopy (SBEM) to investigate the anatomy of a collision detection circuit including the Lobula Giant Movement Detector (LGMD) neuron in the locust, Locusta migratoria. For this, thousands of serial electron micrographs are produced that allow us to trace the neuronal branching pattern. The reconstruction of neurons was previously done manually by drawing cell outlines of each cell in each image separately. This approach was very time consuming and troublesome. To make the process more efficient a new interactive software was developed. It uses the contrast between the neuron under investigation and its surrounding for semi-automatic segmentation. For segmentation the user sets starting regions manually and the algorithm automatically selects a volume within the neuron until the edges corresponding to the neuronal outline are reached. Internally the algorithm optimizes a 3D active contour segmentation model formulated as a cost function taking the SEM image edges into account. This reduced the reconstruction time, while staying close to the manual reference segmentation result. Our algorithm is easy to use for a fast segmentation process, unlike previous methods it does not require image training nor an extended computing capacity. Our semi-automatic segmentation algorithm led to a dramatic reduction in processing time for the 3D-reconstruction of identified neurons. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. MR and CT data with multiobserver delineations of organs in the pelvic area-Part of the Gold Atlas project.

    PubMed

    Nyholm, Tufve; Svensson, Stina; Andersson, Sebastian; Jonsson, Joakim; Sohlin, Maja; Gustafsson, Christian; Kjellén, Elisabeth; Söderström, Karin; Albertsson, Per; Blomqvist, Lennart; Zackrisson, Björn; Olsson, Lars E; Gunnlaugsson, Adalsteinn

    2018-03-01

    We describe a public dataset with MR and CT images of patients performed in the same position with both multiobserver and expert consensus delineations of relevant organs in the male pelvic region. The purpose was to provide means for training and validation of segmentation algorithms and methods to convert MR to CT-like data, i.e., so-called synthetic CT (sCT). T1- and T2-weighted MR images as well as CT data were collected for 19 patients at three different departments. Five experts delineated nine organs for each patient based on the T2-weighted MR images. An automatic method was used to fuse the delineations. Starting from each fused delineation, a consensus delineation was agreed upon by the five experts for each organ and patient. Segmentation overlap between the user delineations and the consensus delineations was measured to describe the spread of the collected data. Finally, an open-source software was used to create deformation vector fields describing the relation between MR and CT images to further increase the usability of the dataset. The dataset has been made publicly available to be used for academic purposes, and can be accessed from https://zenodo.org/record/583096. The dataset provides a useful source for training and validation of segmentation algorithms as well as methods to convert MR to CT-like data (sCT). To give some examples: the T2-weighted MR images with their consensus delineations can directly be used as a template in an existing atlas-based segmentation engine; the expert delineations are useful to validate the performance of a segmentation algorithm, as they provide a way to measure variability among users which can be compared with the result of an automatic segmentation; and the pairwise deformably registered MR and CT images can be a source for an atlas-based sCT algorithm or for validation of sCT algorithms. © 2018 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  11. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are: (1) incorporate rather than exclude partially identified sample segments in the estimation procedure; (2) try to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) use the maximum likelihood estimator, although the weighted least squares estimator and the least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.

  12. KSC-08pd3571

    NASA Image and Video Library

    2008-11-06

    CAPE CANAVERAL, Fla. – Inside the Vehicle Assembly Building high bay 4 at NASA's Kennedy Space Center in Florida, Ares I-X upper stage simulator segments are lined up. Their protective blue shrink-wrapped covers used for shipping are being removed, as seen on the segments at left and in the back. The upper stage simulator will be used in the test flight identified as Ares I-X in 2009. The segments will simulate the mass and the outer mold line, and will account for more than 100 feet of the total vehicle height of 327 feet. The simulator comprises 11 segments that are approximately 18 feet in diameter. Most of the segments will be approximately 10 feet high, ranging in weight from 18,000 to 60,000 pounds, for a total of approximately 450,000 pounds. Photo credit: NASA/Troy Cryder

  13. An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D. D.

    2018-05-01

    Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity in reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve optimal scanner placement problems. If the object to be scanned is simplified into discretized wall segments, any candidate viewpoint can be evaluated by a score table representing the segments it sees under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search within the entire workspace. The experimental environments in this paper were simulated from two buildings located on the University of Calgary campus. Compared with the "brute force" strategy in terms of solution quality and runtime, it is shown that the proposed strategy provides a scanning network of comparable quality with more than a 70% time saving.
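    The core combinatorial step (pick a minimum set of viewpoints whose visibility sets cover every wall segment) is a set-cover problem; a common heuristic is the greedy sketch below. The viewpoint names and visibility sets are invented, and the paper's hierarchical densification is not shown.

```python
# Each candidate viewpoint maps to the wall segments it can see under the
# scanning-geometry constraints (these visibility sets are made up).
visibility = {
    "v1": {0, 1, 2, 3},
    "v2": {3, 4, 5},
    "v3": {5, 6, 7},
    "v4": {0, 6, 7},
    "v5": {2, 4},
}
all_segments = set(range(8))

def greedy_viewpoints(visibility, targets):
    """Greedy set cover: repeatedly pick the viewpoint that sees the most
    still-uncovered wall segments."""
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        if not visibility[best] & uncovered:
            break  # remaining segments are not visible from any viewpoint
        chosen.append(best)
        uncovered -= visibility[best]
    return chosen, uncovered

plan, missed = greedy_viewpoints(visibility, all_segments)
```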

  14. Is channel segmentation necessary to reach a multiethnic population with weight-related health promotion? An analysis of use and perception of communication channels

    PubMed Central

    Nierkens, Vera; Cremer, Stephan W.; Verhoeff, Arnoud; Stronks, Karien

    2014-01-01

    Objective: To explore similarities and differences in the use and perception of communication channels to access weight-related health promotion among women in three ethnic minority groups. The ultimate aim was to determine whether similar channels might reach ethnic minority women in general or whether segmentation to ethnic groups would be required. Design: Eight ethnically homogeneous focus groups were conducted among 48 women of Ghanaian, Antillean/Aruban, or Afro-Surinamese background living in Amsterdam. Our questions concerned which communication channels they usually used to access weight-related health advice or information about programs and whose information they most valued. A content analysis of the data was performed. Results: The participants mentioned four channels – regular and traditional healthcare, general or ethnically specific media, multiethnic and ethnic gatherings, and interpersonal communication with peers in the Netherlands and with people in the home country. Ghanaian women emphasized ethnically specific channels (e.g., traditional healthcare, Ghanaian churches). They were comfortable with these channels and trusted them. They mentioned fewer general channels – mainly limited to healthcare – and if discussed, negative perceptions were expressed. Antillean women mentioned the use of ethnically specific channels (e.g., communication with Antilleans in the home country) on balance with general audience–oriented channels (e.g., regular healthcare). Perceptions were mixed. Surinamese participants discussed, in a positive manner, the use of general audience–oriented channels, while they said they did not use traditional healthcare or advice from Surinam. Local language proficiency, length of residence in the Netherlands, and the approaches and messages received seemed to explain channel use and perception. Conclusion: The predominant differences in channel use and perception among the ethnic groups indicate a need for channel segmentation to reach a multiethnic target group with weight-related health promotion. The study results reveal possible segmentation criteria besides ethnicity, such as local language proficiency and time since migration, worthy of further investigation. PMID:24750018

  15. Is channel segmentation necessary to reach a multiethnic population with weight-related health promotion? An analysis of use and perception of communication channels.

    PubMed

    Hartman, Marieke A; Nierkens, Vera; Cremer, Stephan W; Verhoeff, Arnoud; Stronks, Karien

    2015-01-01

    To explore similarities and differences in the use and perception of communication channels to access weight-related health promotion among women in three ethnic minority groups. The ultimate aim was to determine whether similar channels might reach ethnic minority women in general or whether segmentation to ethnic groups would be required. Eight ethnically homogeneous focus groups were conducted among 48 women of Ghanaian, Antillean/Aruban, or Afro-Surinamese background living in Amsterdam. Our questions concerned which communication channels they usually used to access weight-related health advice or information about programs and whose information they most valued. A content analysis of the data was performed. The participants mentioned four channels - regular and traditional health care, general or ethnically specific media, multiethnic and ethnic gatherings, and interpersonal communication with peers in the Netherlands and with people in the home country. Ghanaian women emphasized ethnically specific channels (e.g., traditional health care, Ghanaian churches). They were comfortable with these channels and trusted them. They mentioned fewer general channels - mainly limited to health care - and if discussed, negative perceptions were expressed. Antillean women mentioned the use of ethnically specific channels (e.g., communication with Antilleans in the home country) on balance with general audience-oriented channels (e.g., regular health care). Perceptions were mixed. Surinamese participants discussed, in a positive manner, the use of general audience-oriented channels, while they said they did not use traditional health care or advice from Surinam. Local language proficiency, length of residence in the Netherlands, and the approaches and messages received seemed to explain channel use and perception. 
The predominant differences in channel use and perception among the ethnic groups indicate a need for channel segmentation to reach a multiethnic target group with weight-related health promotion. The study results reveal possible segmentation criteria besides ethnicity, such as local language proficiency and time since migration, worthy of further investigation.

  16. RFA-cut: Semi-automatic segmentation of radiofrequency ablation zones with and without needles via optimal s-t-cuts.

    PubMed

    Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Chen, Xiaojun; Hann, Alexander; Boechat, Pedro; Yu, Wei; Freisleben, Bernd; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Schmalstieg, Dieter

    2015-01-01

    In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients that had RFA treatments of Hepatocellular Carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from the clinical routine, and as evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer-aided medical segmentation tasks. Compared with pure manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images containing (DSC = 75.9%) and not containing (DSC = 78.1%) the RFA needles still in place. Additionally, we found no statistically significant difference (p < 0.423) between the segmentation results of the subgroups under a Mann-Whitney test. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles is reported, and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in clinical practice and need very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the entire needle. This reduces patient stress and the risks and costs associated with a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation, as the real needle position is known.
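    The evaluation metric used above, the Dice Similarity Coefficient, is easy to sketch. The toy masks below stand in for an algorithmic and an expert RFA-zone segmentation; they are not data from the study.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: a 4x4 square and the same square shifted one row down
auto = np.zeros((8, 8), bool)
auto[2:6, 2:6] = True       # 16 pixels
expert = np.zeros((8, 8), bool)
expert[3:7, 2:6] = True     # 16 pixels, 12 of them overlapping
score = dice(auto, expert)  # 2 * 12 / (16 + 16) = 0.75
```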

  17. Robustness analysis of superpixel algorithms to image blur, additive Gaussian noise, and impulse noise

    NASA Astrophysics Data System (ADS)

    Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming

    2017-11-01

    Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms with regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either weakened the object boundaries or added extra information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for the ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For additive Gaussian and impulse noise, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixels demonstrated optimal compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. In conclusion, more robust superpixel algorithms must be developed to solve real-world problems effectively.
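    Of the quantitative measures listed above, boundary recall (BR) is representative: the fraction of ground-truth boundary pixels lying within a small tolerance of a superpixel boundary. A minimal sketch, with toy label maps in place of Berkeley ground truth:

```python
import numpy as np
from scipy import ndimage

def boundary_mask(labels):
    """Pixels whose right or lower neighbor carries a different label."""
    m = np.zeros(labels.shape, bool)
    m[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    m[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return m

def boundary_recall(superpixels, ground_truth, tol=1):
    """Fraction of ground-truth boundary pixels within `tol` pixels of a
    superpixel boundary."""
    gt_b = boundary_mask(ground_truth)
    sp_b = boundary_mask(superpixels)
    # Euclidean distance of every pixel to the nearest superpixel boundary
    dist = ndimage.distance_transform_edt(~sp_b)
    return (dist[gt_b] <= tol).mean()

# Toy data: vertical ground-truth edge, superpixel edge one column off
gt = np.zeros((8, 8), int)
gt[:, 4:] = 1
sp = np.zeros((8, 8), int)
sp[:, 5:] = 1
br = boundary_recall(sp, gt, tol=1)
```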

  18. High Quality Facade Segmentation Based on Structured Random Forest, Region Proposal Network and Rectangular Fitting

    NASA Astrophysics Data System (ADS)

    Rahmani, K.; Mayer, H.

    2018-05-01

    In this paper we present a pipeline for high quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), and rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.

  19. Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.

    PubMed

    Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei

    2015-01-01

    Utilization of high energy photons (>10 MV) with an optimal weight using a mixed energy technique is a practical way to generate a homogeneous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study presents a model for estimating this optimal weight for day-to-day clinical usage. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placement of opposed wedge-paired isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; these calculations were then repeated ten times in each individual patient, incorporating 18 MV photons with a ten percent increase in weight per step. For each calculation two indexes, the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV, 95%IDL), were measured from the DVH data, and the normalized values were plotted in a graph. The optimal weight of 18 MV photons was defined as the intersection point of the Dmax and VCTV, 95%IDL graphs. To create a model predicting this optimal weight, multiple linear regression analysis was performed on breast and tangential-field parameters. The best fitting model for predicting the optimal 18 MV photon weight in breast radiotherapy using the mixed energy technique incorporated chest wall separation plus central lung distance (adjusted R² = 0.776). In conclusion, this study presents a model for estimating the optimal beam weighting in breast radiotherapy using the mixed photon energy technique for routine day-to-day clinical usage.
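    Finding the intersection of the two normalized index curves can be done by linear interpolation at the sign change of their difference. The linear curves below are invented placeholders for the per-patient normalized Dmax and VCTV, 95%IDL values, not clinical data.

```python
import numpy as np

# Hypothetical normalized indexes versus 18 MV weight, sampled in 10% steps
w = np.arange(0, 101, 10, dtype=float)
dmax_norm = 1.00 - 0.004 * w   # normalized maximum CTV dose (falls with weight)
v95_norm = 0.90 - 0.001 * w    # normalized V(CTV, 95% IDL) (falls more slowly)

# Optimal weight = crossing point of the two curves
diff = dmax_norm - v95_norm
i = int(np.argmax(diff < 0))   # first sample past the crossing
w_opt = w[i - 1] + (w[i] - w[i - 1]) * diff[i - 1] / (diff[i - 1] - diff[i])
```

With these illustrative slopes the curves cross at a weight of about 33%, i.e. roughly one third 18 MV photons.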

  20. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
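    One plausible instantiation of overlap-based, region-wise precision and recall (with their F-measure combination) is sketched below; the exact matching rule in the paper may differ, and the 4×4 label maps are toy data.

```python
import numpy as np

def directional_overlap(a, b):
    """For each region of label map `a`, count the pixels falling in its
    best-matching region of `b`; normalize by the total pixel count."""
    matched = 0
    for label in np.unique(a):
        matched += np.bincount(b[a == label]).max()
    return matched / a.size

def region_precision_recall_f(seg, ref, beta=1.0):
    precision = directional_overlap(seg, ref)  # how well segments fit the reference
    recall = directional_overlap(ref, seg)     # how well reference regions are recovered
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

# Toy example: the reference has two vertical regions; the segmentation
# over-grows the left region by one column
ref = np.array([[0, 0, 1, 1]] * 4)
seg = np.array([[0, 0, 0, 1]] * 4)
p, r, f = region_precision_recall_f(seg, ref)
```

Plotting (precision, recall) pairs like these in a precision-recall space is what lets over- and under-segmentation be read off as the trade-off the abstract describes.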

  1. Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging

    NASA Astrophysics Data System (ADS)

    Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping

    2011-03-01

    In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler and Doppler variance images were compared. Blood vessels down to the capillary level could be resolved with the optimized optical coherence color Doppler and Doppler variance methods. For in vivo imaging of human eyes, the bulk-motion-induced phase must be identified and removed before using the color Doppler method. It was found that the Doppler variance method is not sensitive to bulk motion and can be used without removing the bulk phase. A novel, simple and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers. The algorithm was based on the detected OCT signal intensity difference between different layers. A spectrometer-based Fourier domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The 3-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 × 5 mm².
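    An intensity-difference layer pick of the kind described (the RPE is typically the brightest reflector in each A-line) can be sketched on synthetic data; the toy B-scan below is random noise with one bright pixel per A-line, not real OCT.

```python
import numpy as np

depth, width = 64, 32
rng = np.random.default_rng(2)
bscan = rng.uniform(0.0, 0.2, (depth, width))  # background speckle

# Place a bright "RPE" pixel at a smoothly varying depth in each A-line
rpe_true = (30 + 5 * np.sin(np.linspace(0, np.pi, width))).astype(int)
bscan[rpe_true, np.arange(width)] = 1.0

# Intensity-based segmentation: per A-line depth of maximum signal
rpe_est = bscan.argmax(axis=0)
```

The recovered depth profile then splits each A-line into a retinal part (above the RPE) and a choroidal part (below it).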

  2. SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M

    Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m - fσ ≤ I_PET ≤ m + fσ, so that a neighboring voxel is appended to the region based on its similarity to the current region. When the relaxing factor f (f ≥ 0) is increased, the resulting volume increases monotonically, with a sharp jump when the region grows into the background. The optimal f that separates the tumor from the background is defined as the first local curvature maximum of an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The Dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identifies the optimal relaxing factor in ARG for tumor segmentation in PET. 
This work was supported in part by National Cancer Institute Grant R01 CA172638. The dataset was provided by AAPM TG211.
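The ARG criterion and the f-volume curve are simple to prototype. The sketch below is not the authors' implementation: the toy image, the noise floor on σ, and the largest-jump rule (a crude stand-in for the paper's error-function fit and maximum-curvature selection) are all assumptions made for illustration.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, f, s_floor=0.3):
    """Adaptive region growing: a neighboring pixel joins the region when its
    intensity lies in [m - f*s, m + f*s], with m, s the mean and std of the
    current region (s floored to avoid a degenerate one-pixel start)."""
    region = {seed}
    queue = deque([seed])
    vals = [img[seed]]
    while queue:
        y, x = queue.popleft()
        m = np.mean(vals)
        s = max(np.std(vals), s_floor)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (y + dy, x + dx)
            if (0 <= p[0] < img.shape[0] and 0 <= p[1] < img.shape[1]
                    and p not in region and m - f * s <= img[p] <= m + f * s):
                region.add(p)
                vals.append(img[p])
                queue.append(p)
    return len(region)

# Toy "PET" slice: a bright 4x4 tumor on a dark, noisy background.
rng = np.random.default_rng(0)
img = np.full((12, 12), 1.0) + rng.normal(0, 0.2, (12, 12))
img[4:8, 4:8] += 9.0

fs = np.linspace(0.5, 40, 80)
vols = np.array([grow_region(img, (5, 5), f) for f in fs])
# The volume jumps sharply once the region leaks into the background;
# take the last f before the largest jump as the "optimal" relaxing factor.
f_opt = fs[np.argmax(np.diff(vols))]
print(f_opt, vols.min(), vols.max())
```

The sharp increase in volume at the leak point is exactly the behavior the abstract exploits; a smooth error-function fit, as in the paper, is more robust than the raw difference used here.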

  3. A novel fully automatic multilevel thresholding technique based on optimized intuitionistic fuzzy sets and Tsallis entropy for MR brain tumor image segmentation.

    PubMed

    Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita

    2018-03-01

In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and Tsallis entropy has been proposed for the automatic delineation of tumors from magnetic resonance images with vague boundaries and poor contrast. This technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for complex tumorous images with imprecise gray levels. To further boost computational speed, mutation-based particle swarm optimization is used to select the optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated and real low-grade glioma tumor volumes taken from the MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset and on clinical tumor images, so as to corroborate its generality. The designed technique achieves an average Dice overlap of 0.82010, 0.78610, and 0.94170 on the three datasets, respectively. Further, a comparative analysis has also been made with eight existing multilevel thresholding implementations to show the superiority of the designed technique. In comparison, the results indicate a mean improvement in Dice of 4.00% (p < 0.005), 9.60% (p < 0.005), and 3.58% (p < 0.005), respectively, relative to the fuzzy Tsallis approach.
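As an illustration of the entropy criterion alone, the sketch below finds a single Tsallis-entropy threshold on a synthetic bimodal histogram by brute force; the paper's intuitionistic fuzzy sets, multiple thresholds, and mutation-based PSO are omitted, and the value q = 0.8 is an arbitrary choice.

```python
import numpy as np

def tsallis_entropy(p, q=0.8):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) of a normalized
    probability vector (zero bins are skipped)."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def best_threshold(hist, q=0.8):
    """Exhaustive search for the threshold maximizing the pseudo-additive
    criterion S_q(A) + S_q(B) + (1 - q) S_q(A) S_q(B)."""
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, len(p) - 1):
        a, b = p[:t], p[t:]
        if a.sum() == 0 or b.sum() == 0:
            continue
        sa = tsallis_entropy(a / a.sum(), q)
        sb = tsallis_entropy(b / b.sum(), q)
        val = sa + sb + (1 - q) * sa * sb
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Bimodal toy histogram: background around bin 60, "tumor" around bin 180.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 1000)])
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
t = best_threshold(hist)
print(t)
```

Brute force is adequate for one threshold on a 256-bin histogram; the combinatorial blow-up for multiple thresholds is what motivates PSO in the paper.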

  4. Coronary artery segmentation in X-ray angiograms using gabor filters and differential evolution.

    PubMed

    Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Cordova-Fraga, Teodoro; Aviña-Cervantes, Juan Gabriel

    2018-08-01

Segmentation of coronary arteries in X-ray angiograms represents an essential task for computer-aided diagnosis, since it can help cardiologists in diagnosing and monitoring vascular abnormalities. Because the main disadvantages of X-ray angiograms are nonuniform illumination and weak contrast between blood vessels and the image background, different vessel enhancement methods have been introduced. In this paper, a novel method for blood vessel enhancement based on Gabor filters tuned using the optimization strategy of differential evolution (DE) is proposed. Because the Gabor filters are governed by three different parameters, the optimal selection of those parameters is highly desirable in order to maximize the vessel detection rate while reducing the computational cost of the training stage. To obtain the optimal set of parameters for the Gabor filters, the area (Az) under the receiver operating characteristic curve is used as the objective function. In the experimental results, the proposed method achieves Az=0.9388 on a training set of 40 images, and on a test set of 40 images it obtains the highest performance, Az=0.9538, compared with six state-of-the-art vessel detection methods. Finally, the proposed method achieves an accuracy of 0.9423 for vessel segmentation on the test set. In addition, the experimental results have also shown that the proposed method can be highly suitable for clinical decision support in terms of computational time and vessel segmentation performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
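The tuning loop itself is easy to reproduce at toy scale. The sketch below (a synthetic 32x32 "angiogram", a simplified two-parameter Gabor kernel with fixed orientation, SciPy's differential evolution, and scikit-learn's ROC AUC standing in for Az) is an assumption-laden illustration, not the paper's three-parameter setup.

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.optimize import differential_evolution
from sklearn.metrics import roc_auc_score

def gabor_kernel(sigma, lam, size=15):
    """Real Gabor kernel with a fixed (vertical-ridge) orientation;
    orientation and aspect ratio are frozen here for brevity."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x / lam)

# Toy angiogram: one vertical "vessel" ridge plus background noise.
rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.3, (32, 32))
img[:, 15:17] += 1.0
labels = np.zeros((32, 32), bool)
labels[:, 15:17] = True

def neg_az(params):
    """DE objective: negative area under the ROC curve (Az) obtained by
    ranking pixels by their Gabor filter response."""
    sigma, lam = params
    resp = convolve(img, gabor_kernel(sigma, lam))
    return -roc_auc_score(labels.ravel(), resp.ravel())

# DE searches the (sigma, lambda) plane for the kernel with the best Az.
res = differential_evolution(neg_az, bounds=[(1.0, 6.0), (2.0, 12.0)],
                             seed=0, maxiter=20, tol=1e-3)
print(round(-res.fun, 3))
```

Using Az directly as the DE objective, as here, mirrors the paper's design choice of optimizing detection performance rather than a proxy such as filter energy.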

  5. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing a cost function, and can be used for either constrained or unconstrained optimization. The control process involves 1 to 7 degree-of-freedom perturbations per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.

  6. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsager, Anne Sofie, E-mail: asko@hst.aau.dk; Østergaard, Lasse Riis; Fortunati, Valerio

    2015-04-15

Purpose: An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. Methods: A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and with multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. Results: A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. Conclusions: This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.

  7. Local and global evaluation for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Su, Tengfei; Zhang, Shengwei

    2017-08-01

In object-based image analysis, producing accurate segmentation is usually an important issue that needs to be solved before image classification or target recognition. The study of segmentation evaluation methods is key to solving this issue. Almost all existing evaluation strategies focus only on global performance assessment. However, these methods are ineffective in the situation where two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can both locally and globally quantify segmentation incorrectness. In doing so, region-overlapping metrics are utilized to quantify each reference geo-object's over- and under-segmentation errors. These quantified error values are used to produce segmentation error maps, which have effective illustrative power to delineate local segmentation error patterns. The error values for all of the reference geo-objects are aggregated using area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
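A minimal version of the per-object and area-weighted computation might look as follows; the specific overlap formulas here are one common choice and may differ from the paper's exact metrics.

```python
import numpy as np

def per_object_errors(ref, seg):
    """For each reference geo-object, quantify over- and under-segmentation
    with region-overlap metrics. `ref` and `seg` are integer label maps;
    label 0 is treated as background."""
    errors = {}
    for r_id in np.unique(ref):
        if r_id == 0:
            continue
        r = ref == r_id
        # the segment that overlaps this reference object the most
        s_id = np.bincount(seg[r]).argmax()
        s = seg == s_id
        inter = np.logical_and(r, s).sum()
        over = 1 - inter / s.sum()    # segment spills beyond the object
        under = 1 - inter / r.sum()   # segment misses part of the object
        errors[int(r_id)] = (r.sum(), over, under)
    return errors

def global_index(errors):
    """Area-weighted aggregation of per-object errors into global indicators."""
    areas = np.array([a for a, _, _ in errors.values()], float)
    over = np.array([o for _, o, _ in errors.values()])
    under = np.array([u for _, _, u in errors.values()])
    w = areas / areas.sum()
    return float(w @ over), float(w @ under)

# Two reference objects; the segmentation misses half of one and inflates the other.
ref = np.zeros((8, 8), int)
ref[0:4, 0:4] = 1
ref[4:8, 4:8] = 2
seg = np.zeros((8, 8), int)
seg[0:4, 0:2] = 1   # object 1 under-segmented
seg[3:8, 3:8] = 2   # object 2 over-segmented
g_over, g_under = global_index(per_object_errors(ref, seg))
print(g_over, g_under)
```

The dictionary returned by `per_object_errors` is exactly what an error map visualizes locally, while `global_index` is the area-weighted summation the abstract describes.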

  8. Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.

    PubMed

    Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen

    2008-02-01

    A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
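The KNN pipeline maps directly onto standard tooling. The sketch below uses synthetic two-class data and scikit-learn; the feature construction, class separations, and 0.5 probability threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features per voxel: (T2 intensity, IR intensity, x, y) -- mimicking the
# abstract's recipe of voxel intensities plus spatial information.
rng = np.random.default_rng(3)
n = 400
labels = rng.integers(0, 2, n)             # two toy tissue classes
t2 = rng.normal(labels * 2.0, 0.5)         # intensities differ by class
ir = rng.normal(labels * -1.5, 0.5)
xy = rng.uniform(0, 1, (n, 2)) + labels[:, None] * 0.2
X = np.column_stack([t2, ir, xy])

knn = KNeighborsClassifier(n_neighbors=15).fit(X[:300], labels[:300])
proba = knn.predict_proba(X[300:])[:, 1]   # probabilistic segmentation
binary = proba > 0.5                       # threshold -> binary segmentation
acc = (binary == labels[300:]).mean()
print(acc)
```

Note that summing `proba` over voxels gives the probabilistic volume estimate the abstract found more accurate than counting binary labels.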

  9. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
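For context, the open-loop baseline that the paper improves on can be sketched as least-squares linear prediction followed by residual quantization. The signal, predictor order, and 3-bit uniform codebook below are arbitrary choices, and the joint coefficient/quantizer optimization that is the paper's contribution is not shown.

```python
import numpy as np

# Open-loop LPC of order p: choose coefficients a minimizing
# ||s[n] - sum_k a_k s[n-k]||^2 over the segment, then quantize the residual.
rng = np.random.default_rng(6)
n, p = 400, 4
t = np.arange(n)
s = np.sin(0.07 * t) + 0.3 * np.sin(0.19 * t) + rng.normal(0, 0.01, n)

# Lag matrix: column k holds s delayed by k+1 samples.
X = np.column_stack([s[p - k - 1:n - k - 1] for k in range(p)])
y = s[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ a

# 3-bit uniform quantization of the residual (a stand-in for the codebook).
levels = np.linspace(resid.min(), resid.max(), 8)
q = levels[np.argmin(np.abs(resid[:, None] - levels), axis=1)]
recon = X @ a + q
snr_db = 10 * np.log10(np.sum(y ** 2) / np.sum((y - recon) ** 2))
print(round(snr_db, 1))
```

The paper's point is that tuning `a` and the quantizer levels jointly, with quantization effects in the loop, beats this two-stage baseline.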

  10. Progressive multi-atlas label fusion by dictionary evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
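The single-layer baseline described above (represent the target patch over an atlas patch dictionary, then apply the coefficients to the atlas labels) can be sketched with non-negative least squares; the random dictionary, patch size, and NNLS choice are illustrative assumptions, and the paper's multi-layer dynamic dictionary is what would refine these weights further.

```python
import numpy as np
from scipy.optimize import nnls

# Atlas patch dictionary: columns are flattened 5x5 atlas patches, each with
# an associated anatomical label.
rng = np.random.default_rng(4)
D = rng.random((25, 10))                 # 10 atlas patches
atlas_labels = np.array([1] * 5 + [0] * 5)
target = D[:, :5].mean(axis=1)           # target resembles the label-1 patches

w, _ = nnls(D, target)                   # representation coefficients (image domain)
w /= w.sum()
fused = w @ atlas_labels                 # weighted vote in the label domain
print(fused > 0.5)
```

The gap the paper addresses is visible in this sketch: `w` is fit purely to patch appearance, with no guarantee it transfers well to the label domain.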

  11. Optimal trajectories for hypersonic launch vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas

    1992-01-01

    In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.

  12. Sizing Analysis for Aircraft Utilizing Hybrid-Electric Propulsion Systems

    DTIC Science & Technology

    2011-03-18

of the Air Force, Robert Gates, reports that since the beginning of the war "the Air Force has significantly expanded its ISR capability" and...aircraft. A popular source for aircraft designers has been Daniel P. Raymer's book Aircraft Design: A Conceptual Approach [17]. Raymer has presented a...more thought was needed to estimate takeoff weight. Using the fuel weight that burns during mission segments, Raymer defined fuel weight fractions

  13. Accuracy of DXA scanning of the thoracic spine: cadaveric studies comparing BMC, areal BMD and geometric estimates of volumetric BMD against ash weight and CT measures of bone volume.

    PubMed

    Sran, Meena M; Khan, Karim M; Keiver, Kathy; Chew, Jason B; McKay, Heather A; Oxland, Thomas R

    2005-12-01

Biomechanical studies of the thoracic spine often scan cadaveric segments by dual energy X-ray absorptiometry (DXA) to obtain measures of bone mass. Only one study has reported the accuracy of lateral scans of thoracic vertebral bodies. The accuracy of DXA scans of thoracic spine segments and of anterior-posterior (AP) thoracic scans has not been investigated. We have examined the accuracy of AP and lateral thoracic DXA scans by comparison with ash weight, the gold standard for measuring bone mineral content (BMC). We have also compared three methods of estimating volumetric bone mineral density (vBMD) against a novel standard: ash weight (g) divided by bone volume (cm3) as measured by computed tomography (CT). Twelve T5-T8 spine segments were scanned with DXA (AP and lateral) and CT. The T6 vertebrae were excised, the posterior elements removed, and the vertebral bodies ashed in a muffle furnace. We proposed a new method of estimating vBMD and compared it with two previously published methods. BMC values from lateral DXA scans displayed the strongest correlation with ash weight (r=0.99) and were on average 12.8% higher (p<0.001). As expected, BMC (AP or lateral) was more strongly correlated with ash weight than areal bone mineral density (aBMD; AP: r=0.54, or lateral: r=0.71) or estimated vBMD. Estimates of vBMD with any of the three methods were strongly and similarly correlated with volumetric BMD calculated by dividing ash weight by CT-derived volume. These data suggest that readily available DXA scanning is an appropriate surrogate measure for thoracic spine bone mineral and that the lateral scan might be the scan method of choice.

  14. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model; the same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal-variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] is given for several standard weight functions. The methodology allows us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet is explained using two relevant models. In the first, a weighted linear regression model is considered, where the weight function is chosen directly from a typical family. In the second, a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to obtain the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
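A brute-force stand-in for the applet's computation, restricted to a two-point design and a logistic-type weight function (both assumptions; the paper's Mathematica tool handles general weight functions and design spaces), might look like this:

```python
import numpy as np

# Logistic weight function lambda(x) = e^x / (1 + e^x)^2, one of the
# "typical family" choices for binary response models.
lam = lambda x: np.exp(x) / (1 + np.exp(x)) ** 2

def mv_criterion(x1, x2, p):
    """MV-criterion for the two-point design {(x1, p), (x2, 1-p)}: the larger
    of the intercept and slope variances, i.e. the max diagonal entry of
    M^{-1}, where M = sum_i p_i * lambda(x_i) * f(x_i) f(x_i)', f(x) = (1, x)'."""
    M = np.zeros((2, 2))
    for x, w in ((x1, p), (x2, 1 - p)):
        f = np.array([1.0, x])
        M += w * lam(x) * np.outer(f, f)
    if np.linalg.det(M) < 1e-12:        # singular design: both variances blow up
        return np.inf
    return np.diag(np.linalg.inv(M)).max()

# Grid search for the MV-optimal design on the interval [a, b] = [-2, 2].
xs = np.linspace(-2, 2, 41)
ps = np.linspace(0.05, 0.95, 19)
best = min((mv_criterion(x1, x2, p), x1, x2, p)
           for x1 in xs for x2 in xs if x2 > x1 for p in ps)
print(best)
```

The nondifferentiability the abstract mentions shows up here as the `max` over the two variances; the grid search sidesteps it at the cost of resolution.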

  15. Design of pilot studies to inform the construction of composite outcome measures.

    PubMed

    Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing

    2017-06-01

Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods have been proposed that calculate composite total scores as the weighted sum of the component measures, with weights chosen to maximize the signal-to-noise ratio of the resulting composite score. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to calculate optimal weights reliably. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate how large a pilot study sample is required to inform the calculation. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects aged 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 subjects are sufficient to meaningfully inform the weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of 300 subjects produced weights that achieved near-optimal statistical power, and reduced the required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable to a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures.
Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second-generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved going forward.
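The optimal-weight calculation itself is compact: for mean decline δ and covariance Σ of the components, weights w ∝ Σ⁻¹δ maximize the signal-to-noise ratio w'δ / √(w'Σw). The numbers below are hypothetical stand-ins for pilot-study estimates, not ADCS data.

```python
import numpy as np

# Hypothetical mean decline (signal) and covariance (noise) of three
# z-scored component measures, as would be estimated from a pilot study.
delta = np.array([0.30, 0.20, 0.10])
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])

def snr(w):
    """Signal-to-noise ratio of the composite w'x: mean change over SD."""
    return (w @ delta) / np.sqrt(w @ Sigma @ w)

w_opt = np.linalg.solve(Sigma, delta)   # w proportional to Sigma^{-1} delta
w_std = np.ones(3)                      # standard equal-weight composite
print(snr(w_opt), snr(w_std))
```

Because trial sample size scales inversely with the square of this ratio, even the modest SNR gain in this toy example translates into a meaningful sample-size reduction, which is the mechanism behind the abstract's "more than half" result.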

  16. KSC-08pd3565

    NASA Image and Video Library

    2008-11-06

    CAPE CANAVERAL, Fla. – Inside the Vehicle Assembly Building high bay 4 at NASA's Kennedy Space Center in Florida, these Ares I-X upper stage simulator segments have shed their protective blue shrink-wrapped covers used for shipping. The upper stage simulator will be used in the test flight identified as Ares I-X in 2009. The segments will simulate the mass and the outer mold line and will be more than 100 feet of the total vehicle height of 327 feet. The simulator comprises 11 segments that are approximately 18 feet in diameter. Most of the segments will be approximately 10 feet high, ranging in weight from 18,000 to 60,000 pounds, for a total of approximately 450,000 pounds. Photo credit: NASA/Troy Cryder

  17. KSC-08pd3564

    NASA Image and Video Library

    2008-11-06

    CAPE CANAVERAL, Fla. – Inside the Vehicle Assembly Building high bay 4 at NASA's Kennedy Space Center in Florida, these Ares I-X upper stage simulator segments have shed their protective blue shrink-wrapped covers used for shipping. The upper stage simulator will be used in the test flight identified as Ares I-X in 2009. The segments will simulate the mass and the outer mold line and will be more than 100 feet of the total vehicle height of 327 feet. The simulator comprises 11 segments that are approximately 18 feet in diameter. Most of the segments will be approximately 10 feet high, ranging in weight from 18,000 to 60,000 pounds, for a total of approximately 450,000 pounds. Photo credit: NASA/Troy Cryder

  18. KSC-08pd3570

    NASA Image and Video Library

    2008-11-06

    CAPE CANAVERAL, Fla. – Inside the Vehicle Assembly Building high bay 4 at NASA's Kennedy Space Center in Florida, these Ares I-X upper stage simulator segments have shed their protective blue shrink-wrapped covers used for shipping. The upper stage simulator will be used in the test flight identified as Ares I-X in 2009. The segments will simulate the mass and the outer mold line and will be more than 100 feet of the total vehicle height of 327 feet. The simulator comprises 11 segments that are approximately 18 feet in diameter. Most of the segments will be approximately 10 feet high, ranging in weight from 18,000 to 60,000 pounds, for a total of approximately 450,000 pounds. Photo credit: NASA/Troy Cryder

  19. Object-based delineation and classification of alluvial fans by application of mean-shift segmentation and support vector machines

    NASA Astrophysics Data System (ADS)

    Pipaud, Isabel; Lehmkuhl, Frank

    2017-09-01

In the field of geomorphology, automated extraction and classification of landforms is one of the most active research areas. Until the late 2000s, this task had primarily been tackled using pixel-based approaches. As these methods consider pixels and pixel neighborhoods as the sole basic entities for analysis, they cannot account for the irregular boundaries of real-world objects. Object-based analysis frameworks emerging from the field of remote sensing have been proposed as an alternative approach, and were successfully applied in case studies falling in the domains of both general and specific geomorphology. In this context, the a priori selection of scale parameters or bandwidths is crucial for the segmentation result, because inappropriate parametrization will result in either over-segmentation or insufficient segmentation. In this study, we describe a novel supervised method for delineation and classification of alluvial fans, and assess its applicability using an SRTM 1″ DEM scene depicting a section of the north-eastern Mongolian Altai, located in northwest Mongolia. The approach is premised on the application of mean-shift segmentation and the use of a one-class support vector machine (SVM) for classification. To consider variability in alluvial fan dimension and shape, segmentation is performed repeatedly for different weightings of the incorporated morphometric parameters as well as different segmentation bandwidths. The final classification layer is obtained by selecting, for each real-world object, the most appropriate segmentation result according to fuzzy membership values derived from the SVM classification. Our results show that mean-shift segmentation and SVM-based classification provide an effective framework for delineation and classification of a particular landform.
Variable bandwidths and terrain parameter weightings were identified as crucial for capturing intra-class variability and, in turn, for consistently high segmentation quality. Our analysis further reveals that incorporation of morphometric parameters quantifying specific morphological aspects of a landform is indispensable for developing an accurate classification scheme. Alluvial fans exhibiting accentuated composite morphologies were identified as a major challenge for automatic delineation, as they cannot be fully captured by a single segmentation run. There is, however, a high probability that this shortcoming can be overcome by enhancing the presented approach with a routine that merges fan sub-entities based on their spatial relationships.
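A toy end-to-end version of the delineate-then-classify idea can be assembled from scikit-learn's MeanShift and OneClassSVM; the synthetic DEM, the two per-segment descriptors, and the training samples below are all invented for illustration and are much simpler than the paper's morphometric parameters.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.svm import OneClassSVM

# Toy "DEM": one smooth bump (the landform of interest) on a noisy plain.
rng = np.random.default_rng(5)
yy, xx = np.mgrid[0:20, 0:20]
dem = np.exp(-((xx - 14) ** 2 + (yy - 14) ** 2) / 20.0) + rng.normal(0, 0.02, (20, 20))

# Delineation: mean-shift on (x, y, scaled elevation). The bandwidth acts as
# the scale parameter; the paper runs this step for several bandwidths and
# terrain-parameter weightings and keeps the best result per object.
feats = np.column_stack([xx.ravel(), yy.ravel(), 20 * dem.ravel()])
seg = MeanShift(bandwidth=6.0).fit_predict(feats)

# Classification: a one-class SVM trained on descriptors of known positives.
# The two descriptors (mean and max elevation per segment) and the training
# set are purely illustrative stand-ins for morphometric parameters.
descr = np.array([[dem.ravel()[seg == s].mean(), dem.ravel()[seg == s].max()]
                  for s in np.unique(seg)])
train = rng.normal([0.5, 0.9], 0.05, (30, 2))
pred = OneClassSVM(nu=0.1, gamma="scale").fit(train).predict(descr)
print(len(np.unique(seg)), pred)
```

A one-class SVM fits this problem because only positive examples (confirmed fans) are available for training; segments the model rejects are simply "not fan-like" rather than members of a second labeled class.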

  20. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, a combination approach is presented using both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its efficiency for wildfire segmentation. PMID:23878526
