Sample records for segmented regression model

  1. Locally-constrained Boundary Regression for Segmentation of Prostate and Rectum in the Planning CT Images

    PubMed Central

    Shao, Yeqin; Gao, Yaozong; Wang, Qian; Yang, Xin; Shen, Dinggang

    2015-01-01

    Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable organ (relative) position, and uncertain existence of bowel gas across different patients. Recently, regression forest was adopted for organ deformable segmentation on 2D medical images by training one landmark detector for each point on the shape model. However, it seems impractical for regression forest to guide 3D deformable segmentation as a landmark detector, due to the large number of vertices in the 3D shape model as well as the difficulty in building accurate 3D vertex correspondence for each landmark detector. In this paper, we propose a novel boundary detection method by exploiting the power of regression forest for prostate and rectum segmentation. The contributions of this paper are as follows: 1) we introduce regression forest as a local boundary regressor to vote the entire boundary of a target organ, which avoids training a large number of landmark detectors and building an accurate 3D vertex correspondence for each landmark detector; 2) an auto-context model is integrated with regression forest to improve the accuracy of the boundary regression; 3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation by integrating organ shape priors. Our method is evaluated on a planning CT image dataset with 70 images from 70 different patients. The experimental results show that our proposed boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentations. Compared with other state-of-the-art methods, our method also shows a competitive performance. PMID:26439938
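The boundary-voting idea described above can be sketched in a few lines: a hypothetical regressor returns noisy displacement-to-boundary predictions for sample points, and accumulating the resulting votes in a map recovers the boundary position. This is a synthetic 1D illustration, not the paper's 3D forest pipeline.

```python
import numpy as np

# Hedged sketch of boundary voting: each sample point predicts a
# displacement to the (here, known) boundary; predictions are noisy,
# but accumulating votes in a histogram recovers the boundary location.
rng = np.random.default_rng(0)

true_boundary = 50.0                      # 1D stand-in for an organ surface
samples = rng.uniform(30, 70, size=500)   # voxel positions near the boundary

# A trained regressor would predict displacement-to-boundary from local
# appearance; here we mimic its noisy output.
predicted_disp = (true_boundary - samples) + rng.normal(0, 2.0, size=samples.size)

# Each sample casts a vote at its predicted boundary position.
votes = samples + predicted_disp
vote_map, edges = np.histogram(votes, bins=80, range=(30, 70))
peak = np.argmax(vote_map)
estimate = 0.5 * (edges[peak] + edges[peak + 1])
```

The mode of the vote map is robust to individual bad predictions, which is the appeal of voting over fitting a single regression line to the votes.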

  2. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
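A minimal sketch of the segmented-regression model described above, on synthetic data. The intervention point, effect sizes, and noise level are illustrative assumptions, and a real analysis would also need to address autocorrelation of the errors:

```python
import numpy as np

# Segmented regression for an interrupted time series: the change in
# intercept (level) and slope at the intervention time t0 is estimated
# jointly with the pre-intervention level and trend.
rng = np.random.default_rng(1)

t = np.arange(48)                 # e.g. 48 monthly observations
t0 = 24                           # intervention point
post = (t >= t0).astype(float)

# Synthetic truth: baseline level 10, slope 0.5; the intervention adds
# a level change of 4 and a slope change of -0.3.
y = 10 + 0.5 * t + 4 * post - 0.3 * (t - t0) * post + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, time, level-change indicator, slope-change term.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

The coefficients on `post` and `(t - t0) * post` are the quantities used to test causal hypotheses about the intervention.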

  3. Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force

    NASA Astrophysics Data System (ADS)

    Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.

    2016-03-01

    In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role for simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point taking into account 3D wavelet-encoded contextual image features, which are aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits from the advantages associated with snakes and from the ones associated with machine learning-based regression alike. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).

  4. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.

  5. Accurate Segmentation of CT Male Pelvic Organs via Regression-based Deformable Models and Multi-task Random Forests

    PubMed Central

    Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z.; Chen, Ronald C.

    2016-01-01

    Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a nonlocal external force for each vertex of the deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification- or regression-based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531

  6. Estimates of Median Flows for Streams on the 1999 Kansas Surface Water Register

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

    The Kansas State Legislature, by enacting Kansas Statute KSA 82a-2001 et seq., mandated the criteria for determining which Kansas stream segments would be subject to classification by the State. One criterion for the selection as a classified stream segment is based on the statistic of median flow being equal to or greater than 1 cubic foot per second. As specified by KSA 82a-2001 et seq., median flows were determined from U.S. Geological Survey streamflow-gaging-station data by using the most recent 10 years of gaged data (KSA) for each streamflow-gaging station. Median flows also were determined by using gaged data from the entire period of record (all-available hydrology, AAH). Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating median flows for uncontrolled stream segments. The drainage area of the gaging stations on uncontrolled stream segments used in the regression analyses ranged from 2.06 to 12,004 square miles. A logarithmic transformation of the data was needed to develop the best linear relation for computing median flows. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. Tobit analyses of KSA data yielded a model standard error of prediction of 0.285 logarithmic units, and the best equations using Tobit analyses of AAH data had a model standard error of prediction of 0.250 logarithmic units. These regression equations and an interpolation procedure were used to compute median flows for the uncontrolled stream segments on the 1999 Kansas Surface Water Register. Measured median flows from gaging stations were incorporated into the regression-estimated median flows along the stream segments where available.
On uncontrolled segments, median flows were interpolated using gaged data weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled segments of Kansas streams, the median flow information was interpolated between gaging stations using only gaged data weighted by drainage area. Of the 2,232 total stream segments on the Kansas Surface Water Register, 34.5 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second when the KSA analysis was used. When the AAH analysis was used, 36.2 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second. This report supersedes U.S. Geological Survey Water-Resources Investigations Report 02-4292.
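The log-transformed regression described above can be sketched as follows. The data and coefficients are synthetic illustrations, the prediction is for a hypothetical 500-square-mile basin, and the study's Tobit handling of censored low flows is omitted from this ordinary least-squares sketch:

```python
import numpy as np

# Hedged sketch: regress log10(median flow) on log10(drainage area) and
# mean annual precipitation, then back-transform a prediction.
rng = np.random.default_rng(2)

n = 60
log_area = rng.uniform(1, 4, n)           # log10 of drainage area (sq mi)
precip = rng.uniform(15, 45, n)           # mean annual precipitation (in)

# Synthetic "truth": median flow grows with area and precipitation.
log_flow = -2.0 + 0.9 * log_area + 0.03 * precip + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), log_area, precip])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)

# Predicted median flow (cfs) for a hypothetical 500-sq-mi basin, 30 in/yr.
pred = 10 ** (beta @ [1.0, np.log10(500), 30.0])
```

The back-transform from log units is why the report's prediction errors are stated in logarithmic units rather than cubic feet per second.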

  7. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
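As a hedged illustration of the "geometric model" alternative mentioned above (a much simpler stand-in for the paper's convex-hull pipeline), a limb segment can be modelled as a conical frustum with an assumed uniform tissue density:

```python
import math

# Model a limb segment as a conical frustum defined by its two end radii
# and its length; derive mass and centre of mass assuming uniform density.
# All numbers below are illustrative, not measured values.
def frustum_parameters(r1, r2, length, density=1050.0):
    """r1/r2 proximal/distal radii (m), length (m), density (kg/m^3)."""
    volume = math.pi * length / 3.0 * (r1**2 + r1 * r2 + r2**2)
    mass = density * volume
    # Centre of mass along the axis, measured from the proximal face.
    com = length / 4.0 * (r1**2 + 2 * r1 * r2 + 3 * r2**2) \
          / (r1**2 + r1 * r2 + r2**2)
    return mass, com

mass, com = frustum_parameters(0.06, 0.04, 0.40)   # a thigh-like segment
```

Two sanity checks: with equal radii the frustum degenerates to a cylinder whose centre of mass sits at mid-length, and with a zero distal radius it becomes a cone with its centre of mass a quarter of the length from the base.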

  8. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.

  9. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    PubMed

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free-of-charge software. © 2017, National Ground Water Association.
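The horizontal-translation step at the heart of the method can be sketched as follows. The recession values are invented for illustration, and the real algorithm applies this shift repeatedly across many segments to assemble the master recession curve:

```python
import numpy as np

# Hedged sketch: translate a succeeding recession segment horizontally so
# that its vertex (highest value) lands on the piecewise-linear curve
# through the preceding segment's measurement points.
def horizontal_shift(prev_t, prev_q, next_t0, next_q0):
    """Return the time shift for the next segment (vertex at t0, q0)."""
    # Time on the preceding recession where discharge equals the vertex
    # value, found by linear interpolation (prev_q must be decreasing,
    # so both arrays are reversed to satisfy np.interp's ordering).
    t_on_prev = np.interp(next_q0, prev_q[::-1], prev_t[::-1])
    return t_on_prev - next_t0

prev_t = np.array([0.0, 1.0, 2.0, 3.0])
prev_q = np.array([8.0, 6.0, 4.5, 3.5])     # decreasing recession limb
shift = horizontal_shift(prev_t, prev_q, next_t0=0.0, next_q0=5.0)
```

Here the next segment's vertex value of 5.0 falls between the preceding segment's measurements at times 1 and 2, so the segment is shifted to start on that connection line.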

  10. An Intelligent Decision Support System for Workforce Forecast

    DTIC Science & Technology

    2011-01-01

    …ARIMA) model to forecast the demand for construction skills in Hong Kong. This model was based… Decision Trees; ARIMA; Rule-Based Forecasting; Segmentation Forecasting; Regression Analysis; Simulation Modeling; Input-Output Models; LP and NLP; Markovian… data; • when results are needed as a set of easily interpretable rules. 4.1.4 ARIMA: auto-regressive, integrated, moving-average (ARIMA) models

  11. Estimation of stature from the foot and its segments in a sub-adult female population of North India

    PubMed Central

    2011-01-01

    Background Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology casework. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. Methods The sample for the study comprised 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. Results The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of the foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. Conclusions The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study.
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults. PMID:22104433
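A minimal sketch of stature estimation by linear regression on foot length. The data and coefficients below are synthetic illustrations, not the study's published models, which are population-specific:

```python
import numpy as np

# Hedged sketch: fit stature ~ foot length on synthetic measurements,
# then estimate stature for a new foot-length value.
rng = np.random.default_rng(3)

foot_length = rng.uniform(21.0, 26.0, 80)                     # cm
stature = 60.0 + 4.0 * foot_length + rng.normal(0, 2.5, 80)   # cm

slope, intercept = np.polyfit(foot_length, stature, 1)
estimate = intercept + slope * 24.0   # estimated stature for a 24 cm foot
```

A multiple regression model would simply extend the design to several foot measurements at once, which is why the study reports that stepwise multiple models estimate stature more accurately than single-variable ones.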

  12. Estimation of stature from the foot and its segments in a sub-adult female population of North India.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam

    2011-11-21

    Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology casework. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. The sample for the study comprised 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of the foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study.
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults.

  13. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
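The segments-plus-local-linear-models idea can be illustrated with fixed-length segments and per-segment AR(1) fits on synthetic data. The actual SOMAR network learns the segmentation with a self-organising map rather than splitting at fixed points, so this is only the underlying representation, not the method:

```python
import numpy as np

# Hedged sketch: a non-stationary series built from two AR(1) regimes is
# represented by a set of local AR(1) models, one per fixed-length segment.
def fit_ar1(x):
    """Least-squares AR(1) coefficient for one segment (no intercept)."""
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

rng = np.random.default_rng(4)

def simulate(phi, n):
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0, 1.0)
    return x

# Two regimes with different dynamics, concatenated.
series = np.concatenate([simulate(0.9, 500), simulate(-0.5, 500)])
coefs = [fit_ar1(seg) for seg in np.split(series, 4)]   # 4 local models
```

The fitted local coefficients recover the regime change: the early segments show strong positive autocorrelation and the late segments negative autocorrelation.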

  14. Capturing the sensitivity of land-use regression models to short-term mobile monitoring campaigns using air pollution micro-sensors.

    PubMed

    Minet, L; Gehr, R; Hatzopoulou, M

    2017-11-01

    The development of reliable measures of exposure to traffic-related air pollution is crucial for the evaluation of the health effects of transportation. Land-use regression (LUR) techniques have been widely used for the development of exposure surfaces; however, these surfaces are often highly sensitive to the data collected. With the rise of inexpensive air pollution sensors paired with GPS devices, we witness the emergence of mobile data collection protocols. For the same urban area, can we achieve a 'universal' model irrespective of the number of locations and sampling visits? Can we trade the temporal representation of fixed-point sampling for a larger spatial extent afforded by mobile monitoring? This study highlights the challenges of short-term mobile sampling campaigns in terms of the resulting exposure surfaces. A mobile monitoring campaign was conducted in 2015 in Montreal; nitrogen dioxide (NO2) levels at 1395 road segments were measured under repeated visits. We developed LUR models based on sub-segments, categorized in terms of the number of visits per road segment. We observe that LUR models were highly sensitive to the number of road segments and to the number of visits per road segment. The associated exposure surfaces were also highly dissimilar. Copyright © 2017 Elsevier Ltd. All rights reserved.
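The sensitivity question above can be illustrated on synthetic data: with few visits per segment, the per-segment averages are noisier and the fitted LUR coefficient is less stable. This toy shows variance inflation only; real campaigns also face spatial coverage and temporal-bias issues that averaging cannot fix:

```python
import numpy as np

# Hedged sketch of a land-use regression fitted to mobile-campaign data:
# each segment's NO2 depends on one land-use predictor, but the campaign
# only observes the mean of a few noisy visits per segment.
rng = np.random.default_rng(5)

def fit_lur(n_visits, n_segments=300):
    traffic = rng.uniform(0, 1, n_segments)        # land-use predictor
    true_no2 = 15 + 20 * traffic                   # ppb, synthetic truth
    visits = true_no2[:, None] + rng.normal(0, 10, (n_segments, n_visits))
    measured = visits.mean(axis=1)                 # per-segment average
    slope, _ = np.polyfit(traffic, measured, 1)
    return slope

few = fit_lur(n_visits=2)     # short campaign: noisy coefficient
many = fit_lur(n_visits=20)   # long campaign: stable coefficient
```

Averaging more visits shrinks the measurement noise on each segment's value, tightening the sampling distribution of the LUR coefficient around the true value of 20.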

  15. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

    The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82,064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and to a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for direct additive genetic effect and animal permanent environmental effect and two knots for maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight.
This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained under range conditions. There is limited modification of the growth curve of Nellore cattle with respect to the aim of selecting for rapid growth at young ages while maintaining constant adult weight.
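The B-spline machinery underlying these random regression models can be sketched via the Cox-de Boor recursion. The knot vector below defines an illustrative quadratic basis with three segments, not the paper's configuration; the key property checked is that the basis functions sum to one at any age inside the range (partition of unity):

```python
# Hedged sketch of B-spline basis evaluation (Cox-de Boor recursion).
def bspline_basis(i, k, t, knots):
    """Value of basis function i of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Quadratic (degree 2) basis, clamped ends, interior knots at 1 and 2,
# giving three local segments over the age range [0, 3).
knots = [0, 0, 0, 1, 2, 3, 3, 3]
n_basis = len(knots) - 2 - 1        # number of basis functions = 5
total = sum(bspline_basis(i, 2, 1.5, knots) for i in range(n_basis))
```

Because each basis function is nonzero over only a few adjacent segments, a random regression on B-splines lets the genetic covariance structure vary locally with age, unlike global Legendre polynomials.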

  16. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we used various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. Then after shape features are computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from an ensemble mix of weak segmentors. For our purpose, optimal segmentors are those in the ensemble mix which contribute the most to the overall classification rather than the ones that produced high-precision segmentations. To measure the segmentors' contribution, we examined weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The result showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.

  17. Optimisation of the formulation of a bubble bath by a chemometric approach: market segmentation and optimisation.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence, among four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amount of four components (the amount of primary surfactant, the essence, the hydratant and the colouring agent) of the bubble bath was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor product. The final target, i.e. the optimisation of the formulation for each segment, was obtained by the calculation of regression models relating the subjective evaluations given by the Panel and the compositions of the samples. The regression models made it possible to identify the best formulations for the two segments of the market.
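The PCA step used to reveal the two market segments can be sketched on synthetic panel scores; the group sizes, score scale, and preference pattern below are invented for illustration:

```python
import numpy as np

# Hedged sketch: 20 consumers score 8 design samples. Half prefer "rich"
# formulations (high scores on the first four samples), half "poor" ones.
# Projecting the centred scores onto the first principal component
# separates the two market segments.
rng = np.random.default_rng(6)

rich = rng.normal([7, 7, 7, 7, 3, 3, 3, 3], 0.8, (10, 8))
poor = rng.normal([3, 3, 3, 3, 7, 7, 7, 7], 0.8, (10, 8))
scores = np.vstack([rich, poor])

centered = scores - scores.mean(axis=0)
# Principal components from the eigendecomposition of the covariance;
# eigh returns eigenvalues in ascending order, so the last column is PC1.
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
pc1 = centered @ eigvecs[:, -1]       # consumer scores on the first PC
```

The two consumer groups land on opposite sides of zero along PC1 (up to an arbitrary sign), which is the segmentation the study then optimised separately.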

  18. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202

  19. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
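    The fixed cubic growth trend on orthogonal polynomials of age can be sketched with NumPy's Legendre utilities; the weights below are simulated, and the genetic and permanent environmental random effects of the full model are omitted.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
# Simulated weight records (kg) from birth to 8 years of age.
ages = np.linspace(0, 8, 30)
weights = 180 + 120 * np.log1p(ages) + rng.normal(0, 5, ages.size)

# Legendre polynomials live on [-1, 1], so age is standardised first; a cubic
# regression on this orthogonal basis models the average growth trend.
a = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1
coef = legendre.legfit(a, weights, deg=3)
fitted = legendre.legval(a, coef)

print(float(np.corrcoef(fitted, weights)[0, 1]) > 0.98)
```

    Orthogonality of the basis is what keeps the coefficient estimates numerically stable at higher orders of fit.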

  20. A spline-based regression parameter set for creating customized DARTEL MRI brain templates from infancy to old age.

    PubMed

    Wilke, Marko

    2018-02-01

    This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.

  1. Automatic segmentation and classification of mycobacterium tuberculosis with conventional light microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui

    2015-12-01

    This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis under conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter, whose scale is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and contamination elimination. Second, the shapes of the bacillus objects are characterized by Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection-kernel support vector machine classifiers on the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best, with an accuracy of 91.68%.
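    The final classification step, a logistic regression on shape features, can be sketched as follows; the feature distributions and class structure are invented stand-ins for the Hu-moment and shape descriptors extracted from real micrographs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Invented shape features (compactness, eccentricity, roughness) for bacillus
# vs non-bacillus objects; real values would come from the segmented regions.
n = 200
bacilli = np.column_stack([rng.normal(0.3, 0.05, n),   # elongated: low compactness
                           rng.normal(0.9, 0.03, n),   # high eccentricity
                           rng.normal(0.1, 0.03, n)])  # smooth boundary
debris = np.column_stack([rng.normal(0.7, 0.10, n),
                          rng.normal(0.5, 0.10, n),
                          rng.normal(0.4, 0.10, n)])
X = np.vstack([bacilli, debris])
y = np.array([1] * n + [0] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(Xtr, ytr)
print(clf.score(Xte, yte) > 0.9)
```

    With well-separated shape features, even this linear classifier achieves high held-out accuracy, consistent with logistic regression performing best in the paper's comparison.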

  2. Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye

    PubMed Central

    Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael

    2017-01-01

    Purpose To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. Accuracy of each model was examined through comparison of predicted 50-year-old equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized model revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition combined with core GCL thickness data provide a framework to analyze GCL loss in ocular disease. PMID:28632847
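    A segmented (two-phase) linear fit of the kind used for the GCL-age trajectories can be sketched with `scipy.optimize.curve_fit`; the thickness values, breakpoint, and slopes below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def segmented(age, breakpoint, intercept, slope1, slope2):
    """Two-segment linear model, continuous at the breakpoint."""
    return np.where(age <= breakpoint,
                    intercept + slope1 * age,
                    intercept + slope1 * breakpoint + slope2 * (age - breakpoint))

rng = np.random.default_rng(3)
# Simulated GCL thickness (um): stable until ~45 years, then a linear decline.
age = np.linspace(20, 85, 120)
thickness = segmented(age, 45, 50, 0.0, -0.25) + rng.normal(0, 0.8, age.size)

popt, _ = curve_fit(segmented, age, thickness, p0=[50, 50, 0, -0.2])
print(abs(popt[0] - 45) < 5)  # fitted breakpoint should land near the true 45 y
```

    Fitting the breakpoint jointly with the slopes, rather than assuming a uniform linear decline, is what allows the model to capture a stable plateau followed by age-related loss.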

  3. Adjustments to de Leva-anthropometric regression data for the changes in body proportions in elderly humans.

    PubMed

    Ho Hoang, Khai-Long; Mombaur, Katja

    2015-10-15

    Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body as a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter set exists for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used to realistically simulate the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the age-related changes in the body proportions of male and female humans. Additional adjustments are made to the reference points of the parameters for the upper body segments, as these are chosen in a more practicable way for creating a multi-body model in a chain structure with the pelvis representing the most proximal segment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. 3D Multi-segment foot kinematics in children: A developmental study in typically developing boys.

    PubMed

    Deschamps, Kevin; Staes, Filip; Peerlinck, Kathelijne; Van Geet, Christel; Hermans, Cedric; Matricali, Giovanni Arnoldo; Lobet, Sebastien

    2017-02-01

    The relationship between age and the 3D rotations captured by multi-segment foot models has not previously been quantified. The purpose of this study was therefore to investigate the relationship between age and multi-segment foot kinematics in a cross-sectional database. Barefoot multi-segment foot kinematics of thirty-two typically developing boys, aged 6-20 years, were captured with the Rizzoli Multi-segment Foot Model. One-dimensional statistical parametric mapping linear regression was used to examine the relationship between age and 3D inter-segment rotations of the dominant leg during the full gait cycle. Age was significantly correlated with sagittal plane kinematics of the midfoot and the calcaneus-metatarsus inter-segment angle (p<0.0125). Age was also correlated with the transverse plane kinematics of the calcaneus-metatarsus angle (p<0.0001). Gait labs should consider age-related differences and variability if optimal decision making is pursued. It remains unclear whether this holds for all foot models; however, the current study highlights that it is of particular relevance for foot models that incorporate a separate midfoot segment. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Evaluating Spatial Variability in Sediment and Phosphorus Concentration-Discharge Relationships Using Bayesian Inference and Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.

    2017-12-01

    Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge, which has previously been used as the default breakpoint in segmented regression models to discern differences between pre- and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics as well as Bayesian regression intercepts and slopes. A SOM defined two clusters of high-flux basins: one in which PP flux was predominantly episodic and hydrologically driven, and another in which sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at pre-threshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
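    The breakpoint estimation can be caricatured with a crude NumPy stand-in for the full Bayesian machinery: profile the residual sum of squares of a two-segment log-log regression over candidate breakpoints (segments fitted independently, continuity not enforced). All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic log-discharge / log-concentration data with a threshold at logq = 1.0:
# a flat export regime below it, a steep hydrologically driven one above it.
logq = np.sort(rng.uniform(-2, 2, 150))
logc = np.where(logq < 1.0, 0.2 * logq, 0.2 + 1.5 * (logq - 1.0))
logc = logc + rng.normal(0, 0.1, logq.size)

def rss_two_segments(bp):
    """Residual sum of squares of two independent OLS lines split at bp."""
    total = 0.0
    for mask in (logq < bp, logq >= bp):
        A = np.column_stack([np.ones(mask.sum()), logq[mask]])
        resid = logc[mask] - A @ np.linalg.lstsq(A, logc[mask], rcond=None)[0]
        total += float(resid @ resid)
    return total

# Profile the fit over candidate breakpoints; a discrete Bayesian treatment
# with a flat prior would weight each candidate by its marginal likelihood.
candidates = np.linspace(-1.5, 1.8, 67)
best = candidates[np.argmin([rss_two_segments(b) for b in candidates])]
print(abs(best - 1.0) < 0.2)
```

    Letting the data select the breakpoint, rather than fixing it at the median discharge, is the point the abstract makes about the range of identified threshold positions.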

  6. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions, considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials of age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using two to four B-spline segments, or by Legendre polynomials with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models than for RRM at the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages was greater for RRM than for multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in the genetic evaluation of breeding programs.
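    A least-squares quadratic B-spline with four segments, the structure the abstract identifies as most parsimonious, can be fitted with `scipy.interpolate.make_lsq_spline`; the quail growth curve below is simulated, and this sketches only the fixed mean trajectory, not the random-effects covariance functions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(5)
# Simulated quail body-weight trajectory (g) from hatch to 35 days.
days = np.linspace(0, 35, 36)
weight = 8 + 250 / (1 + np.exp(-(days - 18) / 5)) + rng.normal(0, 3, days.size)

# Quadratic (degree-2) B-spline with 4 equal segments; the full knot vector
# repeats each boundary knot k+1 times, as make_lsq_spline expects.
k, n_segments = 2, 4
interior = np.linspace(0, 35, n_segments + 1)[1:-1]
knots = np.concatenate([[0.0] * (k + 1), interior, [35.0] * (k + 1)])

spline = make_lsq_spline(days, weight, t=knots, k=k)
fitted = spline(days)
print(float(np.corrcoef(fitted, weight)[0, 1]) > 0.98)
```

    Because each B-spline basis function is locally supported, adding segments refines the fit locally, which is one reason B-splines can avoid the residual-variance artifacts the abstract reports for global Legendre polynomials.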

  7. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms, thereby simplifying subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution of each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMMs provide better classification, in terms of Bayes risk and spatial homogeneity of the classified objects, than several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMMs is similar to that of causal HMMs.
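    Setting aside the hidden Markov spatial coupling, the supervised Gauss-mixture class models and the pointwise MAP decision can be sketched as follows; the two-class "texture" features are synthetic, and one class is made bimodal to show why a mixture (rather than a single Gaussian) is fitted per class.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Synthetic 2-D texture features for two classes; the first class is
# deliberately bimodal, so a single Gaussian would model it poorly.
urban = np.vstack([rng.normal([0.0, 0.0], 0.5, (150, 2)),
                   rng.normal([3.0, 0.0], 0.5, (150, 2))])
field = rng.normal([1.5, 3.0], 0.7, (300, 2))

# One Gauss mixture per class, fitted in a supervised fashion; with equal
# priors, MAP classification picks the class of higher likelihood.
gmm_urban = GaussianMixture(n_components=2, random_state=0).fit(urban)
gmm_field = GaussianMixture(n_components=1, random_state=0).fit(field)

test = np.vstack([urban, field])
labels = (gmm_field.score_samples(test) > gmm_urban.score_samples(test)).astype(int)
truth = np.array([0] * 300 + [1] * 300)
print(float((labels == truth).mean()) > 0.9)
```

    The full HMGMM additionally couples neighbouring pixels through hidden Markov transition probabilities, which is what yields the spatial homogeneity the abstract reports.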

  8. Beef quality grading using machine vision

    NASA Astrophysics Data System (ADS)

    Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha

    2000-12-01

    A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.

  9. Digital data used to relate nutrient inputs to water quality in the Chesapeake Bay watershed

    USGS Publications Warehouse

    Brakebill, John W.; Preston, Stephen D.

    1999-01-01

    Digital data sets were compiled by the U.S. Geological Survey (USGS) and used as input for a collection of Spatially Referenced Regressions On Watershed attributes for the Chesapeake Bay region. These regressions relate streamwater loads to nutrient sources and the factors that affect the transport of these nutrients throughout the watershed. A digital segmented network based on watershed boundaries serves as the primary foundation for spatially referencing total nitrogen and total phosphorus source and land-surface characteristic data sets within a Geographic Information System. Digital data sets of atmospheric wet deposition of nitrate, point-source discharge locations, land cover, and agricultural sources such as fertilizer and manure were created and compiled from numerous sources and represent nitrogen and phosphorus inputs. Land-surface characteristics representing factors that affect the transport of nutrients include land use, land cover, average annual precipitation and temperature, slope, and soil permeability. Nutrient input and land-surface characteristic data sets merged with the segmented watershed network provide the spatial detail, by watershed segment, required by the models. Nutrient stream loads were estimated for total nitrogen, total phosphorus, nitrate/nitrite, ammonium, phosphate, and total suspended solids at as many as 109 sites within the Chesapeake Bay watershed. The total nitrogen and total phosphorus load estimates are the dependent variables for the regressions and were used for model calibration. Other nutrient-load estimates may be used for calibration in future applications of the models.

  10. Interrupted time series regression for the evaluation of public health interventions: a tutorial.

    PubMed

    Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio

    2017-02-01

    Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.
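    The main segmented regression model of an ITS analysis can be sketched as an ordinary least-squares fit of the standard four-term design (intercept, underlying trend, level change, trend change); the monthly series, intervention point, and effect sizes below are invented for illustration, and the tutorial's refinements (over-dispersion, autocorrelation, seasonality) are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical monthly outcome over 72 months; an intervention at month 36
# produces an immediate level drop of 8 units and a trend change of -0.4/month.
t = np.arange(72)
post = (t >= 36).astype(float)
y = 100 + 0.5 * t - 8 * post - 0.4 * post * (t - 36) + rng.normal(0, 1.5, t.size)

# Standard ITS design matrix: intercept, pre-existing trend, level change at
# the intervention, and change in trend after it.
X = np.column_stack([np.ones(t.size), t, post, post * (t - 36)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, trend_change = beta[2], beta[3]

print(level_change < -5 and trend_change < -0.2)
```

    Specifying which of the level-change and trend-change terms to include is exactly the a priori impact model the tutorial urges analysts to propose before fitting.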

  11. Interrupted time series regression for the evaluation of public health interventions: a tutorial

    PubMed Central

    Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio

    2017-01-01

    Abstract Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design. PMID:27283160

  12. Contribution of calcaneal and leg segment rotations to ankle joint dorsiflexion in a weight-bearing task.

    PubMed

    Chizewski, Michael G; Chiu, Loren Z F

    2012-05-01

    Joint angle is the relative rotation between two segments where one is a reference and assumed to be non-moving. However, rotation of the reference segment will influence the system's spatial orientation and joint angle. The purpose of this investigation was to determine the contribution of leg and calcaneal rotations to ankle rotation in a weight-bearing task. Forty-eight individuals performed partial squats recorded using a 3D motion capture system. Markers on the calcaneus and leg were used to model leg and calcaneal segment, and ankle joint rotations. Multiple linear regression was used to determine the contribution of leg and calcaneal segment rotations to ankle joint dorsiflexion. Regression models for left (R² = 0.97) and right (R² = 0.97) ankle dorsiflexion were significant. Sagittal plane leg rotation had a positive influence (left: β = 1.411; right: β = 1.418) while sagittal plane calcaneal rotation had a negative influence (left: β = -0.573; right: β = -0.650) on ankle dorsiflexion. Sagittal plane rotations of the leg and calcaneus were positively correlated (left: r = 0.84, P < 0.001; right: r = 0.80, P < 0.001). During a partial squat, the calcaneus rotates forward. Simultaneous forward calcaneal rotation with ankle dorsiflexion reduces total ankle dorsiflexion angle. Rear foot posture is reoriented during a partial squat, allowing greater leg rotation in the sagittal plane. Segment rotations may provide greater insight into movement mechanics that cannot be explained via joint rotations alone. Copyright © 2012 Elsevier B.V. All rights reserved.
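    The standardized (β) coefficients reported above can be reproduced in miniature by regressing a standardized outcome on standardized predictors; the rotation data below are simulated with signs matching the abstract's findings, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(10)
# Simulated partial-squat data for 48 individuals: ankle dorsiflexion driven
# positively by sagittal leg rotation and negatively by calcaneal rotation,
# with the two segment rotations themselves positively correlated.
n = 48
leg = rng.normal(20, 5, n)
calc = 0.8 * leg + rng.normal(0, 2, n)
dorsi = 1.4 * leg - 0.6 * calc + rng.normal(0, 1, n)

# Standardising the outcome and predictors makes the fitted slopes the beta
# weights quoted in the abstract.
z = lambda v: (v - v.mean()) / v.std()
X = np.column_stack([np.ones(n), z(leg), z(calc)])
beta, *_ = np.linalg.lstsq(X, z(dorsi), rcond=None)

print(beta[1] > 0 and beta[2] < 0)
```

    Even with the two predictors strongly correlated, the regression separates their opposing contributions, which is the abstract's central observation.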

  13. The effects of segmentation algorithms on the measurement of 18F-FDG PET texture parameters in non-small cell lung cancer.

    PubMed

    Bashir, Usman; Azad, Gurdip; Siddique, Muhammad Musib; Dhillon, Saana; Patel, Nikheel; Bassett, Paul; Landau, David; Goh, Vicky; Cook, Gary

    2017-12-01

    Measures of tumour heterogeneity derived from 18F-fluoro-2-deoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans are increasingly reported as potential biomarkers of non-small cell lung cancer (NSCLC) for classification and prognostication. Several segmentation algorithms have been used to delineate tumours, but their effects on the reproducibility and predictive and prognostic capability of derived parameters have not been evaluated. The purpose of our study was to retrospectively compare various segmentation algorithms in terms of inter-observer reproducibility and prognostic capability of texture parameters derived from NSCLC 18F-FDG PET/CT images. Fifty-three NSCLC patients (mean age 65.8 years; 31 males) underwent pre-chemoradiotherapy 18F-FDG PET/CT scans. Three readers segmented tumours using freehand (FH), 40% of maximum intensity threshold (40P), and fuzzy locally adaptive Bayesian (FLAB) algorithms. The intraclass correlation coefficient (ICC) was used to measure the inter-observer variability of the texture features derived by the three segmentation algorithms. Univariate Cox regression was applied to 12 commonly reported texture features to predict overall survival (OS) for each segmentation algorithm. Model quality was compared across segmentation algorithms using the Akaike information criterion (AIC). 40P was the most reproducible algorithm (median ICC 0.9; interquartile range [IQR] 0.85-0.92) compared with FLAB (median ICC 0.83; IQR 0.77-0.86) and FH (median ICC 0.77; IQR 0.7-0.85). On univariate Cox regression analysis, 40P found 2 of 12 variables, first-order entropy and grey-level co-occurrence matrix (GLCM) entropy, to be significantly associated with OS; FH and FLAB found one, first-order entropy. For each tested variable, survival models for all three segmentation algorithms were of similar quality, exhibiting comparable AIC values with overlapping 95% CIs. Compared with both FLAB and FH, segmentation with 40P yields superior inter-observer reproducibility of texture features. Survival models generated by all three segmentation algorithms are of at least equivalent utility. Our findings suggest that a segmentation algorithm using a 40% of maximum threshold is acceptable for texture analysis of 18F-FDG PET in NSCLC.

  14. Segmentation of optic disc and optic cup in retinal fundus images using shape regression.

    PubMed

    Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil

    2016-08-01

    Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using the circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85 respectively. A comparative study shows that the proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
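    The cascade idea, each stage using gradient boosted trees to regress a correction to the current shape estimate, can be caricatured in one dimension; the single "radius" shape parameter, the synthetic features, and the three-stage cascade are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
# One-dimensional caricature of a cascade: the "shape" is a single cup radius,
# and the features are noisy functions of the true radius (standing in for
# image measurements sampled around the current shape estimate).
n = 400
true_r = rng.uniform(20, 40, n)
features = np.column_stack([true_r + rng.normal(0, 1.0, n),
                            0.5 * true_r + rng.normal(0, 1.5, n)])

estimate = np.full(n, 30.0)  # common initial shape: the mean radius
for stage in range(3):
    X = np.column_stack([features, estimate])  # features plus current shape
    reg = GradientBoostingRegressor(n_estimators=50, max_depth=2, random_state=0)
    reg.fit(X, true_r - estimate)              # each stage learns a correction
    estimate = estimate + reg.predict(X)

print(float(np.mean(np.abs(estimate - true_r))) < 2.0)
```

    Starting from a rough initial shape and repeatedly regressing residual corrections is the structural point; the real method does this for full cup and disc contours with image-indexed features.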

  15. Evaluation of energy consumption during aerobic sewage sludge treatment in dairy wastewater treatment plant.

    PubMed

    Dąbrowski, Wojciech; Żyłka, Radosław; Malinowski, Paweł

    2017-02-01

    The subject of the research, conducted in an operating dairy wastewater treatment plant (WWTP), was to examine electric energy consumption during sewage sludge treatment. The excess sewage sludge was aerobically stabilized and dewatered with a screw press. Organic matter varied from 48% to 56% in sludge after stabilization and dewatering, showing that the sludge was properly stabilized and could be applied as a fertilizer. Measurement factors for electric energy consumption for mechanically dewatered sewage sludge were determined, ranging between 0.94 and 1.5 kWh/m³ with an average value of 1.17 kWh/m³. The shares of the devices used for sludge dewatering and aerobic stabilization in the total energy consumption of the plant were also established, at 3% and 25% respectively. A model of energy consumption during sewage sludge treatment was estimated from the experimental data. Two models were applied: linear regression for the dewatering process and segmented linear regression for aerobic stabilization. The segmented linear regression model was also applied to total energy consumption during sewage sludge treatment in the examined dairy WWTP. The research constitutes an introduction to further studies on defining a mathematical model to optimize electric energy consumption by dairy WWTPs. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Fish habitat regression under water scarcity scenarios in the Douro River basin

    NASA Astrophysics Data System (ADS)

    Segurado, Pedro; Jauch, Eduardo; Neves, Ramiro; Ferreira, Teresa

    2015-04-01

    Climate change will predictably alter hydrological patterns and processes at the catchment scale, with impacts on habitat conditions for fish. The main goals of this study are to identify the stream reaches that will undergo the most pronounced flow reduction under different climate change scenarios and to assess which fish species will be most affected by the consequent regression of suitable habitats. The interplay between changes in flow and temperature and the presence of transversal artificial obstacles (dams and weirs) is analysed. The results will contribute to river management and impact mitigation actions under climate change. This study was carried out in the Tâmega catchment of the Douro basin. A set of 29 hydrological, climatic, and hydrogeomorphological variables was modelled using a water modelling system (MOHID), based on meteorological data recorded monthly between 2008 and 2014. The same variables were modelled considering future climate change scenarios. The resulting variables were used in empirical habitat models of a set of key species (brown trout Salmo trutta fario, barbel Barbus bocagei, and nase Pseudochondrostoma duriense) using boosted regression trees. The stream segments between tributaries were used as spatial sampling units. Models were developed for the whole Douro basin using 401 fish sampling sites, although the modelled probabilities of species occurrence for each stream segment were predicted only for the Tâmega catchment. These probabilities of occurrence were used to classify stream segments into suitable and unsuitable habitat for each fish species under the future climate change scenario. The stream reaches predicted to undergo the longest flow interruptions were identified and crossed with the resulting predictive maps of habitat suitability to compute the total area of habitat loss per species. Among the target species, the brown trout was predicted to be the most sensitive to habitat regression due to the interplay of flow reduction, temperature increase and transversal barriers. This species is therefore a good indicator of climate change impacts in rivers, and we recommend using it as a target of monitoring programs implemented in the context of climate change adaptation strategies.

  17. Dynamic Parameter Identification of Subject-Specific Body Segment Parameters Using Robotics Formalism: Case Study Head Complex.

    PubMed

    Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente

    2016-05-01

    Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analyses based on biomechanical models, which is of paramount importance in fields such as sports activities or impact crash tests. Early approaches to BSIP identification relied on experiments conducted on cadavers or on imaging techniques applied to living subjects. Recent approaches rely on inverse dynamic modeling. However, most approaches focus on the entire body, and verification of BSIP for the dynamic analysis of a distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamic identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results with parameters obtained from a regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed using robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamic equations, allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamic modeling of the head complex. The findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230) for dynamic modeling of body segments.
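    The static identification idea above, estimating mass and COG before the inertia terms, can be illustrated with a minimal static-equilibrium sketch: a measured vertical reaction force gives the mass, and the measured moment about the sensor gives the COG offset. The numbers are hypothetical and this is not the paper's full robotics formulation.

```python
# Minimal sketch of a static identification step: under static
# equilibrium, mass follows from the vertical reaction force
# (m = F / g) and the centre-of-gravity offset from the moment about
# the sensor axis (r = M / F). Illustrative values only.

G = 9.81  # gravitational acceleration, m/s^2

def static_identify(vertical_force_n, moment_nm):
    mass = vertical_force_n / G                 # kg
    cog_offset = moment_nm / vertical_force_n   # metres from sensor axis
    return mass, cog_offset

mass, cog = static_identify(vertical_force_n=44.145, moment_nm=4.4145)
print(round(mass, 2), round(cog, 2))  # 4.5 0.1
```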

  18. An Example-Based Brain MRI Simulation Framework.

    PubMed

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L

    2015-02-21

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms, such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from a hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.

  19. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A regression analysis was performed on the tabular aerodynamic data provided, yielding a representative aerodynamic model for coefficient estimation. This also reduced the storage requirements for the "normal" model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.

  20. Fresh and Dry Mass Estimates of Hermetia illucens (Linnaeus, 1758) (Diptera: Stratiomyidae) Larvae Associated with Swine Decomposition in Urban Area of Central Amazonia.

    PubMed

    Barros, L M; Martins, R T; Ferreira-Keppler, R L; Gutjahr, A L N

    2017-08-04

    Information on biomass is essential for calculating growth rates and may be employed in studies of the medicolegal and economic importance of Hermetia illucens (Linnaeus, 1758). Although biomass is essential to understanding many ecological processes, it is not easily measured. Biomass may be determined directly by weighing or indirectly through regression models of fresh/dry mass versus body dimensions. In this study, we evaluated the association between morphometry and fresh/dry mass of immature H. illucens using linear, exponential, and power regression models. We measured width and length of the cephalic capsule, overall body length, and width of the largest abdominal segment of 280 larvae. Overall body length and width of the largest abdominal segment were the best predictors of biomass. Exponential models best fitted body dimensions and biomass (both fresh and dry), followed by power and linear models. In all models, fresh and dry biomass were strongly correlated (>75%). Values estimated by the models did not differ from observed ones, and prediction power varied from 27 to 79%. Accordingly, the correspondence between biomass and body dimensions should facilitate and motivate the development of applied studies involving H. illucens in the Amazon region.
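    One of the model families compared above, a power regression of mass against a body dimension, can be fitted by ordinary least squares on log-transformed data. The larval measurements below are synthetic, generated from known coefficients so the fit can be checked; this is a sketch, not the study's fitted model.

```python
import math

# Fit a power model m = a * L**b (mass vs. body length) by ordinary
# least squares on log-log transformed data: log m = log a + b * log L.

def fit_power(lengths, masses):
    xs = [math.log(x) for x in lengths]
    ys = [math.log(y) for y in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic, noise-free data from m = 0.02 * L**2.5, so the fit should
# recover those coefficients.
lengths = [5.0, 8.0, 12.0, 15.0, 18.0]
masses = [0.02 * L ** 2.5 for L in lengths]
a, b = fit_power(lengths, masses)
print(round(a, 3), round(b, 3))  # 0.02 2.5
```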

  1. Body shape changes in the elderly and the influence of density assumptions on segment inertia parameters

    NASA Astrophysics Data System (ADS)

    Jensen, Robert K.; Fletcher, P.; Abraham, C.

    1991-04-01

    The segment mass mass proportions and moments of inertia of a sample of twelve females and seven males with mean ages of 67. 4 and 69. 5 years were estimated using textbook proportions based on cadaver studies. These were then compared with the parameters calculated using a mathematical model the zone method. The methodology of the model was fully evaluated for accuracy and precision and judged to be adequate. The results of the comparisons show that for some segments female parameters are quite different from male parameters and inadequately predicted by the cadaver proportions. The largest discrepancies were for the thigh and the trunk. The cadaver predictions were generally less than satisfactory although the common variance for some segments was moderately high. The use ofnon-linear regression and segment anthropometry was illustrated for the thigh moments of inertia and appears to be appropriate. However the predictions from cadaver data need to be examined fully. These results are dependent on the changes in mass and density distribution which occur with aging and the changes which occur with cadaver samples prior to and following death.

  2. A new model for the determination of limb segment mass in children.

    PubMed

    Kuemmerle-Deschner, J B; Hansmann, S; Rapp, H; Dannecker, G E

    2007-04-01

    The knowledge of limb segment masses is critical for the calculation of joint torques. Several methods for segment mass estimation have been described in the literature. They are either inaccurate or not applicable to the limb segments of children. Therefore, we developed a new cylinder brick model (CBM) to estimate segment mass in children. The aim of this study was to compare CBM and a model based on a polynomial regression equation (PRE) to volume measurement obtained by the water displacement method (WDM). We examined forearms, hands, lower legs, and feet of 121 children using CBM, PRE, and WDM. The differences between CBM and WDM or PRE and WDM were calculated and compared using a Bland-Altman plot of differences. Absolute limb segment mass measured by WDM ranged from 0.16+/-0.04 kg for hands in girls 5-6 years old, up to 2.72+/-1.03 kg for legs in girls 11-12 years old. The differences of normalised segment masses ranged from 0.0002+/-0.0021 to 0.0011+/-0.0036 for CBM-WDM and from 0.0023+/-0.0041 to 0.0127+/-0.036 for PRE-WDM (values are mean+/-2 S.D.). The CBM showed better agreement with WDM than PRE for all limb segments in girls and boys. CBM is accurate and superior to PRE for the estimation of individual limb segment mass of children. Therefore, CBM is a practical and useful tool for the analysis of kinetic parameters and the calculation of resulting forces to assess joint functionality in children.
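    A geometric segment-mass estimate in the spirit of the cylinder brick model above can be sketched by approximating the limb as stacked cylindrical slices and multiplying volume by an assumed uniform tissue density. The slice measurements and the density value (1.05 g/cm³, a common soft-tissue figure) are illustrative assumptions, not the CBM's actual parameters.

```python
import math

# Cylinder-type segment mass sketch: each slice's radius is recovered
# from its measured circumference, the slice volumes are summed, and
# mass is volume times an assumed uniform tissue density.

TISSUE_DENSITY = 1.05  # g/cm^3, assumed value

def segment_mass(circumferences_cm, slice_height_cm, density=TISSUE_DENSITY):
    volume = 0.0
    for c in circumferences_cm:
        r = c / (2 * math.pi)                       # radius from circumference
        volume += math.pi * r * r * slice_height_cm  # cylinder slice volume
    return density * volume / 1000.0                 # grams -> kg

# Hypothetical forearm measured as four 5-cm slices
mass = segment_mass([22.0, 21.0, 19.0, 17.0], slice_height_cm=5.0)
print(round(mass, 3))
```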

  3. Estimation of carbon storage based on individual tree detection in Pinus densiflora stands using a fusion of aerial photography and LiDAR data.

    PubMed

    Kim, So-Ra; Kwak, Doo-Ahn; Lee, Woo-Kyun; Son, Yowhan; Bae, Sang-Won; Kim, Choonsig; Yoo, Seongjin

    2010-07-01

    The objective of this study was to estimate the carbon storage capacity of Pinus densiflora stands using remotely sensed data, combining digital aerial photography with light detection and ranging (LiDAR) data. A digital canopy model (DCM), generated from the LiDAR data, was combined with aerial photography for segmenting the crowns of individual trees. To eliminate over- and under-segmentation errors, the combined image was smoothed using a Gaussian filtering method. The processed image was then segmented into individual trees using a marker-controlled watershed segmentation method. After measuring the crown area from the segmented individual trees, the individual tree diameter at breast height (DBH) was estimated using a regression function developed from the relationship observed between the field-measured DBH and crown area. The aboveground biomass of individual trees could then be calculated from the image-derived DBH using a regression function developed by the Korea Forest Research Institute. The carbon storage of individual trees was estimated by simple multiplication using the carbon conversion index (0.5), as suggested in guidelines from the Intergovernmental Panel on Climate Change. The mean carbon storage per individual tree was estimated and then compared with the field-measured value. This study suggested that the biomass and carbon storage of a large forest area can be effectively estimated using aerial photographs and LiDAR data.
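    The estimation chain described above (crown area → DBH → biomass → carbon) can be sketched as a simple composition of regression functions. The coefficients below are hypothetical placeholders, not the study's fitted values or the Korea Forest Research Institute equation; only the 0.5 carbon conversion index comes from the text.

```python
# Per-tree carbon estimation chain: segmented crown area feeds a DBH
# regression, DBH feeds an allometric biomass equation, and carbon is
# biomass times the IPCC carbon conversion index of 0.5.

CARBON_CONVERSION_INDEX = 0.5  # from IPCC guidelines, as cited above

def dbh_from_crown_area(crown_area_m2, a=2.0, b=0.6):
    # hypothetical power-law DBH-crown-area relationship (cm)
    return a * crown_area_m2 ** b

def biomass_from_dbh(dbh_cm, c=0.1, d=2.4):
    # hypothetical allometric biomass equation (kg)
    return c * dbh_cm ** d

def carbon_per_tree(crown_area_m2):
    dbh = dbh_from_crown_area(crown_area_m2)
    return CARBON_CONVERSION_INDEX * biomass_from_dbh(dbh)

# Hypothetical tree with a 25 m^2 segmented crown
print(round(carbon_per_tree(25.0), 2))
```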

  4. Dental measurements and Bolton index reliability and accuracy obtained from 2D digital, 3D segmented CBCT, and 3D intraoral laser scanner

    PubMed Central

    San José, Verónica; Bellot-Arcís, Carlos; Tarazona, Beatriz; Zamora, Natalia; Lagravère, Manuel O

    2017-01-01

    Background To compare the reliability and accuracy of direct and indirect dental measurements derived from two types of 3D virtual models, generated by intraoral laser scanning (ILS) and segmented cone beam computed tomography (CBCT), comparing these with a 2D digital model. Material and Methods One hundred patients were selected. All patients' records included initial plaster models, an intraoral scan and a CBCT. Patients' dental arches were scanned with the iTero® intraoral scanner while the CBCTs were segmented to create three-dimensional models. To obtain 2D digital models, plaster models were scanned using a conventional 2D scanner. Once digital models had been obtained using these three methods, direct dental measurements were measured and indirect measurements were calculated. Differences between methods were assessed by means of paired t-tests and regression models. Intra- and inter-observer error were analyzed using Dahlberg's d and coefficients of variation. Results Intraobserver and interobserver error for the ILS model was less than 0.44 mm, while for segmented CBCT models the error was less than 0.97 mm. ILS models provided statistically and clinically acceptable accuracy for all dental measurements, while CBCT models showed a tendency to underestimate measurements in the lower arch, although within the limits of clinical acceptability. Conclusions ILS and CBCT segmented models are both reliable and accurate for dental measurements. Integration of ILS with CBCT scans would provide dental and skeletal information together. Key words: CBCT, intraoral laser scanner, 2D digital models, 3D models, dental measurements, reliability. PMID:29410764

  5. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    PubMed Central

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767

  7. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    PubMed

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analysis, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, and no systematic error. Applying each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of individual body segments in children, including overweight individuals, although its application to estimating arm FFM in overweight individuals requires a certain modification.
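    The prediction step described above, simple linear regression of measured FFM on the BI index (length²/Z), can be sketched as follows. The measurement data are synthetic, not the study's.

```python
# Segmental-BI prediction sketch: compute the BI index length^2 / Z,
# fit FFM = slope * index + intercept by simple least squares, then
# predict FFM for a new subject.

def bi_index(length_cm, impedance_ohm):
    return length_cm ** 2 / impedance_ohm

def fit_simple_regression(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic whole-body data: (length cm, impedance ohm, measured FFM kg)
data = [(120, 600, 16.0), (130, 560, 19.5), (140, 520, 23.5), (150, 480, 28.0)]
xs = [bi_index(L, Z) for L, Z, _ in data]
ys = [ffm for _, _, ffm in data]
slope, intercept = fit_simple_regression(xs, ys)

# Predict FFM for a hypothetical new child (135 cm, 540 ohm)
pred = slope * bi_index(135, 540) + intercept
print(round(pred, 1))
```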

  8. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
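    The fixed-knot regression spline idea above can be illustrated, in much simplified form, with a piecewise-linear spline written in the truncated-power basis, where continuity at the knot is built into the basis itself. The fit here is plain OLS on synthetic data, not the paper's linear mixed model, and the knot location is assumed known.

```python
# Fixed-knot piecewise-linear regression spline via the truncated-power
# basis [1, x, (x - k)+]: the (x - k)+ term adds a slope change at the
# knot while keeping the fitted curve continuous. Fitting is ordinary
# least squares through the normal equations, solved by Gaussian
# elimination with partial pivoting.

def tp_basis(x, knot):
    return [1.0, x, max(0.0, x - knot)]

def fit_ols(xs, ys, knot):
    X = [tp_basis(x, knot) for x in xs]
    p = 3
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for j in range(c, p):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][j] * beta[j] for j in range(r + 1, p))) / A[r][r]
    return beta

# Synthetic data with a slope change at x = 5: slope 1 before the knot,
# slope 3 after, so the fit should recover coefficients (0, 1, 2).
xs = list(range(11))
ys = [x if x <= 5 else 5 + 3 * (x - 5) for x in xs]
b0, b1, b2 = fit_ols(xs, ys, knot=5.0)
print(round(b0, 3), round(b1, 3), round(b2, 3))
```

Higher-order segments follow the same pattern by adding powers of x and of (x - k)+ to the basis, which is how varying polynomial order per segment is expressed.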

  9. Advances in segmentation modeling for health communication and social marketing campaigns.

    PubMed

    Albrecht, T L; Bryant, C

    1996-01-01

    Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.

  10. Safety analysis of urban arterials at the meso level.

    PubMed

    Li, Jia; Wang, Xuesong

    2017-11-01

    Urban arterials form the main structure of street networks. They typically have multiple lanes, high traffic volume, and high crash frequency. Classical crash prediction models investigate the relationship between arterial characteristics and traffic safety by treating road segments and intersections as isolated units. This micro-level analysis does not work when examining urban arterial crashes because signal spacing is typically short for urban arterials, and there are interactions between intersections and road segments that classical models do not accommodate. Signal spacing also has safety effects on both intersections and road segments that classical models cannot fully account for, because they allocate crashes separately to intersections and road segments. In addition, classical models do not consider the impact on arterial safety of the immediately surrounding street network pattern. This study proposes a new modeling methodology that offers an integrated treatment of intersections and road segments by combining signalized intersections and their adjacent road segments into a single unit based on road geometric design characteristics and operational conditions. These are called meso-level units because they offer an analytical approach between micro and macro. The safety effects of signal spacing and street network pattern were estimated for this study based on 118 meso-level units obtained from 21 urban arterials in Shanghai, and were examined using CAR (conditional autoregressive) models that corrected for spatial correlation among the units within individual arterials. Results showed shorter arterial signal spacing was associated with higher total and PDO (property damage only) crashes, while arterials with a greater number of parallel roads were associated with lower total, PDO, and injury crashes. The findings from this study can be used in the traffic safety planning, design, and management of urban arterials. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Race and Unemployment: Labor Market Experiences of Black and White Men, 1968-1988.

    ERIC Educational Resources Information Center

    Wilson, Franklin D.; And Others

    1995-01-01

    Estimation of multinomial logistic regression models on a sample of unemployed workers suggested that persistently higher black unemployment is due to differential access to employment opportunities by region, occupational placement, labor market segmentation, and discrimination. The racial gap in unemployment is greatest for college-educated…

  12. Exploring unobserved heterogeneity in bicyclists' red-light running behaviors at different crossing facilities.

    PubMed

    Guo, Yanyong; Li, Zhibin; Wu, Yao; Xu, Chengcheng

    2018-06-01

    Bicyclists running the red light at crossing facilities increase the risk of collisions with motor vehicles. Exploring the contributing factors could improve prediction of the probability of red-light running and support countermeasures to reduce such behaviors. However, individuals can have unobserved heterogeneities in running a red light, which makes accurate prediction more challenging. Traditional models assume that factor parameters are fixed and cannot capture the varying impacts on red-light running behaviors. In this study, we employed a full Bayesian random-parameters logistic regression approach to account for the unobserved heterogeneous effects. Two types of crossing facilities were considered: signalized intersection crosswalks and road segment crosswalks. Electric and conventional bikes were distinguished in the modeling. Data were collected from 16 crosswalks in the urban area of Nanjing, China. Factors such as individual characteristics, road geometric design, environmental features, and traffic variables were examined. Model comparison indicates that the full Bayesian random-parameters logistic regression approach is statistically superior to the standard logistic regression model. More red-light runners are predicted at signalized intersection crosswalks than at road segment crosswalks. Factors affecting red-light running behaviors are gender, age, bike type, road width, presence of a raised median, separation width, signal type, green ratio, bike and vehicle volume, and average vehicle speed. Factors associated with the unobserved heterogeneity are gender, bike type, signal type, separation width, and bike volume. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Estimation of retinal vessel caliber using model fitting and random forests

    NASA Astrophysics Data System (ADS)

    Araújo, Teresa; Mendonça, Ana Maria; Campilho, Aurélio

    2017-03-01

    Retinal vessel caliber changes are associated with several major diseases, such as diabetes and hypertension. These caliber changes can be evaluated using eye fundus images. However, clinical assessment is tiresome and prone to errors, motivating the development of automatic methods. An automatic method based on vessel cross-section intensity profile model fitting for the estimation of vessel caliber in retinal images is herein proposed. First, vessels are segmented from the image, vessel centerlines are detected, and individual segments are extracted and smoothed. Intensity profiles are extracted perpendicularly to the vessel, and the profile lengths are determined. Then, model fitting is applied to the smoothed profiles. A novel parametric model (DoG-L7) is used, consisting of a Difference-of-Gaussians multiplied by a line, which is able to describe profile asymmetry. Finally, the parameters of the best-fit model are used for determining the vessel width through regression using ensembles of bagged regression trees with random sampling of the predictors (random forests). The method is evaluated on the REVIEW public dataset. A precision close to that of the human observers is achieved, outperforming other state-of-the-art methods. The method is robust and reliable for width estimation in images with pathologies and artifacts, with performance independent of the range of diameters.

  14. Brain tumor segmentation based on local independent projection-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin

    2014-10-01

    Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem. Additionally, the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC. Locality is also considered in determining whether local anchor embedding is more applicable in solving linear projection weights compared with other coding methods. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of the testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to other state-of-the-art methods.
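    The softmax-regression step mentioned above can be illustrated with a minimal per-voxel classification sketch: softmax turns per-class scores into probabilities and the voxel is assigned the arg-max class. The class names and score values are illustrative only, not taken from the paper.

```python
import math

# Softmax over per-class scores for a single voxel, then arg-max
# class assignment.

def softmax(scores):
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["background", "edema", "tumor_core"]  # hypothetical labels
scores = [0.2, 1.1, 2.3]                         # hypothetical voxel scores
probs = softmax(scores)
label = classes[probs.index(max(probs))]
print(label, round(sum(probs), 6))  # tumor_core 1.0
```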

  15. Applying quantile regression for modeling equivalent property damage only crashes to identify accident blackspots.

    PubMed

    Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin

    2014-05-01

    Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean like most methods in practice, which more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression.
Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitations of the traditional NB model in dealing with the preponderance of zeros and right-skewed data. Copyright © 2014 Elsevier Ltd. All rights reserved.
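    The quantile regression at the heart of this HSID approach rests on the check (pinball) loss. A minimal, library-free sketch with invented crash counts: minimizing the check loss over a constant recovers the sample quantile, which is why the technique targets the tail of the crash distribution rather than its mean.

```python
# Minimal illustration of the pinball (check) loss underlying quantile
# regression: minimizing it over a constant recovers the sample quantile.
# The crash counts below are made up for illustration.

def pinball_loss(y, q, tau):
    """Average check loss of predicting the constant q at quantile level tau."""
    return sum((tau * (v - q)) if v >= q else ((1 - tau) * (q - v))
               for v in y) / len(y)

crashes = [0, 0, 0, 1, 1, 2, 2, 3, 5, 9]  # skewed, zero-heavy toy data

tau = 0.9
# Search candidate constants; the minimizer is the tau-quantile of the data.
best = min(crashes, key=lambda q: pinball_loss(crashes, q, tau))
print(best)  # 5: the 0.9-quantile flags the high-crash tail, not the mean
```

    The same loss, applied to a linear predictor instead of a constant, gives the quantile regression used in the paper.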

  16. Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model

    NASA Astrophysics Data System (ADS)

    Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.

    2018-04-01

    It is difficult to extract grassland accurately by traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over- and under-segmentation seen in single-mode segmentation. The CART model was established based on spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as approximate truth values. A comparison with the nearest-neighbor supervised classification method was also performed. The experimental results showed that the total classification precision and the Kappa coefficient of the proposed method were 95% and 0.90, respectively, whereas those of the nearest-neighbor supervised classification method were 80% and 0.56. This suggests that the classification accuracy of the proposed method is higher than that of the nearest-neighbor supervised classification method. The experiment demonstrated that the proposed method is an effective means of extracting grassland information, enhancing grassland classification boundaries and avoiding restrictions imposed by the scale of grassland distribution. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land, and water bodies.
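    The CART model mentioned above chooses splits by minimizing weighted Gini impurity. A minimal sketch on a hypothetical 1-D spectral feature with made-up labels (not the paper's training data) shows the split-selection step:

```python
# CART split selection: pick the threshold minimizing weighted Gini impurity.
# The "NDVI-like" feature values and labels below are invented for illustration.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_impurity(xs, ys, thr):
    """Weighted Gini impurity of splitting the samples at feature value thr."""
    left = [y for x, y in zip(xs, ys) if x <= thr]
    right = [y for x, y in zip(xs, ys) if x > thr]
    n = len(ys)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

ndvi = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]                    # hypothetical feature
label = ["other", "other", "other", "grass", "grass", "grass"]

best_thr = min(ndvi, key=lambda t: split_impurity(ndvi, label, t))
print(best_thr)  # 0.3: a perfect split separating the two classes
```

    A full CART recursively repeats this step over all features, which is what the paper's classifier does with the spectral and texture features.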

  17. Use of segmented constrained layer damping treatment for improved helicopter aeromechanical stability

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Zhou, Xu

    2000-08-01

    The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.

  18. The development and evaluation of accident predictive models

    NASA Astrophysics Data System (ADS)

    Maleck, T. L.

    1980-12-01

    A mathematical model that will predict the incremental change in the dependent variables (accident types) resulting from changes in the independent variables is developed. The end product is a tool for estimating the expected number and type of accidents for a given highway segment. The data segments (accidents) are separated into exclusive groups via a branching process, and variance is further reduced using stepwise multiple regression. The standard error of the estimate is calculated for each model. The dependent variables are the frequency, density, and rate of 18 types of accidents. The independent variables include district, county, highway geometry, land use, type of zone, speed limit, signal code, type of intersection, number of intersection legs, number of turn lanes, left-turn control, all-red interval, average daily traffic, and outlier code. Models for nonintersectional accidents neither fit nor validated as well as models for intersectional accidents.

  19. Dreams Fulfilled, Dreams Shattered: Determinants of Segmented Assimilation in the Second Generation

    ERIC Educational Resources Information Center

    Haller, William; Portes, Alejandro; Lynch, Scott M.

    2011-01-01

    We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early…

  20. Estimation of monthly water yields and flows for 1951-2012 for the United States portion of the Great Lakes Basin with AFINCH

    USGS Publications Warehouse

    Luukkonen, Carol L.; Holtschlag, David J.; Reeves, Howard W.; Hoard, Christopher J.; Fuller, Lori M.

    2015-01-01

    Monthly water yields from 105,829 catchments and corresponding flows in 107,691 stream segments were estimated for water years 1951–2012 in the Great Lakes Basin in the United States. Both sets of estimates were computed by using the Analysis of Flows In Networks of CHannels (AFINCH) application within the NHDPlus geospatial data framework. AFINCH provides an environment to develop constrained regression models to integrate monthly streamflow and water-use data with monthly climatic data and fixed basin characteristics data available within NHDPlus or supplied by the user. For this study, the U.S. Great Lakes Basin was partitioned into seven study areas by grouping selected hydrologic subregions and adjoining cataloguing units. This report documents the regression models and data used to estimate monthly water yields and flows in each study area. Estimates of monthly water yields and flows are presented in a Web-based mapper application. Monthly flow time series for individual stream segments can be retrieved from the Web application and used to approximate monthly flow-duration characteristics and to identify possible trends.

  1. In vivo measurement of spinal column viscoelasticity--an animal model.

    PubMed

    Hult, E; Ekström, L; Kaigle, A; Holm, S; Hansson, T

    1995-01-01

    The goal of this study was to measure the in vivo viscoelastic response of spinal motion segments loaded in compression using a porcine model. Nine pigs were used in the study. The animals were anaesthetized and, using surgical techniques, four intrapedicular screws were inserted into the vertebrae of the L2-L3 motion segment. A miniaturized servohydraulic exciter capable of compressing the motion segment was mounted onto the screws. In six animals, a loading scheme consisting of 50 N and 100 N of compression, each applied for 10 min, was used. Each loading period was followed by 10 min of restitution with zero load. The loading scheme was repeated four times. Three animals were examined for stiffening effects by eight consecutive cycles of 50 N loading for 5 min, each followed by 5 min of restitution with zero load. This loading scheme was repeated using a 100 N load level. The creep-recovery behavior of the motion segment was recorded continuously. Using non-linear regression techniques, the experimental data were used for evaluating the parameters of a three-parameter standard linear solid model. Correlation coefficients of the order of 0.85 or higher were obtained for the three independent parameters of the model. A survey of the data shows that the viscous deformation rate was a function of the load level. Also, repeated loading at 100 N seemed to induce long-lasting changes in the viscoelastic properties of the porcine lumbar motion segment.
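    The three-parameter standard linear solid fitted here can be written, in its spring-plus-Kelvin-element form, as an instantaneous elastic step followed by delayed creep. A small sketch with invented stiffness and damping values (the paper estimates the actual parameters from the creep-recovery data by nonlinear regression):

```python
import math

# Creep response of a three-parameter standard linear solid (spring k1 in
# series with a Kelvin element k2 || c) under a constant load F0.
# The parameter values below are hypothetical, chosen only for illustration.

def sls_creep(t, F0, k1, k2, c):
    """Displacement at time t: instantaneous elastic step plus delayed creep."""
    tau = c / k2                       # retardation time of the Kelvin branch
    return F0 / k1 + (F0 / k2) * (1.0 - math.exp(-t / tau))

F0, k1, k2, c = 100.0, 50.0, 25.0, 250.0   # toy load (N) and parameters

x0 = sls_creep(0.0, F0, k1, k2, c)    # instantaneous response: F0 / k1
x_inf = sls_creep(1e6, F0, k1, k2, c) # long-time response: F0/k1 + F0/k2
print(round(x0, 3), round(x_inf, 3))  # 2.0 6.0
```

    Fitting k1, k2, and c to a measured displacement trace by least squares is the nonlinear regression step described in the abstract.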

  2. Anatomy guided automated SPECT renal seed point estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting the ROI from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, though the challenge is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point locations of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much between patients. A model is generated based on manual segmentation of the bladder and both kidneys in 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. Percentage errors between ground-truth and estimated centroid coordinates are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.

  3. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; the real-time model was also used in 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Computing mammographic density from a multiple regression model constructed with image-acquisition parameters from a full-field digital mammographic unit

    PubMed Central

    Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.

    2009-01-01

    Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured using screen-film mammographic images by a labor-intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g., breast compression thickness, radiological thickness, radiation dose, and compression force) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R2=0.93) and %-density (R2=0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80, using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively and automatically, bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343

  5. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits.

    PubMed

    Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G

    2017-12-05

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions, leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs) from 50K SNP arrays were grouped into non-overlapping genome segments. A segment was defined as one SNP, a group of 50, 100, or 200 adjacent SNPs, one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using a deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models, and this gain depended on segment size and the genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. 
The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.

  6. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    PubMed

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has been controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments, located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results of total suspended solid (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R(2) and p-values of the regression of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
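    The R(2) values examined in the study measure how much of the variance in TSS loads a linear fit on ADD explains. A minimal, self-contained sketch with synthetic data (not the monitored Gwangju events):

```python
# Least-squares fit of a response (TSS load) on a single predictor (ADD),
# plus the R^2 used to judge predictive accuracy. Data are synthetic.

def fit_and_r2(xs, ys):
    """Return (intercept, slope, R^2) of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1.0 - ss_res / ss_tot

add_days = [1, 2, 4, 7, 10, 14]                  # antecedent dry days (toy)
tss_load = [5.0, 7.9, 12.1, 18.0, 24.2, 32.1]    # kg per event (toy)

b0, b1, r2 = fit_and_r2(add_days, tss_load)
print(round(b1, 2), round(r2, 3))
```

    A low R(2) on real data, as the study reports for EMCs, means most of the variance is left unexplained by the predictor.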

  7. Fault Diagnostics and Prognostics for Large Segmented SRMs

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry; Osipov, Viatcheslav V.; Smelyanskiy, Vadim N.; Timucin, Dogan A.; Uckun, Serdar; Hayashida, Ben; Watson, Michael; McMillin, Joshua; Shook, David; Johnson, Mont

    2009-01-01

    We report progress in the development of the fault diagnostic and prognostic (FD&P) system for large segmented solid rocket motors (SRMs). The model includes the following main components: (i) a 1D dynamical model of the internal ballistics of SRMs; (ii) a surface regression model for the propellant that takes erosive burning into account; (iii) a model of the propellant geometry; (iv) a model of the nozzle ablation; (v) a model of a hole burning through the SRM steel case. The model is verified by comparing the spatially resolved time traces of the flow parameters obtained in simulations with results obtained using a high-fidelity 2D FLUENT model (developed by a third party). To develop an FD&P system for a case breach fault in a large segmented rocket, we note [1] that the stationary zero-dimensional approximation for the nozzle stagnation pressure is surprisingly accurate even when the stagnation pressure varies significantly in time during burning tail-off. This was also found to be true for the case breach fault [2]. These results allow us to use the FD&P system developed in our earlier research [3]-[6] by substituting the head stagnation pressure with the nozzle stagnation pressure. The axial corrections to the value of the side thrust due to mass addition are taken into account by solving a system of ODEs in the spatial dimension.

  8. Logistic Stick-Breaking Process

    PubMed Central

    Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.

    2013-01-01

    A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593

  9. Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences.

    PubMed

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A

    2017-07-01

    Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a 0.77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, since comparing diseased with non-diseased subjects while minimizing bias is of key importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
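    The algebraic equivalence invoked in this abstract, Poisson regression with a log(time) offset versus exponential survival regression, can be checked numerically: the two log-likelihoods differ only by a constant, so they share the same maximizer. A toy sketch with invented dwell times (an intercept-only model, not the paper's multilevel one):

```python
import math

# Intercept-only check of the likelihood equivalence: a Poisson count model
# with offset log(t) and an exponential survival model give the same MLE.
# The dwell times below are invented for illustration.

times = [2.0, 5.0, 1.0, 8.0, 4.0]   # time spent in a sleep state (toy)
events = [1, 1, 1, 1, 1]            # each interval ends in one transition

def poisson_ll(log_rate):
    """Poisson log-likelihood with offset log(t): mean = t * exp(log_rate)."""
    return sum(y * (log_rate + math.log(t)) - t * math.exp(log_rate)
               for y, t in zip(events, times))

def exponential_ll(log_rate):
    """Exponential survival log-likelihood for fully observed times."""
    return sum(log_rate - t * math.exp(log_rate) for t in times)

grid = [i / 1000.0 for i in range(-3000, 1000)]
mle_pois = max(grid, key=poisson_ll)
mle_exp = max(grid, key=exponential_ll)
# Both maximizers coincide and recover the rate n / sum(times) = 0.25.
print(mle_pois == mle_exp, round(math.exp(mle_pois), 3))
```

    With covariates, the same identity is what lets standard Poisson GLM software fit the piecewise constant hazards described above.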

  11. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    PubMed

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.

  12. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    PubMed

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development, including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R(2)=0.58 vs. 0.55) or a cross-validation procedure (R(2)=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
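    The KRLS approach mentioned above solves a regularized least-squares problem in a kernel feature space, fitting coefficients alpha from (K + lam*I) alpha = y and predicting with weighted kernel evaluations. A tiny hand-rolled sketch on invented 1-D data (an illustration of the technique, not the paper's exposure model) shows how it captures nonlinearity that a straight line cannot:

```python
import math

# Kernel regularized least squares in one dimension: solve
# (K + lam*I) alpha = y, then predict via sum_i alpha_i * k(x, x_i).
# The data are invented; a quadratic target a line cannot track.

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]              # nonlinear target

n, lam = len(xs), 1e-8
K = [[rbf(a, b) for b in xs] for a in xs]
alpha = solve([[K[i][j] + (lam if i == j else 0.0) for j in range(n)]
               for i in range(n)], ys)

def predict(x):
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

print(round(predict(2.0), 2))  # close to the true value 4.0
```

    For contrast, an ordinary least-squares line through the same five points predicts 6 at x = 2, where the true value is 4, which is the kind of gap behind the R(2) differences reported above.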

  13. Watershed Regressions for Pesticides (WARP) models for predicting stream concentrations of multiple pesticides

    USGS Publications Warehouse

    Stone, Wesley W.; Crawford, Charles G.; Gilliom, Robert J.

    2013-01-01

    Watershed Regressions for Pesticides for multiple pesticides (WARP-MP) are statistical models developed to predict concentration statistics for a wide range of pesticides in unmonitored streams. The WARP-MP models use the national atrazine WARP models in conjunction with an adjustment factor for each additional pesticide. The WARP-MP models perform best for pesticides with application timing and methods similar to those used with atrazine. For other pesticides, WARP-MP models tend to overpredict concentration statistics for the model development sites. For WARP and WARP-MP, the less-than-ideal sampling frequency for the model development sites leads to underestimation of the shorter-duration concentration; hence, the WARP models tend to underpredict 4- and 21-d maximum moving-average concentrations, with median errors ranging from 9 to 38%. As a result of this sampling bias, pesticides that performed well with the model development sites are expected to have predictions that are biased low for these shorter-duration concentration statistics. The overprediction by WARP-MP apparent for some of the pesticides is variably offset by underestimation of the model development concentration statistics. Of the 112 pesticides used in the WARP-MP application to stream segments nationwide, 25 were predicted to have concentration statistics with a 50% or greater probability of exceeding one or more aquatic life benchmarks in one or more stream segments. Geographically, many of the modeled streams in the Corn Belt Region were predicted to have one or more pesticides that exceeded an aquatic life benchmark during 2009, indicating the potential vulnerability of streams in this region.

  14. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    PubMed

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal, and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis, and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented, and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. 
Stepwise multiple linear-regression formulas where derived and used to predict TAG level in the liver. Receiver-operating-characteristics (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
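
    The fixed-vs-adaptive thresholding idea above can be sketched in a few lines. This is an illustration only: the threshold rule (mean minus a multiple of the SD) and all image values are assumptions for the sketch, not the study's actual CAUS processing.

```python
import numpy as np

# Toy sketch: exclude dark (vessel) pixels from a liver ROI by adaptive
# thresholding before computing tissue statistics. The rule mean - k*SD
# is an assumed stand-in for the paper's adaptive method.
rng = np.random.default_rng(6)
roi = rng.normal(100, 10, (128, 128))               # synthetic "parenchyma" speckle
roi[40:60, 30:90] = rng.normal(40, 5, (20, 60))     # dark vessel cross-section

thr = roi.mean() - 1.5 * roi.std()                  # adaptive: follows image statistics
parenchyma = roi[roi > thr]                         # pixels kept for UTC parameters
print(thr, parenchyma.mean())
```

    An adaptive threshold tracks the image's own statistics, so the same rule excludes vessels across livers of varying overall echogenicity, which a fixed threshold cannot do.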

  15. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

    Coronary artery disease has become one of the most dangerous diseases to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods have difficulty handling the complex vascular texture that results from the projective nature of conventional coronary angiography. Due to the large amount of data and complex vascular shapes, manual annotation has become increasingly unrealistic; a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domains remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method can not only segment overlapping vascular structures robustly, but also achieve good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113%, 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Consistent model identification of varying coefficient quantile regression with BIC tuning parameter selection

    PubMed Central

    Zheng, Qi; Peng, Limin

    2016-01-01

    Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212
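
    The varying-coefficient machinery above rests on ordinary linear quantile regression. As a rough, self-contained sketch (not the authors' estimator, and omitting the adaptively weighted LASSO penalty), a single-quantile check-loss fit can be approximated by iteratively reweighted least squares; the heteroscedastic data below are synthetic:

```python
import numpy as np

# Minimal quantile-regression sketch via IRLS on the check (pinball) loss.
# In practice a dedicated solver (e.g. statsmodels QuantReg) would be used;
# this only illustrates how slopes differ across quantile levels.
rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + (0.2 + 0.3 * x) * rng.normal(size=n)  # spread grows with x
X = np.column_stack([np.ones(n), x])

def quantile_fit(tau, iters=50, eps=1e-6):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(r > 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
        XW = X * w[:, None]
        beta = np.linalg.solve(X.T @ XW, XW.T @ y)        # weighted normal equations
    return beta

b10, b90 = quantile_fit(0.1), quantile_fit(0.9)
print(b10[1], b90[1])  # under heteroscedasticity the 0.9 slope exceeds the 0.1 slope
```

    Because the noise scale grows with x, the covariate has a much stronger effect on the upper tail of the response distribution than on the lower tail, which is exactly the kind of partial effect a spectrum of quantiles can reveal and a single mean regression would miss.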

  17. Segmentation and Characterization of Chewing Bouts by Monitoring Temporalis Muscle Using Smart Glasses With Piezoelectric Sensor.

    PubMed

    Farooq, Muhammad; Sazonov, Edward

    2017-11-01

    Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for the detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two-part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in the research literature), with subsequent classification of the segments by linear support vector machine models. On the participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing was recognized with an average F-score of 96.28% and a resultant area under the curve of 0.97, which are higher than any previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing, with an average mean absolute error of 3.83% on the participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and in unrestricted free-living.
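
    The energy-based segmentation step (as opposed to fixed-duration epochs) can be illustrated on a synthetic signal. The windowing, threshold rule, sampling rate, and chewing frequency below are all assumed for the sketch, not taken from the paper:

```python
import numpy as np

# Assumed illustration of energy-based segmentation: flag windows whose
# short-time energy stands well above the background, yielding candidate
# chewing segments for a downstream classifier.
rng = np.random.default_rng(7)
fs = 100                                    # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)
sig = 0.05 * rng.normal(size=t.size)        # sensor noise
chew = (t > 20) & (t < 35)                  # one chewing bout, 20-35 s
sig[chew] += 0.5 * np.sin(2 * np.pi * 1.5 * t[chew])  # ~1.5 Hz chewing motion

win = fs                                    # 1-second energy windows
energy = np.array([np.mean(sig[i:i + win] ** 2)
                   for i in range(0, sig.size - win + 1, win)])
active = np.flatnonzero(energy > 5 * np.median(energy))
print(active)                               # seconds flagged as candidate chewing
```

    Thresholding on a multiple of the median energy adapts to the background noise level, so segment boundaries follow the actual bout rather than an arbitrary epoch grid.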

  18. Design and Development of a Model to Simulate 0-G Treadmill Running Using the European Space Agency's Subject Loading System

    NASA Technical Reports Server (NTRS)

    Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.

    2010-01-01

    Develop a model that simulates a human running in 0 G using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN The Running Model is based on a simple spring-mass model. The dynamic properties of the spring-mass model express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS/View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION ESA provided parabolic flight data of subjects running while using the SLS, for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDF. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.

  19. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    NASA Astrophysics Data System (ADS)

    Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-01

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  20. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer.

    PubMed

    Bondar, M Luiza; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-07

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  1. Segmented regression analysis of interrupted time series data to assess outcomes of a South American road traffic alcohol policy change.

    PubMed

    Nistal-Nuño, Beatriz

    2017-09-01

    In Chile, a new law introduced in March 2012 decreased the legal blood alcohol concentration (BAC) limit for driving while impaired from 1 to 0.8 g/l and the legal BAC limit for driving under the influence of alcohol from 0.5 to 0.3 g/l. The goal is to assess the impact of this new law on mortality and morbidity outcomes in Chile. A review of national databases in Chile was conducted from January 2003 to December 2014. Segmented regression analysis of interrupted time series was used for analyzing the data. In a series of multivariable linear regression models, the change in intercept and slope in the monthly incidence rate of traffic deaths and injuries associated with alcohol per 100,000 inhabitants was estimated from pre-intervention to post-intervention, while controlling for secular changes. In nested regression models, potential confounding seasonal effects were accounted for. All analyses were performed at a two-sided significance level of 0.05. Immediate level drops in all the monthly rates were observed after the law, relative to the end of the pre-law period, in the majority of models and in all the de-seasonalized models, although statistical significance was reached only in the model for injuries related to alcohol. After the law, the estimated monthly rate dropped abruptly by -0.869 for injuries related to alcohol and by -0.859 adjusting for seasonality (P < 0.001). Regarding the post-law long-term trends, a steeper decreasing trend after the law was evident in the models for deaths related to alcohol, although these differences were not statistically significant. Strong evidence of a reduction in traffic injuries related to alcohol was found following the law in Chile. Although the beneficial effects seen on deaths and overall injuries did not reach statistical significance, potentially clinically important effects cannot be ruled out. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
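
    The segmented interrupted-time-series design described here (a pre-law trend, an immediate level change at the law, and a post-law trend change) can be sketched in a few lines. All numbers below are synthetic for illustration; they are not the Chilean data:

```python
import numpy as np

# Sketch of segmented ITS regression: y = b0 + b1*t + b2*post + b3*(t - t_law)*post.
# b2 captures the immediate level drop at the law, b3 the change in slope.
rng = np.random.default_rng(0)
n, law = 144, 110                 # 12 years of monthly rates; law at month 110
t = np.arange(n)
post = (t >= law).astype(float)
t_post = post * (t - law)         # months elapsed since the law

# Synthetic monthly rate: slope -0.01, level drop -0.86, extra slope -0.005
y = 5.0 - 0.01 * t - 0.86 * post - 0.005 * t_post + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # [intercept, pre-law slope, level change, slope change]
```

    In the study itself these models additionally adjusted for seasonality (the nested de-seasonalized models); seasonal dummies or harmonic terms would simply be appended as further columns of X.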

  2. Threshold responses of Blackside Dace (Chrosomus cumberlandensis) and Kentucky Arrow Darter (Etheostoma spilotum) to stream conductivity

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Floyd, Michael; Compton, Michael; McDonald, Kenneth

    2016-01-01

    Chrosomus cumberlandensis (Blackside Dace [BSD]) and Etheostoma spilotum (Kentucky Arrow Darter [KAD]) are fish species of conservation concern due to their fragmented distributions, their low population sizes, and threats from anthropogenic stressors in the southeastern United States. We evaluated the relationship between fish abundance and stream conductivity, an index of environmental quality and potential physiological stressor. We modeled occurrence and abundance of KAD in the upper Kentucky River basin (208 samples) and BSD in the upper Cumberland River basin (294 samples) for sites sampled between 2003 and 2013. Segmented regression indicated a conductivity change-point for BSD abundance at 343 μS/cm (95% CI: 123–563 μS/cm) and for KAD abundance at 261 μS/cm (95% CI: 151–370 μS/cm). In both cases, abundances were negligible above estimated conductivity change-points. Post-hoc randomizations accounted for variance in estimated change points due to unequal sample sizes across the conductivity gradients. Boosted regression-tree analysis indicated stronger effects of conductivity than other natural and anthropogenic factors known to influence stream fishes. Boosted regression trees further indicated threshold responses of BSD and KAD occurrence to conductivity gradients in support of segmented regression results. We suggest that the observed conductivity relationship may indicate energetic limitations for insectivorous fishes due to changes in benthic macroinvertebrate community composition.
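
    The segmented-regression change-point estimation used above can be illustrated with a broken-stick fit: grid-search the breakpoint that minimizes the residual sum of squares of a hinge (slope-change) model. The data below are synthetic, not the BSD/KAD samples:

```python
import numpy as np

# Broken-stick (segmented) regression sketch: abundance declines with
# conductivity up to a change-point, then levels off. Values are made up.
rng = np.random.default_rng(1)
cond = rng.uniform(50, 600, 300)                  # conductivity, uS/cm
true_cp = 340.0
abund = np.where(cond < true_cp, 20 - 0.05 * (cond - 50), 5.5) \
        + rng.normal(0, 1.0, 300)

def rss_at(cp):
    # hinge basis: the slope is allowed to change at candidate change-point cp
    X = np.column_stack([np.ones_like(cond), cond, np.maximum(cond - cp, 0)])
    beta, *_ = np.linalg.lstsq(X, abund, rcond=None)
    r = abund - X @ beta
    return r @ r

grid = np.linspace(100, 550, 181)
best_cp = grid[np.argmin([rss_at(c) for c in grid])]
print(best_cp)
```

    Confidence intervals for the change-point, like the 95% CIs reported above, are typically obtained by bootstrap or profile likelihood rather than from this simple grid search.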

  3. Random regression analyses using B-splines functions to model growth from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Alencar, M M; Albuquerque, L G

    2010-12-01

    The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
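
    To make the basis concrete, here is an illustrative least-squares fit of a growth curve on a quadratic B-spline basis with four knots (three segments), the configuration the abstract found adequate for the direct genetic effect. This is only the fixed-curve part of such a model, on synthetic weights; it is not the authors' REML analysis:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion with clamped (repeated) boundary knots."""
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    m = len(t) - 1
    B = np.array([(t[i] <= x) & (x < t[i + 1]) for i in range(m)], float).T
    B[x == t[-1], np.max(np.flatnonzero(t[:-1] < t[-1]))] = 1.0  # right endpoint
    for d in range(1, degree + 1):
        Bn = np.zeros((len(x), m - d))
        for i in range(m - d):
            left = (x - t[i]) / (t[i + d] - t[i]) * B[:, i] if t[i + d] > t[i] else 0.0
            right = (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[:, i + 1] \
                if t[i + d + 1] > t[i + 1] else 0.0
            Bn[:, i] = left + right
        B = Bn
    return B

rng = np.random.default_rng(4)
age = rng.uniform(0, 8, 500)                            # years (synthetic)
weight = 550 * (1 - 0.85 * np.exp(-0.4 * age)) + rng.normal(0, 15, 500)

knots = np.array([0.0, 1.5, 4.0, 8.0])                  # 4 knots -> 3 segments
B = bspline_basis(age, knots, degree=2)                 # 5 quadratic basis functions
coef, *_ = np.linalg.lstsq(B, weight, rcond=None)
fitted = B @ coef
print(B.shape)
```

    The local support of each basis function is what lets B-spline models track mature-age weights without the end-of-range oscillation that global polynomial bases such as Legendre polynomials can show; the price, as noted above, is more coefficients per random effect.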

  4. Development of Land Segmentation, Stream-Reach Network, and Watersheds in Support of Hydrological Simulation Program-Fortran (HSPF) Modeling, Chesapeake Bay Watershed, and Adjacent Parts of Maryland, Delaware, and Virginia

    USGS Publications Warehouse

    Martucci, Sarah K.; Krstolic, Jennifer L.; Raffensperger, Jeff P.; Hopkins, Katherine J.

    2006-01-01

    The U.S. Geological Survey, U.S. Environmental Protection Agency Chesapeake Bay Program Office, Interstate Commission on the Potomac River Basin, Maryland Department of the Environment, Virginia Department of Conservation and Recreation, Virginia Department of Environmental Quality, and the University of Maryland Center for Environmental Science are collaborating on the Chesapeake Bay Regional Watershed Model, using Hydrological Simulation Program - FORTRAN to simulate streamflow and concentrations and loads of nutrients and sediment to Chesapeake Bay. The model will be used to provide information for resource managers. In order to establish a framework for model simulation, digital spatial datasets were created defining the discretization of the model region (including the Chesapeake Bay watershed, as well as the adjacent parts of Maryland, Delaware, and Virginia outside the watershed) into land segments, a stream-reach network, and associated watersheds. Land segmentation was based on county boundaries represented by a 1:100,000-scale digital dataset. Fifty of the 254 counties and incorporated cities in the model region were divided on the basis of physiography and topography, producing a total of 309 land segments. The stream-reach network for the Chesapeake Bay watershed part of the model region was based on the U.S. Geological Survey Chesapeake Bay SPARROW (SPAtially Referenced Regressions On Watershed attributes) model stream-reach network. Because that network was created only for the Chesapeake Bay watershed, the rest of the model region uses a 1:500,000-scale stream-reach network. Streams with mean annual streamflow of less than 100 cubic feet per second were excluded based on attributes from the dataset. Additional changes were made to enhance the data and to allow for inclusion of stream reaches with monitoring data that were not part of the original network. 
Thirty-meter-resolution Digital Elevation Model data were used to delineate watersheds for each stream reach. State watershed boundaries replaced the Digital Elevation Model-derived watersheds where coincident. After a number of corrections, the watersheds were coded to indicate major and minor basin, mean annual streamflow, and each watershed's unique identifier as well as that of the downstream watershed. Land segments and watersheds were intersected to create land-watershed segments for the model.

  5. Large data series: Modeling the usual to identify the unusual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, D.J.; Fedorov, V.V.; Lawkins, W.F.

    "Standard" approaches such as regression analysis, Fourier analysis, the Box-Jenkins procedure, et al., which handle a data series as a whole, are not useful for very large data sets for at least two reasons. First, even with the computer hardware available today, including parallel processors and storage devices, there are no effective means for manipulating and analyzing gigabyte, or larger, data files. Second, in general it cannot be assumed that a very large data set is "stable" by the usual measures, like homogeneity, stationarity, and ergodicity, that standard analysis techniques require. Both reasons dictate the necessity of using "local" data-analysis methods whereby the data is segmented and ordered, where order leads to a sense of "neighbor," and then analyzed segment by segment. The idea of local data analysis is central to the study reported here.
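
    The segment-by-segment idea can be sketched on a toy series: fit a short regression to each segment and flag segments whose local statistics deviate from the rest (the "unusual" against the "usual"). The series, segment length, and outlier rule below are all invented for illustration:

```python
import numpy as np

# Local analysis sketch: per-segment linear fits on a long series, then
# flag segments whose slope is a gross outlier among all segment slopes.
rng = np.random.default_rng(2)
y = rng.normal(0, 0.1, 10_000)            # "usual" behaviour: flat noise
y[6200:6300] += np.linspace(0, 5, 100)    # one unusual drifting stretch

seg_len = 100
t = np.arange(seg_len)
slopes = np.array([np.polyfit(t, y[i:i + seg_len], 1)[0]
                   for i in range(0, len(y), seg_len)])

z = (slopes - np.median(slopes)) / slopes.std()
unusual = np.flatnonzero(np.abs(z) > 5)
print(unusual * seg_len)                  # start index of each flagged segment
```

    Because each segment is processed independently, this style of analysis needs only one segment in memory at a time and makes no stationarity assumption about the series as a whole.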

  6. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture

    PubMed Central

    Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang

    2016-01-01

    The remaining useful life (RUL) prediction of lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to self-charging and capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as support vector machines (SVM) or Gaussian process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian process mixture (GPM). It can handle multimodality by fitting different segments of a trajectory with different GPR models separately, such that the tiny differences among these segments can be revealed. The method is demonstrated to be effective by the excellent predictive results of experiments on two commercial rechargeable Type 1850 lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, the GPM can yield a predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176

  7. Water and solute absorption from carbohydrate-electrolyte solutions in the human proximal small intestine: a review and statistical analysis.

    PubMed

    Shi, Xiaocai; Passe, Dennis H

    2010-10-01

    The purpose of this study is to summarize water, carbohydrate (CHO), and electrolyte absorption from carbohydrate-electrolyte (CHO-E) solutions based on all of the triple-lumen-perfusion studies in humans since the early 1960s. The current statistical analysis included 30 reports from which were obtained information on water absorption, CHO absorption, total solute absorption, CHO concentration, CHO type, osmolality, sodium concentration, and sodium absorption in the different gut segments during exercise and at rest. Mean differences were assessed using independent-samples t tests. Exploratory multiple-regression analyses were conducted to create prediction models for intestinal water absorption. The factors influencing water and solute absorption are carefully evaluated and extensively discussed. The authors suggest that in the human proximal small intestine, water absorption is related to both total solute and CHO absorption; osmolality exerts various impacts on water absorption in the different segments; the multiple types of CHO in the ingested CHO-E solutions play a critical role in stimulating CHO, sodium, total solute, and water absorption; CHO concentration is negatively related to water absorption; and exercise may result in greater water absorption than rest. A potential regression model for predicting water absorption is also proposed for future research and practical application. In conclusion, water absorption in the human small intestine is influenced by osmolality, solute absorption, and the anatomical structures of gut segments. Multiple types of CHO in a CHO-E solution facilitate water absorption by stimulating CHO and solute absorption and lowering osmolality in the intestinal lumen.

  8. Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model

    PubMed Central

    Yang, Xin; Jin, Jiaoying; Xu, Mengling; Wu, Huihui; He, Wanji; Yuchi, Ming; Ding, Mingyue

    2013-01-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a segmentation method based on the Active Shape Model (ASM) is developed and evaluated to outline the common carotid artery (CCA) for carotid atherosclerosis computer-aided evaluation and diagnosis. The proposed method is used to segment both the media-adventitia boundary (MAB) and the lumen-intima boundary (LIB) on transverse-view slices from three-dimensional ultrasound (3D US) images. The data set consists of sixty-eight (17 × 2 × 2) 3D US volumes acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Boundaries manually outlined by an expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded Dice similarity coefficients (DSC) of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD) of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD) of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. It took 4.3 ± 0.5 min to segment a single 3D US image, while manual segmentation took 11.7 ± 1.2 min. The method would promote the translation of carotid 3D US to clinical care for the monitoring of atherosclerotic disease progression and regression. PMID:23533535

  9. Segmentation of the common carotid artery with active shape models from 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Jin, Jiaoying; He, Wanji; Yuchi, Ming; Ding, Mingyue

    2012-03-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, we develop and evaluate a new segmentation method for outlining both the lumen and the adventitia (inner and outer walls) of the common carotid artery (CCA) from three-dimensional ultrasound (3D US) images for carotid atherosclerosis diagnosis and evaluation. The data set consists of sixty-eight (17 × 2 × 2) 3D US volumes acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. We investigate the use of Active Shape Models (ASMs) to segment the CCA inner and outer walls after statin therapy. The proposed method was evaluated with respect to expert manually outlined boundaries as a surrogate for ground truth. For the lumen and adventitia segmentations, respectively, the algorithm yielded Dice similarity coefficients (DSC) of 93.6% ± 2.6% and 91.8% ± 3.5%, mean absolute distances (MAD) of 0.28 ± 0.17 mm and 0.34 ± 0.19 mm, and maximum absolute distances (MAXD) of 0.87 ± 0.37 mm and 0.74 ± 0.49 mm. The proposed algorithm took 4.4 ± 0.6 min to segment a single 3D US image, compared to 11.7 ± 1.2 min for manual segmentation. Therefore, the method would promote the translation of carotid 3D US to clinical care for the fast, safe and economical monitoring of atherosclerotic disease progression and regression during therapy.

  10. Site conditions related to erosion on logging roads

    Treesearch

    R. M. Rice; J. D. McCashion

    1985-01-01

    Synopsis - Data collected from 299 road segments in northwestern California were used to develop and test a procedure for estimating and managing road-related erosion. Site conditions and the design of each segment were described by 30 variables. Equations developed using 149 of the road segments were tested on the other 150. The best multiple regression equation...

  11. Probabilistic Air Segmentation and Sparse Regression Estimated Pseudo CT for PET/MR Attenuation Correction

    PubMed Central

    Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David

    2015-01-01

    Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778

  12. A semi-nonparametric Poisson regression model for analyzing motor vehicle crash data.

    PubMed

    Ye, Xin; Wang, Ke; Zou, Yajie; Lord, Dominique

    2018-01-01

    This paper develops a semi-nonparametric Poisson regression model to analyze motor vehicle crash frequency data collected from rural multilane highway segments in California, US. Motor vehicle crash frequency on rural highways is a topic of interest in the area of transportation safety due to higher driving speeds and the resultant severity level. Unlike the traditional Negative Binomial (NB) model, the semi-nonparametric Poisson regression model can accommodate unobserved heterogeneity following a highly flexible semi-nonparametric (SNP) distribution. Simulation experiments are conducted to demonstrate that the SNP distribution can well mimic a large family of distributions, including normal distributions, log-gamma distributions, and bimodal and trimodal distributions. Empirical estimation results show that such flexibility offered by the SNP distribution can greatly improve model precision and the overall goodness-of-fit. The semi-nonparametric distribution can provide a better understanding of crash data structure through its ability to capture potential multimodality in the distribution of unobserved heterogeneity. When estimated coefficients in empirical models are compared, the SNP and NB models are found to have substantially different coefficients for the dummy variable indicating lane width. The SNP model, with better statistical performance, suggests that the NB model overestimates the effect of lane width on crash frequency reduction by 83.1%.
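
    For orientation, here is the plain Poisson regression baseline that both the NB and SNP models extend, fit by iteratively reweighted least squares (IRLS). The SNP heterogeneity term of the paper is not reproduced, and the covariates and coefficients below are invented for the sketch:

```python
import numpy as np

# Baseline crash-frequency model: Poisson GLM with log link, fit by IRLS.
# Covariates (segment length, lane width) and coefficients are hypothetical.
rng = np.random.default_rng(5)
n = 5000
length = rng.uniform(0.1, 5.0, n)             # segment length, miles (assumed)
lane_w = rng.uniform(10, 14, n)               # lane width, feet (assumed)
X = np.column_stack([np.ones(n), np.log(length), lane_w])
beta_true = np.array([1.0, 1.0, -0.15])
y = rng.poisson(np.exp(X @ beta_true))        # crash counts per segment

beta = np.zeros(3)
for _ in range(25):                           # Newton steps via IRLS
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu              # working response
    XW = X * mu[:, None]                      # Poisson GLM weights W = mu
    beta = np.linalg.solve(X.T @ XW, XW.T @ z)
print(beta)
```

    The NB and SNP models differ from this baseline only in how they treat unobserved heterogeneity: the NB adds a gamma-distributed multiplicative error to mu, while the SNP model lets that error follow a flexible, possibly multimodal distribution.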

  13. Alterations of the tunica vasculosa lentis in the rat model of retinopathy of prematurity.

    PubMed

    Favazza, Tara L; Tanimoto, Naoyuki; Munro, Robert J; Beck, Susanne C; Garcia Garrido, Marina; Seide, Christina; Sothilingam, Vithiyanjali; Hansen, Ronald M; Fulton, Anne B; Seeliger, Mathias W; Akula, James D

    2013-08-01

    To study the relationship between retinal and tunica vasculosa lentis (TVL) disease in retinopathy of prematurity (ROP). Although the clinical hallmark of ROP is abnormal retinal blood vessels, the vessels of the anterior segment, including the TVL, are also altered. ROP was induced in Long-Evans pigmented and Sprague Dawley albino rats; room-air-reared (RAR) rats served as controls. Then, fluorescein angiographic images of the TVL and retinal vessels were serially obtained with a scanning laser ophthalmoscope near the height of retinal vascular disease, ~20 days of age, and again at 30 and 64 days of age. Additionally, electroretinograms (ERGs) were obtained prior to the first imaging session. The TVL images were analyzed for percent coverage of the posterior lens. The tortuosity of the retinal arterioles was determined using Retinal Image multiScale Analysis (Gelman et al. in Invest Ophthalmol Vis Sci 46:4734-4738, 2005). In the youngest ROP rats, the TVL was dense, while in RAR rats, it was relatively sparse. By 30 days, the TVL in RAR rats had almost fully regressed, while in ROP rats, it was still pronounced. By the final test age, the TVL had completely regressed in both ROP and RAR rats. In parallel, the tortuous retinal arterioles in ROP rats resolved with increasing age. ERG components indicating postreceptoral dysfunction, the b-wave, and oscillatory potentials were attenuated in ROP rats. These findings underscore the retinal vascular abnormalities and, for the first time, show abnormal anterior segment vasculature in the rat model of ROP. There is delayed regression of the TVL in the rat model of ROP. This demonstrates that ROP is a disease of the whole eye.

  14. Alterations of the Tunica Vasculosa Lentis in the Rat Model of Retinopathy of Prematurity

    PubMed Central

    Favazza, Tara L; Tanimoto, Naoyuki; Munro, Robert J.; Beck, Susanne C.; Garrido, Marina G.; Seide, Christina; Sothilingam, Vithiyanjali; Hansen, Ronald M.; Fulton, Anne B.; Seeliger, Mathias W.; Akula, James D

    2013-01-01

    Purpose To study the relation between retinal and tunica vasculosa lentis (TVL) disease in ROP. Although the clinical hallmark of retinopathy of prematurity (ROP) is abnormal retinal blood vessels, the vessels of the anterior segment, including the TVL, are also altered. Methods ROP was induced in Long Evans pigmented and Sprague-Dawley albino rats; room-air-reared (RAR) rats served as controls. Then, fluorescein angiographic images of the TVL and retinal vessels were serially obtained with a scanning laser ophthalmoscope (SLO) near the height of retinal vascular disease, ∼20 days-of-age, and again at 30 and 64 days-of-age. Additionally, electroretinograms (ERGs) were obtained prior to the first imaging session. The TVL images were analyzed for percent coverage of the posterior lens. The tortuosity of the retinal arterioles was determined using Retinal Image multiScale Analysis (RISA; Gelman et al., 2005). Results In the youngest ROP rats, the TVL was dense, while in RAR rats, it was relatively sparse. By 30 days, the TVL in RAR rats had almost fully regressed, while in ROP rats it was still pronounced. By the final test age, the TVL had completely regressed in both ROP and RAR rats. In parallel, the tortuous retinal arterioles in ROP rats resolved with increasing age. ERG components indicating postreceptoral dysfunction, the b-wave and oscillatory potentials (OPs), were attenuated in ROP rats. Conclusions These findings underscore the retinal vascular abnormalities and, for the first time, show abnormal anterior segment vasculature in the rat model of ROP. There is delayed regression of the TVL in the rat model of ROP. This demonstrates that ROP is a disease of the whole eye. PMID:23748796

  15. Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin

    2017-12-01

    Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of 22 microns for the vessel wall, with Dice coefficient and Jaccard similarity index of 0.985 and 0.970, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of the vessel lumen in an intraoperative time frame.
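
    The polar parameterization described above implies a simple reconstruction step: per-angle radial distances predicted by the network can be mapped back to a Cartesian lumen contour and an area estimate. A minimal sketch in NumPy (the function names and the circular test lumen are illustrative assumptions, not part of the published method):

```python
import numpy as np

def lumen_contour(radii_mm, centroid_xy):
    """Convert per-angle radial distances (polar space, catheter
    centroid as origin) back to a Cartesian lumen contour."""
    theta = np.linspace(0.0, 2.0 * np.pi, len(radii_mm), endpoint=False)
    x = centroid_xy[0] + radii_mm * np.cos(theta)
    y = centroid_xy[1] + radii_mm * np.sin(theta)
    return np.stack([x, y], axis=1)

def polygon_area(pts):
    """Shoelace formula for the area enclosed by the contour."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# A circular lumen of radius 1.5 mm sampled at 360 angles
contour = lumen_contour(np.full(360, 1.5), (0.0, 0.0))
area = polygon_area(contour)   # close to pi * 1.5**2
```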

  16. Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.

    PubMed

    Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman

    2018-05-16

    Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of these models, including between trials and between assessors, is mixed, and there are no between-model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models: duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hindfoot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within- and between-therapist variability. Standard deviations were used to evaluate marker placement variability. Locally weighted regression smoothing with alpha-adjusted serial T tests was used to assess kinematic similarities. All five models had similar variability; however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. P-value curves for the gait cycle were used to assess kinematic similarities; the duPont and Oxford models had the most similar kinematics. All models demonstrated similar marker placement variability. Lower variability was noted in the sagittal and coronal planes compared with rotation in the transverse plane, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.
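
    The locally weighted regression smoothing used for the kinematic comparison can be sketched as a tricube-weighted local linear fit (LOWESS). This is a generic implementation under stated assumptions; the smoothing fraction and the synthetic joint-angle curve are illustrative, not the study's settings:

```python
import numpy as np

def lowess(x, y, frac=0.3):
    """Locally weighted linear regression (LOWESS) smoother.

    For each point, fit a weighted least-squares line through the
    nearest frac*n neighbours, weighted by the tricube kernel.
    """
    n = len(x)
    k = max(2, int(frac * n))
    smoothed = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                        # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3    # tricube weights
        A = np.stack([np.ones(k), x[idx]], axis=1)
        W = np.diag(w)
        # weighted normal equations: (A^T W A) beta = A^T W y
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        smoothed[i] = beta[0] + beta[1] * x[i]
    return smoothed

# Example: smooth a noisy joint-angle curve over the gait cycle
t = np.linspace(0, 1, 101)                       # fraction of gait cycle
angle = 10 * np.sin(2 * np.pi * t)               # idealized angle (deg)
noisy = angle + np.random.default_rng(0).normal(0, 1, t.size)
smooth = lowess(t, noisy, frac=0.2)
```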

  17. Inferring Aquifer Transmissivity from River Flow Data

    NASA Astrophysics Data System (ADS)

    Trichakis, Ioannis; Pistocchi, Alberto

    2016-04-01

    Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to the stream flow. Under the assumption that base-flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, and identified on the basis of the 1:1,500,000 scale Hydrogeological map of Europe. To this end, Master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. Firstly, the model analyses the stream flow time-series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments, where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity for an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post processing involves the calculation of geometrical parameters of the watershed through a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus it calculates many different values of recession index for each station. 
After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large-scale (e.g. continental) groundwater modelling. The above procedure allowed the calculation of transmissivity values for a large share of European aquifers, ranging from Tmin = 4.13E-04 m²/d to Tmax = 8.12E+03 m²/d, with an average value Taverage = 9.65E+01 m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European two-dimensional shallow groundwater flow model.
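
    The core regression step, fitting the logarithm of flow against time over a recession segment to obtain the recession index, might look as follows (the synthetic recession limb is an illustrative assumption; the USGS RECESS procedure additionally screens segments and builds the MRC from many such fits):

```python
import numpy as np

def recession_index(t_days, flow):
    """Fit log10(flow) against time over a recession segment and
    return the recession index: days per one log cycle of decline."""
    slope, intercept = np.polyfit(t_days, np.log10(flow), 1)
    return -1.0 / slope

# Synthetic recession limb: flow declines one log cycle every 45 days
t = np.arange(0, 30)
q = 100.0 * 10 ** (-t / 45.0)
k = recession_index(t, q)   # recovers 45 days per log cycle
```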

  18. Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment

    PubMed Central

    Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael

    2015-01-01

    Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology has incorporated marker-less motion capture devices to simulate human movement in game play. Using the Kinect sensor and Microsoft SDK, this research aimed to estimate the mechanical work performed by the human body and the subsequent metabolic energy using predictive algorithmic models. Nineteen university students participated in a repeated-measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system, with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov equations as adjusted by de Leva, with adjustment made for posture cost. A GPML toolbox implementation of Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance in predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. 
When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
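
    The Gaussian Process Regression evaluated here (via the GPML toolbox) can be illustrated with a minimal NumPy sketch of the GP posterior mean under an RBF kernel. The toy mechanical-work/metabolic-cost values and the kernel settings are assumptions for illustration, not the study's features or hyperparameters:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2, length=1.0):
    """Posterior mean of a zero-mean GP with RBF kernel and
    Gaussian observation noise."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length)
    return Ks @ np.linalg.solve(K, y_train)

# Toy example: predict metabolic cost from segmental mechanical work
work = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # mechanical work (J)
cost = np.array([4.0, 8.0, 12.0, 16.0, 20.0])     # metabolic cost (kJ)
pred = gp_predict(work, cost, np.array([25.0]), length=15.0)
```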

  19. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    PubMed

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
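
    A segmented regression model for an interrupted time series is typically parameterized with a baseline level and trend plus a level change and a trend change at the intervention point. A minimal sketch on synthetic monthly data (the series, intervention month, and coefficients are illustrative assumptions, not from the decision-support study):

```python
import numpy as np

# Synthetic monthly outcome series with an intervention at month 24
rng = np.random.default_rng(1)
t = np.arange(48)
post = (t >= 24).astype(float)             # indicator: after intervention
t_post = np.where(t >= 24, t - 24, 0)      # months since intervention
y = 50 + 0.5 * t - 8 * post - 0.3 * t_post + rng.normal(0, 1, t.size)

# Design matrix: intercept, baseline trend, level change, trend change
X = np.stack([np.ones_like(t, dtype=float), t, post, t_post], axis=1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] estimates the immediate level change (about -8),
# beta[3] the change in slope after the intervention (about -0.3)
```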

  20. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose, non-contrast, non-ECG-gated chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
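
    The threshold-based scoring described above can be sketched per slice. This assumes the standard Agatston density-weighting bins applied within the aorta mask at the paper's elevated 160 HU threshold; the function name and toy slice are illustrative, not the published implementation:

```python
import numpy as np

def agatston_slice(hu, mask, pixel_area_mm2, threshold=160):
    """Agatston-style score for one slice: area of above-threshold
    voxels inside the aorta mask, weighted by peak density
    (standard bins: 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4)."""
    cal = (hu >= threshold) & mask
    if not cal.any():
        return 0.0
    peak = hu[cal].max()
    weight = 1 + min(int(peak // 100) - 1, 3)
    return cal.sum() * pixel_area_mm2 * weight

# Toy slice: a 3-pixel lesion of 300 HU inside the aorta mask
hu = np.zeros((4, 4))
hu[1, 1:4] = 300.0
mask = np.ones((4, 4), dtype=bool)
score = agatston_slice(hu, mask, pixel_area_mm2=0.5)   # 3 px * 0.5 mm² * weight 3
```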

  1. Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure.

    PubMed

    Cunningham, Ryan J; Harding, Peter J; Loram, Ian D

    2017-02-01

    Despite widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation, and shape registration to MRI-matched ultrasound images, via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose this approach is applicable generally to segment, extrapolate and visualise deep muscle structure, and analyse statistical features online.

  2. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    PubMed

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve of 0.59% (95% CI: [0.50%, 0.67%]) at the voxel level, for clinically relevant false positive rates of 1% and below. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS outperformed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. 
Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78%]) and the radiologist 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
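
    The voxel-level logistic regression at the heart of OASIS can be illustrated generically: multimodal intensities enter as features, and the fitted coefficient weights yield a lesion probability for each voxel. A self-contained sketch on synthetic features (this is not the OASIS feature set, which also includes smoothed volumes and interaction terms):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
# Toy multimodal features per voxel, e.g. normalized FLAIR, T2, T1
X = rng.normal(size=(n, 3))
true_w = np.array([2.0, 1.0, -1.5])
true_b = -1.0
p_true = 1 / (1 + np.exp(-(X @ true_w + true_b)))
lesion = rng.random(n) < p_true          # simulated voxel labels

# Fit logistic regression by gradient descent on the log-loss
w = np.zeros(3)
b = 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # voxel-level lesion probability
    grad_w = X.T @ (p - lesion) / n
    grad_b = (p - lesion).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b
```

The explicit coefficients `w`, `b` are what make such a model reusable across imaging studies, as the abstract notes.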

  3. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used the multivariate delta method (MDM) and the bootstrapping method (BM) to construct CIs around relative changes in level and trend, and around absolute changes in outcome, based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. The MDM and the BM produced similar results. The BM is preferred for calculating CIs of relative changes in outcomes of time series studies because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
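
    The bootstrapping approach can be sketched as a residual bootstrap of the relative level change from a segmented regression fit. This simplified version omits the autocorrelation correction the paper applies, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(36)
post = (t >= 18).astype(float)
t_post = np.where(t >= 18, t - 18, 0)
X = np.stack([np.ones_like(t, dtype=float), t, post, t_post], axis=1)
y = 20 + 0.2 * t - 5 * post + rng.normal(0, 1, t.size)

def rel_level_change(X, y):
    """Level change relative to the counterfactual level at the
    intervention point, from segmented regression coefficients."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    counterfactual = b[0] + b[1] * 18
    return b[2] / counterfactual

est = rel_level_change(X, y)

# Residual bootstrap: resample residuals, refit, recompute the estimate
b, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ b
res = y - fit
boot = np.array([rel_level_change(X, fit + rng.choice(res, res.size))
                 for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5])    # 95% bootstrap CI
```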

  4. [Transmission disequilibrium test for nonsyndromic cleft lip and palate and segment homeobox gene-1 gene].

    PubMed

    Wu, Ping-An; Li, Yun-Liang; Wu, Han-Jiang; Wang, Kai; Fan, Guo-Zheng

    2007-09-01

    To investigate the relationship between muscle segment homeobox gene-1 (MSX1) and the genetic susceptibility of nonsyndromic cleft lip and palate (NSCLP) in Hunan Hans. One microsatellite DNA marker CA repeat in MSX1 intron region was used as genetic marker. The genotypes of 387 members in 129 NSCLP nuclear family trios were analyzed by polymerase chain reaction (PCR) and denaturing polyacrylamide gel electrophoresis. Then transmission disequilibrium test (TDT) and Logistic regression analysis were used to conduct association analysis. TDT analysis confirmed that CA4 allele in CL/P and CPO groups preferentially transmitted to the affected offspring (P = 0.018, P = 0.041). Logistic regression analysis indicated that the recessive model of inheritance was supported, and CA4 itself or CA4 acting as a marker for a disease allele or haplotype was inherited in a recessive fashion (P = 0.009). MSX1 gene is associated with NSCLP, and MSX1 gene may be directly involved either in the etiology of NSCLP or in linkage disequilibrium with disease-predisposing sites.
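
    The TDT itself reduces to a McNemar-type chi-square test on allele transmission counts from heterozygous parents. A minimal sketch (the counts shown are hypothetical, not the study's CA4 data):

```python
from scipy.stats import chi2

def tdt(transmitted, untransmitted):
    """Transmission disequilibrium test: counts of an allele
    transmitted vs. not transmitted from heterozygous parents
    to affected offspring; chi-square with 1 df."""
    stat = (transmitted - untransmitted) ** 2 / (transmitted + untransmitted)
    p = chi2.sf(stat, df=1)
    return stat, p

# Hypothetical counts: allele transmitted 60 times, not transmitted 38
stat, p = tdt(60, 38)
```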

  5. Dreams Fulfilled and Shattered: Determinants of Segmented Assimilation in the Second Generation*

    PubMed Central

    Haller, William; Portes, Alejandro; Lynch, Scott M.

    2013-01-01

    We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early adulthood on predictors measured three and six years earlier. The Children of Immigrants Longitudinal Study (CILS), used for the analysis, allows us to establish a clear temporal order among exogenous predictors and the two dependent variables. We also construct a Downward Assimilation Index (DAI), based on six indicators and regress it on the same set of predictors. Results confirm a pattern of segmented assimilation in the second generation, with a significant proportion of the sample experiencing downward assimilation. Predictors of the latter are the obverse of those of educational and occupational achievement. Significant interaction effects emerge between these predictors and early school contexts, defined by different class and racial compositions. Implications of these results for theory and policy are examined. PMID:24223437

  6. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    PubMed

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.

  7. Computer-aided pulmonary image analysis in small animal models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.

  8. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    PubMed

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the [Formula: see text] measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. Average [Formula: see text] scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The [Formula: see text] score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. 
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
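
    Tolerance-window evaluation of a segmentation can be sketched as greedy matching of detected event times to reference times; a simplified version of such scoring (the window size and event times are illustrative, not the challenge's exact protocol):

```python
def score_segmentation(ref, det, tol=0.06):
    """Match detected event onset times (s) to reference times within
    a +/- tol window; return sensitivity, positive predictivity, F1."""
    ref, det = sorted(ref), sorted(det)
    used = [False] * len(det)
    tp = 0
    for r in ref:
        for j, d in enumerate(det):
            if not used[j] and abs(d - r) <= tol:
                used[j] = True      # each detection matches at most once
                tp += 1
                break
    se = tp / len(ref) if ref else 0.0
    ppv = tp / len(det) if det else 0.0
    f1 = 2 * se * ppv / (se + ppv) if se + ppv else 0.0
    return se, ppv, f1

# Two of three detections fall within the 60 ms tolerance window
se, ppv, f1 = score_segmentation([0.10, 0.45, 0.80], [0.12, 0.47, 1.20])
```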

  9. MIMoSA: An Automated Method for Intermodal Segmentation Analysis of Multiple Sclerosis Brain Lesions.

    PubMed

    Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T

    2018-03-08

    Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression in addition to mean structure to model the probability that a lesion is contained in each voxel, is proposed. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital, in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded an average Sørensen-Dice coefficient (DSC) of 0.57 and partial AUC of 0.68 calculated with false positive rates up to 1%. This was superior to performance using OASIS and LesionTOADS. The proposed method also performed competitively in the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.

  10. Tobit analysis of vehicle accident rates on interstate highways.

    PubMed

    Anastasopoulos, Panagiotis Ch; Tarko, Andrew P; Mannering, Fred L

    2008-03-01

    There has been an abundance of research that has used Poisson models and their variants (negative binomial and zero-inflated models) to improve our understanding of the factors that affect accident frequencies on roadway segments. This study explores the application of an alternative method, tobit regression, by viewing vehicle accident rates directly (instead of frequencies) as a continuous variable that is left-censored at zero. Using data from vehicle accidents on Indiana interstates, the estimation results show that many factors relating to pavement condition, roadway geometrics, and traffic characteristics significantly affect vehicle accident rates.
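
    A tobit model for a left-censored outcome maximizes a likelihood that mixes a normal density for uncensored observations with a normal CDF mass at the censoring point. A minimal sketch on synthetic accident-rate data (the coefficients and single covariate are illustrative, not the Indiana dataset):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
latent = 1.0 + 2.0 * x + rng.normal(0, 1.5, n)   # latent accident rate
y = np.maximum(latent, 0.0)                      # observed, left-censored at 0

def negloglik(params):
    b0, b1, log_s = params
    s = np.exp(log_s)                # sigma > 0 via log parameterization
    mu = b0 + b1 * x
    obs = y > 0
    ll = norm.logpdf(y[obs], mu[obs], s).sum()      # uncensored density
    ll += norm.logcdf((0.0 - mu[~obs]) / s).sum()   # censored probability mass
    return -ll

fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat = fit.x[:2]           # recover roughly (1.0, 2.0)
```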

  11. Clinical Prognosis of Superior Versus Basal Segment Stage I Non-Small Cell Lung Cancer.

    PubMed

    Handa, Yoshinori; Tsutani, Yasuhiro; Tsubokawa, Norifumi; Misumi, Keizo; Hanaki, Hideaki; Miyata, Yoshihiro; Okada, Morihito

    2017-12-01

    Despite the extensive size of the lower lobe, variations in the clinicopathologic features of its tumors have been little studied. The present study investigated the prognostic differences between tumors originating from the superior and basal segments of the lower lobe in patients with non-small cell lung cancer. Data of 134 patients who underwent lobectomy or segmentectomy with systematic nodal dissection for clinical stage I, radiologically solid-dominant, non-small cell lung cancer in the superior segment (n = 60) or basal segment (n = 74) between April 2007 and December 2015 were retrospectively reviewed. Factors affecting survival were assessed by the Kaplan-Meier method and Cox regression analyses. Prognosis in the superior segment group was worse than that in the basal segment group (5-year overall survival rates 62.6% versus 89.9%, p = 0.0072; and 5-year recurrence-free survival rates 54.4% versus 75.7%, p = 0.032). In multivariable Cox regression analysis, a superior segment tumor was an independent factor for poor overall survival (hazard ratio 3.33, 95% confidence interval: 1.22 to 13.5, p = 0.010) and recurrence-free survival (hazard ratio 2.90, 95% confidence interval: 1.20 to 7.00, p = 0.008). The superior segment group tended to have more pathologic mediastinal lymph node metastases than the basal segment group (15.0% versus 5.4%, p = 0.080). Tumor location was a prognostic factor for clinical stage I non-small cell lung cancer in the lower lobe. Patients with superior segment tumors had a worse prognosis than patients with basal segment tumors, with more metastases in mediastinal lymph nodes. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
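Survival comparisons of this kind rest on the Kaplan-Meier estimator, which multiplies conditional survival fractions at each event time. A minimal implementation on toy follow-up data (not the study's data):

```python
def kaplan_meier(times, events):
    # times: follow-up durations; events: 1 = event observed, 0 = censored.
    # Returns (time, survival probability) pairs at each event time.
    s, curve = 1.0, []
    for t in sorted({ti for ti, ei in zip(times, events) if ei}):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # events at t
        n = sum(1 for ti in times if ti >= t)                          # still at risk
        s *= 1.0 - d / n
        curve.append((t, s))
    return curve
```

Censored subjects (event flag 0) leave the risk set without forcing the curve down, which is the estimator's key property.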

  12. Segmentation of common carotid artery with active appearance models from ultrasound images

    NASA Astrophysics Data System (ADS)

    Yang, Xin; He, Wanji; Fenster, Aaron; Yuchi, Ming; Ding, Mingyue

    2013-02-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a new segmentation method is proposed and evaluated for outlining the common carotid artery (CCA) from transverse view images, which were sliced from three-dimensional ultrasound (3D US) with 1 mm inter-slice distance (ISD), to support the monitoring and assessment of carotid atherosclerosis. The data set consists of forty-eight 3D US images acquired from both left and right carotid arteries of twelve patients at two time points, all of whom had carotid stenosis of 60% or more at baseline. The 3D US data were collected at baseline and three-month follow-up, with seven patients treated with 80 mg atorvastatin and five with placebo. The baseline manual boundaries were used for Active Appearance Models (AAM) training, while the follow-up data were used for segmentation testing and evaluation. The segmentation results were compared with experts' manually outlined boundaries, as a surrogate for ground truth, for further evaluation. For the adventitia and lumen segmentations, the algorithm yielded Dice Coefficients (DC) of 92.06% ± 2.73% and 89.67% ± 3.66%, mean absolute distances (MAD) of 0.28 ± 0.18 mm and 0.22 ± 0.16 mm, and maximum absolute distances (MAXD) of 0.71 ± 0.28 mm and 0.59 ± 0.21 mm, respectively. The segmentation results were also evaluated via Pratt's figure of merit (FOM), with values of 0.61 ± 0.06 and 0.66 ± 0.05, which provides a quantitative measure of boundary similarity. Experimental results indicate that the proposed method can promote the use of carotid 3D US for fast, safe and economical monitoring of atherosclerotic disease progression and regression during therapy.
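The DC and MAD evaluation metrics quoted above can be written down directly. A small sketch on illustrative pixel sets and contours (not the study's data):

```python
import math

def dice(a, b):
    # a, b: sets of (row, col) pixels labeled as the structure
    return 2.0 * len(a & b) / (len(a) + len(b))

def mean_absolute_distance(contour_a, contour_b):
    # Symmetric mean of nearest-point distances between two boundary
    # point lists (a common definition of MAD for contours).
    def one_way(src, dst):
        return sum(min(math.hypot(x - u, y - v) for u, v in dst)
                   for x, y in src) / len(src)
    return 0.5 * (one_way(contour_a, contour_b) + one_way(contour_b, contour_a))
```

Dice measures region overlap, while MAD measures boundary placement; reporting both, as the abstract does, guards against metrics that agree on area but disagree on outline.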

  13. Impact of myocardial viability assessed by myocardial perfusion imaging on ventricular tachyarrhythmias in cardiac resynchronization therapy.

    PubMed

    Žižek, David; Cvijić, Marta; Ležaić, Luka; Salobir, Barbara Gužič; Zupan, Igor

    2013-12-01

    The presence of myocardial fibrosis is associated with ventricular tachyarrhythmia (VT) occurrence irrespective of cardiomyopathy etiology. The aim of our study was to evaluate the impact of global and regional viability on VTs in patients undergoing cardiac resynchronization therapy (CRT). Fifty-seven patients with advanced heart failure (age 62.3 ± 10.2; 38 men; 24 ischemic etiology) were evaluated using single-photon emission computed tomography myocardial perfusion imaging before CRT defibrillator device implantation. Global myocardial viability was determined by the number of viable segments in a 20-segment model. Regional viability was calculated as the mean tracer activity in the corresponding segments at the left ventricular (LV) lead position. LV lead segments were determined at implant venography using 2 projections (left anterior oblique 30° and right anterior oblique 30°) of coronary sinus tributaries. Patients were followed for 30 (24-34) months for the occurrence of VTs. VTs were registered in 18 patients (31.6%). Patients without VTs had significantly more viable segments (17.6 ± 2.35 vs 14.2 ± 4.0; P = .002) and higher regional myocardial viability at the LV lead position (66.1% ± 10.3% vs 54.8% ± 11.4% of tracer activity; P = .001) than those with VTs. In multivariate logistic regression models, the number of viable segments (OR = 0.66; 95% confidence interval (CI) 0.53-0.85; P = .001) and regional viability (OR = 0.90; 95% CI 0.85-0.97; P = .003) were the only independent predictors of VT occurrence. Global and regional myocardial viability are independently related to the occurrence of VTs in patients after CRT.

  14. A preliminary investigation of the relationships between historical crash and naturalistic driving.

    PubMed

    Pande, Anurag; Chand, Sai; Saxena, Neeraj; Dixit, Vinayak; Loy, James; Wolshon, Brian; Kent, Joshua D

    2017-04-01

    This paper describes a project that was undertaken using naturalistic driving data collected via Global Positioning System (GPS) devices to demonstrate a proof-of-concept for proactive safety assessments of crash-prone locations. The main hypothesis for the study is that the segments where drivers have to apply hard braking (higher jerks) more frequently might be the "unsafe" segments with more crashes over the long term. The linear referencing methodology in ArcMap was used to link the GPS data with roadway characteristic data of US Highway 101 northbound (NB) and southbound (SB) in San Luis Obispo, California. The process used to merge GPS data with quarter-mile freeway segments for traditional crash frequency analysis is also discussed in the paper. A negative binomial regression analysis showed that the proportion of high-magnitude jerks while decelerating on freeway segments (from the driving data) was significantly related to the long-term crash frequency of those segments. A random parameter negative binomial model with a uniformly distributed parameter for ADT and a fixed parameter for jerk provided a statistically significant estimate for quarter-mile segments. The results also indicated that roadway curvature and the presence of an auxiliary lane are not significantly related to crash frequency for the highway segments under consideration. The results from this exploration are promising since the data used to derive the explanatory variable(s) can be collected using most off-the-shelf GPS devices, including many smartphones. Copyright © 2017 Elsevier Ltd. All rights reserved.
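The study's explanatory variable is the proportion of high-magnitude jerks during deceleration. A simplified sketch of deriving such a proportion from a 1 Hz GPS speed trace; the sampling interval and jerk threshold here are assumptions for illustration, not the paper's values:

```python
def hard_brake_fraction(speeds, dt=1.0, jerk_threshold=-1.0):
    # speeds: m/s sampled every dt seconds (e.g., a 1 Hz GPS trace).
    # Jerk is the time derivative of acceleration; we flag intervals
    # where the vehicle is decelerating AND jerk is strongly negative.
    accel = [(s1 - s0) / dt for s0, s1 in zip(speeds, speeds[1:])]
    jerks = [(a1 - a0) / dt for a0, a1 in zip(accel, accel[1:])]
    hard = sum(1 for a, j in zip(accel[1:], jerks)
               if a < 0 and j < jerk_threshold)
    return hard / len(jerks) if jerks else 0.0
```

Aggregating this fraction per quarter-mile segment would yield the kind of covariate the negative binomial crash-frequency model uses.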

  15. Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model

    DTIC Science & Technology

    2015-06-01

    ...the relationship between the calculated SAMA potential and the actual 2014 performance. The scatterplot in Figure 8 shows a strong linear relationship between the SAMA calculated potential and the contracting achievement for 2014, with an R-squared value of 0.871. Simple Linear Regression of

  16. Geometric dimension model of virtual astronaut body for ergonomic analysis of man-machine space system

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou

    2012-07-01

    It is very important to clarify the geometric characteristics of human body segments and to construct analysis models for ergonomic design and the application of ergonomic virtual humans. The typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curves were fitted between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, as these two parameters have high correlation with the other parameters of the human body. By comparison with the conventional regression curves, the present regression equations with the seven trunk parameters are more accurate in forecasting the geometric dimensions of the head, neck, height and the four limbs with high precision. Therefore, this is greatly valuable for ergonomic design and analysis of man-machine systems. This result will be very useful for astronaut body model analysis and application.

  17. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    PubMed

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
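The k-means step can be illustrated on raw intensities. A minimal 1-D k-means sketch; the paper's fuzzy variant and frequency-domain pre-filtering are omitted, and the intensity values are illustrative:

```python
def kmeans_1d(values, k=2, iters=50):
    # Initialize centers spread evenly across the intensity range,
    # then alternate assignment and center-update steps.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda idx: abs(v - centers[idx]))
            clusters[nearest].append(v)
        # keep an empty cluster's center where it was
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def segment(values, centers):
    # label each pixel with the index of its nearest cluster center
    return [min(range(len(centers)), key=lambda idx: abs(v - centers[idx]))
            for v in values]
```

For lesion images the same loop runs over pixel intensities (or color features), with one cluster capturing lesion and the other background.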

  18. Multi-scale Gaussian representation and outline-learning based cell image segmentation.

    PubMed

    Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli

    2013-01-01

    High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach of image enhancement and coefficient of variation of multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.

  19. Multi-scale Gaussian representation and outline-learning based cell image segmentation

    PubMed Central

    2013-01-01

    Background High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. Methods We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach of image enhancement and coefficient of variation of multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. Results and conclusions We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488

  20. Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography.

    PubMed

    Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin

    2017-12-01

    Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of 22 microns for the vessel wall, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing time is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
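With the lumen parameterized as radial distances from the catheter centroid, luminal area follows directly from the polar contour. A sketch using the shoelace formula, assuming uniform angular sampling (the network's radii per A-line would be the input in practice):

```python
import math

def lumen_area(radii):
    # radii: predicted lumen radius at len(radii) evenly spaced angles,
    # measured from the catheter centroid (polar parameterization).
    n = len(radii)
    pts = [(r * math.cos(2 * math.pi * i / n), r * math.sin(2 * math.pi * i / n))
           for i, r in enumerate(radii)]
    # shoelace formula over the closed polygonal contour
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0
```

For a constant radius the polygon area converges to the circle area as the angular sampling gets finer.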

  1. Characteristics of the road and surrounding environment in metropolitan shopping strips: association with the frequency and severity of single-vehicle crashes.

    PubMed

    Stephan, Karen L; Newstead, Stuart V

    2014-01-01

    Modeling crash risk in urban areas is more complicated than in rural areas due to the complexity of the environment and the difficulty obtaining data to fully characterize the road and surrounding environment. Knowledge of factors that impact crash risk and severity in urban areas can be used for countermeasure development and the design of risk assessment tools for practitioners. This research aimed to identify the characteristics of the road and roadside, surrounding environment, and sociodemographic factors associated with single-vehicle crash (SVC) frequency and severity in complex urban environments, namely, strip shopping center road segments. A comprehensive evidence-based list of data required for measuring the influence of the road, roadside, and other factors on crash risk was developed. The data included a broader range of factors than those traditionally considered in accident prediction models. One hundred and forty-two strip shopping segments located on arterial roads in metropolitan Melbourne, Australia, were identified. Police-reported casualty data were used to determine how many SVCs occurred on the segments between 2005 and 2009. Data describing segment characteristics were collected from a diverse range of sources; for example, administrative government databases (traffic volume, speed limit, pavement condition, sociodemographic data, liquor licensing), detailed maps, on-line image sources, and digital images of arterial roads collected for the Victorian state road authority. Regression models for count data were used to identify factors associated with SVC frequency. Logistic regression was used to determine factors associated with serious and fatal outcomes. One hundred and seventy SVCs occurred on the 142 selected road segments during the 5-year study period.
A range of factors including traffic exposure, road cross section (curves, presence of median), road type, requirement for sharing the road with other vehicle types (trams and bicycles), roadside poles, and local amenities were associated with SVC frequency. A different set of risk factors was associated with the odds of a crash leading to a severe outcome: segment length, road cross section (curves, carriageway width), pavement condition, local amenities, and vehicle and driver factors. The presence of curves was the only factor associated with both SVC frequency and severity. A range of risk factors was associated with SVC frequency and severity in complex urban areas (metropolitan shopping strips), including traditionally studied characteristics such as traffic density and road design but also less commonly studied characteristics such as local amenities. Future behavioral research is needed to further investigate how and why these factors change the risk and severity of crashes before effective countermeasures can be developed.

  2. Characterizing the spatial distribution of ambient ultrafine particles in Toronto, Canada: A land use regression model.

    PubMed

    Weichenthal, Scott; Van Ryswyk, Keith; Goldstein, Alon; Shekarrizfard, Maryam; Hatzopoulou, Marianne

    2016-01-01

    Exposure models are needed to evaluate the chronic health effects of ambient ultrafine particles (UFPs; <0.1 μm). We developed a land use regression model for ambient UFPs in Toronto, Canada using mobile monitoring data collected during summer/winter 2010-2011. In total, 405 road segments were included in the analysis. The final model explained 67% of the spatial variation in mean UFPs and included terms for the logarithm of distances to highways, major roads, the central business district, Pearson airport, and bus routes as well as variables for the number of on-street trees, parks, open space, and the length of bus routes within a 100 m buffer. There was no systematic difference between measured and predicted values when the model was evaluated in an external dataset, although the R² value decreased (R² = 50%). This model will be used to evaluate the chronic health effects of UFPs using population-based cohorts in the Toronto area. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
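Land use regression fits measured concentrations against spatial predictors such as log-distances to sources. A single-predictor sketch on synthetic, exactly log-linear data; the real model combined many predictors and buffer variables:

```python
import math

def ols_fit(x, y):
    # simple least squares y = a + b*x, plus R^2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical segments: distance to highway (m) and a UFP count
# (particles/cm^3) constructed to be exactly log-linear for the demo.
dist = [50.0, 100.0, 200.0, 400.0, 800.0]
ufp = [40000.0 - 5000.0 * math.log(d) for d in dist]
a, b, r2 = ols_fit([math.log(d) for d in dist], ufp)
```

Real monitoring data would of course not fit perfectly; the reported R² of 67% (and 50% on external data) reflects the residual spatial variability such models leave unexplained.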

  3. Regression and statistical shape model based substitute CT generation for MRI alone external beam radiation therapy from standard clinical MRI sequences.

    PubMed

    Ghose, Soumya; Greer, Peter B; Sun, Jidi; Pichler, Peter; Rivest-Henault, David; Mitra, Jhimli; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Dowling, Jason A

    2017-10-27

    In MR only radiation therapy planning, generation of the tissue specific HU map directly from the MRI would eliminate the need for CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans generated from standard T2 weighted MR pelvic scans in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole pelvis CT and T2w MRI pairs were used as training images. Advanced tissue specific non-linear regression models to predict HU for the fat, muscle, bladder and air were created from co-registered CT-MRI image pairs. On a test case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization based clustering model. The CT bone in the training database that was most 'similar' to the segmented bone was then transformed with deformable registration to create the sCT component of the test case T2w MRI bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same IMRT dose plan was found to be 0.3% ± 0.9% (mean ± standard deviation) for 39 patients. The 3D Gamma pass rate was 99.8% ± 0.0% (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.

  4. Regression and statistical shape model based substitute CT generation for MRI alone external beam radiation therapy from standard clinical MRI sequences

    NASA Astrophysics Data System (ADS)

    Ghose, Soumya; Greer, Peter B.; Sun, Jidi; Pichler, Peter; Rivest-Henault, David; Mitra, Jhimli; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Dowling, Jason A.

    2017-11-01

    In MR only radiation therapy planning, generation of the tissue specific HU map directly from the MRI would eliminate the need for CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans generated from standard T2 weighted MR pelvic scans in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole pelvis CT and T2w MRI pairs were used as training images. Advanced tissue specific non-linear regression models to predict HU for the fat, muscle, bladder and air were created from co-registered CT-MRI image pairs. On a test case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization based clustering model. The CT bone in the training database that was most ‘similar’ to the segmented bone was then transformed with deformable registration to create the sCT component of the test case T2w MRI bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same IMRT dose plan was found to be 0.3% ± 0.9% (mean  ±  standard deviation) for 39 patients. The 3D Gamma pass rate was 99.8% ± 0.0% (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.

  5. The Impact of Policy Guidelines on Hospital Antibiotic Use over a Decade: A Segmented Time Series Analysis

    PubMed Central

    Chandy, Sujith J.; Naik, Girish S.; Charles, Reni; Jeyaseelan, Visalakshi; Naumova, Elena N.; Thomas, Kurien; Lundborg, Cecilia Stalsby

    2014-01-01

    Introduction Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Methods Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time-series analysis compared trends in antibiotic use across five adjacent time periods identified as ‘Segments,’ divided based on differing modes of guideline development and implementation: Segment 1– Baseline prior to antibiotic guidelines development; Segment 2– During preparation of guidelines and booklet dissemination; Segment 3– Dormant period with no guidelines dissemination; Segment 4– Booklet dissemination of revised guidelines; Segment 5– Booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in antibiotic use trends. Results Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (−0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p<0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed similar trends to overall use. Conclusion Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through intranet facilitated significant decline in use. 
Stakeholders and policy makers are urged to develop guidelines, ensure active dissemination and enable accessibility through computer networks to contain antibiotic use and decrease antibiotic pressure. PMID:24647339
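The segment-wise monthly trends reported above can be estimated by fitting a separate least-squares slope within each segment. A compact sketch on synthetic monthly data; the segment boundaries are illustrative, and the study's seasonality adjustment is omitted:

```python
def segment_slopes(y, starts):
    # y: monthly series (e.g., DDD per 100 bed-days);
    # starts: index where each segment begins (first entry must be 0).
    slopes = []
    edges = list(starts) + [len(y)]
    for s, e in zip(edges, edges[1:]):
        xs, ys = list(range(s, e)), y[s:e]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = (sum((x - mx) * (v - my) for x, v in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        slopes.append(b)
    return slopes
```

Comparing adjacent slopes is the essence of the pairwise segmented regression the study uses to show that the Segment 5 trend fell relative to Segment 4.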

  6. Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Xiubin; Gao, Yaozong; Shen, Dinggang, E-mail: dgshen@med.unc.edu

    2015-05-15

    Purpose: In image guided radiation therapy, it is crucial to localize the prostate in the daily treatment images quickly and accurately. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which can fully exploit valuable patient-specific information contained in the previous treatment images and can achieve improved performance in landmark detection and prostate segmentation. Methods: To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, in this method, a two-layer regression forest is trained as a detector for each target landmark. Once all the newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are further included into the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more and more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multiatlas random sample consensus (multiatlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (or even adjusted if needed) by clinicians before including it as a new shape example into the prostate shape dataset for helping localize the entire prostate in the next treatment image. 
Results: The experimental results on 330 images of 24 patients show the effectiveness of the authors’ proposed online update scheme in improving the accuracies of both landmark detection and prostate segmentation. Moreover, compared with the other state-of-the-art prostate segmentation methods, the authors’ method achieves the best performance. Conclusions: By appropriate use of valuable patient-specific information contained in the previous treatment images, the authors’ proposed online update scheme can obtain satisfactory results for both landmark detection and prostate segmentation.

  7. Estimation of Total Length of Femur from its Proximal and Distal Segmental Measurements of Disarticulated Femur Bones of Nepalese Population using Regression Equation Method.

    PubMed

    Khanal, Laxman; Shah, Sandip; Koirala, Sarun

    2017-03-01

    Length of long bones is taken as an important contributor for estimating one of the four elements of forensic anthropology, i.e., the stature of the individual. Since physical characteristics differ among different populations, population-specific studies are needed for estimating the total length of the femur from its segment measurements. Because the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done among 60 dry femora (30 from each side) without sex determination in an anthropometry laboratory. Along with maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. Statistical Package for Social Science (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the difference was not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated and found to have a linear relationship with the femoral length. With the help of a regression equation, femoral length can be calculated from the segmental measurements; the femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
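Deriving a regression equation from paired segment and length measurements is straightforward least squares. The paired values below are hypothetical, constructed to be exactly linear for the demo; they are not the study's Nepalese sample, and real coefficients must come from such population-specific data:

```python
def fit_line(x, y):
    # ordinary least squares for y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# HYPOTHETICAL paired measurements in mm: a proximal segment dimension
# (e.g., vertical head diameter) vs. maximum femoral length.
vhd = [42.0, 44.0, 46.0, 48.0, 50.0]
femur = [420.0, 436.0, 452.0, 468.0, 484.0]  # synthetic, exactly linear
a, b = fit_line(vhd, femur)
predicted_length = a + b * 45.0  # estimate femur length from a fragment
```

In practice the recovered femoral length would then be fed into a stature equation, chaining two population-specific regressions.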

  8. Interrupted time-series analysis of regulations to reduce paracetamol (acetaminophen) poisoning.

    PubMed

    Morgan, Oliver W; Griffiths, Clare; Majeed, Azeem

    2007-04-01

    Paracetamol (acetaminophen) poisoning is the leading cause of acute liver failure in Great Britain and the United States. Successful interventions to reduce harm from paracetamol poisoning are needed. To achieve this, the government of the United Kingdom introduced legislation in 1998 limiting the pack size of paracetamol sold in shops. Several studies have reported recent decreases in fatal poisonings involving paracetamol. We used interrupted time-series analysis to evaluate whether the recent fall in the number of paracetamol deaths differs from trends in fatal poisoning involving aspirin, paracetamol compounds, antidepressants, or nondrug poisoning suicide. We calculated directly age-standardised mortality rates for paracetamol poisoning in England and Wales from 1993 to 2004. We used an ordinary least-squares regression model divided into pre- and postintervention segments at 1999. The model included a term for autocorrelation within the time series. We tested for changes in the level and slope between the pre- and postintervention segments. To assess whether observed changes in the time series were unique to paracetamol, we compared against poisoning deaths involving compound paracetamol (not covered by the regulations), aspirin, and antidepressants, and against nondrug poisoning suicide deaths. We made this comparison by calculating the ratio of each comparison series to the paracetamol series and applying a segmented regression model to the ratios; no change in the ratio level or slope indicated no difference compared with the control series. There were about 2,200 deaths involving paracetamol. The age-standardised mortality rate rose from 8.1 per million in 1993 to 8.8 per million in 1997, subsequently falling to about 5.3 per million in 2004. After the regulations were introduced, deaths dropped by 2.69 per million (p = 0.003). Trends in the age-standardised mortality rate for paracetamol compounds, aspirin, and antidepressants were broadly similar to those for paracetamol, increasing until 1997 and then declining. Nondrug poisoning suicide also declined during the study period, but was highest in 1993. The segmented regression models showed that the age-standardised mortality rate for compound paracetamol dropped less after the regulations (p = 0.012) but declined more rapidly afterward (p = 0.031). However, age-standardised rates for aspirin and antidepressants fell in a similar way to paracetamol after the regulations, and nondrug poisoning suicide declined at a similar rate. The introduction of regulations to limit the availability of paracetamol coincided with a decrease in paracetamol-poisoning mortality. However, fatal poisoning involving aspirin, antidepressants, and, to a lesser degree, paracetamol compounds showed similar trends. This raises the question of whether the decline in paracetamol deaths was due to the regulations or was part of a wider trend of decreasing drug-poisoning mortality. We found little evidence to support the hypothesis that the 1998 regulations limiting pack size resulted in a greater reduction in poisoning deaths involving paracetamol than occurred for other drugs or for nondrug poisoning suicide.
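
The segmented (interrupted time-series) design above can be sketched by fitting separate least-squares lines to the pre- and post-1999 segments and comparing level and slope at the breakpoint. This is a simplification of the authors' model, which also included an autocorrelation term, and the yearly rates below are illustrative values shaped to resemble the reported endpoints, not the published series:

```python
# Simplified segmented-regression sketch: separate least-squares fits to the
# pre- and post-intervention segments, then level/slope changes at the break.
# Illustrative rates only; the authors' model also handled autocorrelation.

def ols(x, y):
    """Least-squares (slope, intercept) of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

years = list(range(1993, 2005))
rate = [8.1, 8.3, 8.4, 8.6, 8.8, 8.5, 7.9, 7.4, 6.9, 6.4, 5.8, 5.3]
break_year = 1999                          # first post-intervention year

pre = [(t, r) for t, r in zip(years, rate) if t < break_year]
post = [(t, r) for t, r in zip(years, rate) if t >= break_year]

b_pre, a_pre = ols([t for t, _ in pre], [r for _, r in pre])
b_post, a_post = ols([t for t, _ in post], [r for _, r in post])

slope_change = b_post - b_pre              # change in underlying trend
level_change = (a_post + b_post * break_year) - \
               (a_pre + b_pre * break_year)   # step change at the break
```

A negative level change with a newly negative slope is the signature the study tests for; applying the same model to ratio series against control poisons is what distinguishes a drug-specific effect from a general decline.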

  9. Interrupted Time-Series Analysis of Regulations to Reduce Paracetamol (Acetaminophen) Poisoning

    PubMed Central

    Morgan, Oliver W; Griffiths, Clare; Majeed, Azeem

    2007-01-01

    Background Paracetamol (acetaminophen) poisoning is the leading cause of acute liver failure in Great Britain and the United States. Successful interventions to reduce harm from paracetamol poisoning are needed. To achieve this, the government of the United Kingdom introduced legislation in 1998 limiting the pack size of paracetamol sold in shops. Several studies have reported recent decreases in fatal poisonings involving paracetamol. We used interrupted time-series analysis to evaluate whether the recent fall in the number of paracetamol deaths differs from trends in fatal poisoning involving aspirin, paracetamol compounds, antidepressants, or nondrug poisoning suicide. Methods and Findings We calculated directly age-standardised mortality rates for paracetamol poisoning in England and Wales from 1993 to 2004. We used an ordinary least-squares regression model divided into pre- and postintervention segments at 1999. The model included a term for autocorrelation within the time series. We tested for changes in the level and slope between the pre- and postintervention segments. To assess whether observed changes in the time series were unique to paracetamol, we compared against poisoning deaths involving compound paracetamol (not covered by the regulations), aspirin, and antidepressants, and against nondrug poisoning suicide deaths. We made this comparison by calculating the ratio of each comparison series to the paracetamol series and applying a segmented regression model to the ratios; no change in the ratio level or slope indicated no difference compared with the control series. There were about 2,200 deaths involving paracetamol. The age-standardised mortality rate rose from 8.1 per million in 1993 to 8.8 per million in 1997, subsequently falling to about 5.3 per million in 2004. After the regulations were introduced, deaths dropped by 2.69 per million (p = 0.003). Trends in the age-standardised mortality rate for paracetamol compounds, aspirin, and antidepressants were broadly similar to those for paracetamol, increasing until 1997 and then declining. Nondrug poisoning suicide also declined during the study period, but was highest in 1993. The segmented regression models showed that the age-standardised mortality rate for compound paracetamol dropped less after the regulations (p = 0.012) but declined more rapidly afterward (p = 0.031). However, age-standardised rates for aspirin and antidepressants fell in a similar way to paracetamol after the regulations, and nondrug poisoning suicide declined at a similar rate. Conclusions The introduction of regulations to limit the availability of paracetamol coincided with a decrease in paracetamol-poisoning mortality. However, fatal poisoning involving aspirin, antidepressants, and, to a lesser degree, paracetamol compounds showed similar trends. This raises the question of whether the decline in paracetamol deaths was due to the regulations or was part of a wider trend of decreasing drug-poisoning mortality. We found little evidence to support the hypothesis that the 1998 regulations limiting pack size resulted in a greater reduction in poisoning deaths involving paracetamol than occurred for other drugs or for nondrug poisoning suicide. PMID:17407385

  10. Assessment of LVEF using a new 16-segment wall motion score in echocardiography.

    PubMed

    Lebeau, Real; Serri, Karim; Lorenzo, Maria Di; Sauvé, Claude; Le, Van Hoai Viet; Soulières, Vicky; El-Rayes, Malak; Pagé, Maude; Zaïani, Chimène; Garot, Jérôme; Poulin, Frédéric

    2018-06-01

    The Simpson biplane and 3D methods by transthoracic echocardiography (TTE), radionuclide angiography (RNA), and cardiac magnetic resonance imaging (CMR) are the most accepted techniques for left ventricular ejection fraction (LVEF) assessment. The wall motion score index (WMSI) by TTE is an accepted complement. However, converting WMSI to LVEF requires a regression equation, which may limit its use. In this retrospective study, we aimed to validate a new method to derive LVEF from the wall motion score in 95 patients. The new score attributes a segmental EF to each LV segment based on its wall motion score and averages all 16 segmental EFs into a global LVEF. This segmental EF score was calculated on TTE in 95 patients, with RNA used as the reference LVEF method. LVEF using the new segmental EF 15-40-65 score on TTE was compared to the reference method using linear regression and Bland-Altman analyses. The median LVEF was 45% (interquartile range 32-53%; range 15-65%). Our new segmental EF 15-40-65 score derived on TTE correlated strongly with RNA-LVEF (r = 0.97). Overall, the new score resulted in good agreement of LVEF compared with RNA (mean bias 0.61%). The standard deviation of the distribution of inter-method differences for the comparison of the new score with RNA was 6.2%, indicating good precision. LVEF assessment using segmental EF derived from the wall motion score applied to each of the 16 LV segments shows excellent correlation and agreement with a reference method. © 2018 The authors.
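
A sketch of the segmental-EF idea: each of the 16 LV segments receives an EF according to its wall-motion score, and the 16 values are averaged into a global LVEF. The score-to-EF mapping below (normal → 65%, hypokinetic → 40%, akinetic → 15%) is our illustrative reading of the "15-40-65" label, not a verified protocol:

```python
# Sketch of the segmental-EF score: assign each of the 16 LV segments an EF
# from its wall-motion score, then average. The 15-40-65 mapping below
# (1 = normal -> 65%, 2 = hypokinetic -> 40%, 3 = akinetic -> 15%) is an
# assumed illustration of the score's name, not the validated protocol.

SEGMENT_EF = {1: 65.0, 2: 40.0, 3: 15.0}   # wall-motion score -> segmental EF (%)

def global_lvef(scores):
    """Average the 16 segmental EFs into one global LVEF estimate (%)."""
    if len(scores) != 16:
        raise ValueError("expected one wall-motion score per LV segment (16)")
    return sum(SEGMENT_EF[s] for s in scores) / 16.0

normal_heart = [1] * 16          # all segments normal -> global LVEF 65%
```

The appeal of this formulation, as the abstract notes, is that no patient-level regression equation is needed: the averaging itself yields an LVEF in familiar units.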

  11. Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities

    USGS Publications Warehouse

    Duross, Christopher; Olig, Susan; Schwartz, David

    2015-01-01

    Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
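
As a concrete example of the magnitude regressions evaluated, one widely used surface-rupture-length relation (Wells and Coppersmith, 1994, all slip types) has the form M = 5.08 + 1.16 log10(SRL in km). Whether that particular regression was among the WGUEP's 19 is not stated here; the sketch below is purely illustrative of the functional form:

```python
import math

# Illustrative magnitude-length regression of the kind the WGUEP evaluated:
# M = a + b * log10(SRL), with Wells & Coppersmith (1994) all-slip-type
# coefficients as an example. The ~0.3-0.4 unit spread across the 19
# regressions discussed above is exactly the epistemic uncertainty at issue.

def magnitude_from_srl(srl_km, a=5.08, b=1.16):
    """Moment magnitude from surface-rupture length via M = a + b*log10(SRL)."""
    return a + b * math.log10(srl_km)
```

For a 10 km rupture this example relation gives M = 6.24; swapping in a different published regression shifts the estimate by up to several tenths of a unit, which is why the WGUEP weighed multiple relations.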

  12. Left ventricular ejection fraction to predict early mortality in patients with non-ST-segment elevation acute coronary syndromes.

    PubMed

    Bosch, Xavier; Théroux, Pierre

    2005-08-01

    Improvement in risk stratification of patients with non-ST-segment elevation acute coronary syndrome (ACS) is a gateway to more judicious treatment. This study examines whether the routine determination of left ventricular ejection fraction (EF) adds significant prognostic information to currently recommended stratifiers. Several predictors of in-hospital mortality were prospectively characterized in a registry study of 1104 consecutive patients admitted for an ACS for whom an EF was determined. Multiple regression models were constructed using currently recommended clinical, electrocardiographic, and blood marker stratifiers, and values of EF were incorporated into the models. Age, ST-segment shifts, elevation of cardiac markers, and the Thrombolysis in Myocardial Infarction (TIMI) risk score all predicted mortality (P < .0001). Adding EF to the model improved the prediction of mortality (C statistic 0.73 vs 0.67). The odds of death increased by a factor of 1.042 for each 1% decrement in EF. By receiver-operating-characteristic analysis, an EF cutoff of 48% provided the best predictive value. Mortality rates were 3.3 times higher within each TIMI risk score stratum in patients with an EF of 48% or lower compared with those with higher values. The TIMI risk score predicts in-hospital mortality in a broad population of patients with ACS; the further consideration of EF adds significant prognostic information.
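
The reported odds factor compounds multiplicatively, so a 10-point lower EF corresponds to roughly 1.042^10 ≈ 1.5 times the odds of death. A one-line sketch of that arithmetic:

```python
# The reported odds multiplier of 1.042 per 1% EF decrement compounds
# multiplicatively: a 10-point lower EF raises the odds of death by
# 1.042**10, i.e. roughly 1.5-fold. Simple sketch of that arithmetic.

def death_odds_multiplier(ef_decrement_pct, per_point=1.042):
    """Multiplicative change in the odds of death for a given EF decrement."""
    return per_point ** ef_decrement_pct
```

This is the usual interpretation of a logistic-regression coefficient reported on the odds scale; the compounding explains why modest per-point factors translate into clinically meaningful differences across EF strata.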

  13. The spectral details of observed and simulated short-term water vapor feedbacks of El Niño-Southern Oscillation

    NASA Astrophysics Data System (ADS)

    Pan, F.; Huang, X.; Chen, X.

    2015-12-01

    The radiative kernel method has been validated and widely used in the study of climate feedbacks. This study uses spectrally resolved longwave radiative kernels to examine the short-term water vapor feedbacks associated with ENSO cycles. Using pre-industrial control simulations (a 500-year GFDL CM3 run and a 100-year NCAR CCSM4 run), we have constructed two sets of longwave spectral radiative kernels. We then composite El Niño, La Niña, and ENSO-neutral states and estimate the water vapor feedbacks associated with the El Niño and La Niña phases of the ENSO cycles in both simulations. The same analysis is also applied to 35-year (1979-2014) ECMWF ERA-Interim reanalysis data, which are treated as the observational results here. When the modeled and observed broadband feedbacks are compared, they show similar geographic patterns but with noticeable discrepancies in the contrast between the tropics and extra-tropics. In particular, in the El Niño phase, the feedback estimated from the reanalysis is much greater than those from the model simulations. Given the span of the observational data, we carry out a sensitivity test to explore the variability of feedbacks derived from 35-year records. To do so, we calculate the water vapor feedback within every 35-year segment of the GFDL CM3 control run by two methods: one is to composite El Niño or La Niña phases as mentioned above, and the other is to regress the TOA flux perturbation caused by water vapor change (δR_H2O) against the global-mean surface temperature anomaly. We find that the short-term feedback strengths derived from the composite method can change considerably from one segment to another, while the feedbacks from the regression method are less sensitive to the choice of segment and their strengths are also much smaller than those from the composite analysis. This study suggests that caution is warranted in order to infer long-term feedbacks from a few decades of observations. When the spectral details of the global-mean feedbacks are examined, more inconsistencies are revealed in many spectral bands, especially the H2O continuum absorption bands and window regions. These discrepancies can be traced back to differences between the observed and modeled water vapor profiles in response to tropical SST.
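
The regression method mentioned above amounts to taking the least-squares slope of the water-vapour-induced TOA flux perturbation against the global-mean surface-temperature anomaly. A minimal sketch with synthetic series (a perfectly linear 1.8 W m^-2 K^-1 feedback, chosen purely for illustration):

```python
# Sketch of the record's regression method: the feedback strength is the
# least-squares slope of the TOA flux perturbation from water vapour against
# the global-mean surface-temperature anomaly. Series below are synthetic.

def regression_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

dT = [-0.3, -0.1, 0.0, 0.1, 0.2, 0.4]   # K, synthetic temperature anomalies
dR = [1.8 * t for t in dT]              # W m^-2, an exact 1.8 W m^-2 K^-1 feedback

feedback = regression_slope(dT, dR)     # recovers the prescribed slope
```

The composite method instead differences the El Niño and La Niña states directly, which is why, as the record notes, it is far more sensitive to which 35-year segment happens to be sampled.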

  14. Impact of flavor attributes on consumer liking of Swiss cheese.

    PubMed

    Liggett, R E; Drake, M A; Delwiche, J F

    2008-02-01

    Although Swiss cheese is growing in popularity, no research has examined which flavor characteristics consumers desire in Swiss cheese; this was the main objective of the present study. To this end, a large group of commercially available Swiss-type cheeses (10 domestic Swiss cheeses, 4 domestic Baby Swiss cheeses, and one imported Swiss Emmenthal) were assessed both by 12 trained panelists for flavor and feeling factors and by 101 consumers for overall liking. In addition, a separate panel of 24 consumers rated the same cheeses for dissimilarity. On the basis of the liking ratings, the 101 consumers were segmented by cluster analysis into 2 groups: nondistinguishers (n = 40) and varying responders (n = 61). Partial least squares regression, a statistical modeling technique that relates 2 data sets (here, a set of descriptive analysis data and a set of consumer liking data), was used to determine which flavor attributes assessed by the trained panel were important variables in the overall liking of the cheeses by the varying responders. The model explained 93% of the liking variance on 3 normally distributed components and had 49% predictability. Diacetyl, whey, milk fat, and umami were found to be drivers of liking, whereas cabbage, cooked, and vinegar were drivers of disliking. Nutty flavor was not particularly important to liking, and it was present in only 2 of the cheeses. The dissimilarity ratings were combined with the liking ratings of both segments and analyzed by probabilistic multidimensional scaling. The ideals of the two segments completely overlapped, with the variance of the varying responders being smaller than that of the nondistinguishers. This model indicated that the Baby Swiss cheeses were closer to the consumers' ideals than were the other cheeses. Taken together, the 2 models suggest that the partial least squares regression failed to capture one or more attributes that contribute to consumer acceptance, even though the descriptive analysis of flavor and feeling factors accounted for 93% of the variance in the liking ratings. These findings indicate the flavor characteristics Swiss cheese producers should optimize, and minimize, to create cheeses that best match consumer desires.

  15. Modelling the Carbon Stocks Estimation of the Tropical Lowland Dipterocarp Forest Using LIDAR and Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Zaki, N. A. M.; Latif, Z. A.; Suratman, M. N.; Zainal, M. Z.

    2016-06-01

    Tropical forests hold a large stock of carbon in the global carbon cycle and contribute an enormous amount of above- and below-ground biomass. The carbon kept in the aboveground living biomass of trees is typically the largest pool and the one most directly impacted by anthropogenic factors such as deforestation and forest degradation. However, few studies have modelled carbon for tropical rain forests, and its quantification still carries uncertainties. Multiple linear regression (MLR) is one method for defining the relationship between field inventory measurements and statistical metrics extracted from remotely sensed data, here LiDAR and WorldView-3 (WV-3) imagery. This paper highlights the development of a model that fuses multispectral WV-3 data with LiDAR metrics to estimate the carbon of the tropical lowland Dipterocarp forest of the study area. The over-segmentation and under-segmentation values for this output are 0.19 and 0.11, respectively; the resulting D-value for the classification is 0.19, corresponding to 81% accuracy. Overall, this study produced significant correlation coefficients (r) between crown projection area (CPA) and carbon stocks (CS), between height from LiDAR (H_LDR) and CS, and between CPA and H_LDR of 0.671, 0.709, and 0.549, respectively. The CPA from the segmentation was found to be spatially representative, with a strong correlation between diameter at breast height (DBH) and carbon stocks (Pearson correlation p = 0.000, p < 0.01; r = 0.909), showing that the DBH and related predictors can improve inventory estimates of carbon using the multiple linear regression method. The study concluded that the integration of WV-3 imagery with the LiDAR-based CHM raster is useful for quantifying AGB and carbon stocks over a larger sample area of the lowland Dipterocarp forest.
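
A multiple linear regression of the kind used in the study can be sketched via the normal equations; the predictor and carbon values below are made up for illustration and are not the study's plot data:

```python
# Minimal multiple-linear-regression sketch in the spirit of the record:
# predict carbon stock from crown projection area (CPA) and LiDAR height via
# the normal equations. All sample values are hypothetical, not the study's.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    xs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        xs[r] = (M[r][3] - sum(M[r][c] * xs[c] for c in range(r + 1, 3))) / M[r][r]
    return xs

def fit_mlr(cpa, height, carbon):
    """Least-squares (b0, b1, b2) of carbon = b0 + b1*CPA + b2*height."""
    X = [[1.0, c, h] for c, h in zip(cpa, height)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, carbon)) for i in range(3)]
    return solve3(XtX, Xty)

b0, b1, b2 = fit_mlr(
    cpa=[10.0, 20.0, 30.0, 40.0, 15.0],      # hypothetical CPA values
    height=[5.0, 8.0, 12.0, 20.0, 7.0],      # hypothetical LiDAR heights (m)
    carbon=[14.5, 24.0, 35.0, 52.0, 20.0],   # hypothetical carbon stocks
)
```

In practice the study's predictors would be the segmentation-derived CPA and CHM heights; the fit machinery is the same.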

  16. Right ventricle functional parameters estimation in arrhythmogenic right ventricular dysplasia using a robust shape based deformable model.

    PubMed

    Oghli, Mostafa Ghelich; Dehlaghi, Vahab; Zadeh, Ali Mohammad; Fallahi, Alireza; Pooyan, Mohammad

    2014-07-01

    Assessment of cardiac right-ventricle function plays an essential role in the diagnosis of arrhythmogenic right ventricular dysplasia (ARVD). Among clinical tests, cardiac magnetic resonance imaging (MRI) is now becoming the most valid imaging technique for diagnosing ARVD. Fatty infiltration of the right ventricular free wall can be visible on cardiac MRI. Deriving right-ventricle functional parameters from cardiac MRI involves segmenting the right ventricle in each slice of the end-diastole and end-systole phases of the cardiac cycle, then calculating the end-diastolic and end-systolic volumes and, from them, the other functional parameters. The main challenge in this task is the segmentation step. We used a robust method based on a deformable model that uses shape information for segmentation of the right ventricle in short-axis MRI images. After segmenting the right ventricle from base to apex in the end-diastole and end-systole phases of the cardiac cycle, the right-ventricle volume in each phase is calculated and the ejection fraction derived. We performed a quantitative evaluation of the clinical cardiac parameters derived from the automatic segmentation by comparison against a manual delineation of the ventricles. The manually and automatically determined quantitative clinical parameters were statistically compared by means of linear regression, which fits a line to the data such that the root-mean-square error (RMSE) of the residuals is minimized. The results show a low RMSE for right-ventricle ejection fraction and volume (≤ 0.06 for RV EF, and ≤ 10 mL for RV volume). The segmentation results were also evaluated with four statistical measures: sensitivity, specificity, similarity index, and Jaccard index. The average similarity index is 86.87%. The mean Jaccard index is 83.85%, which indicates good segmentation accuracy. The average sensitivity is 93.9% and the mean specificity is 89.45%. These results show the reliability of the proposed method in cases where manual segmentation is impractical. The large shape variability of the right ventricle led us to use a shape-prior-based method, and this work could be extended with four-dimensional processing for determining the first ventricular slices.
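
Once the right ventricle is segmented slice by slice, the functional parameters are simple arithmetic: volumes by slice summation, then the ejection fraction from the end-diastolic and end-systolic volumes. A sketch with hypothetical slice areas (slice summation is a common approach; the record does not specify the exact volume formula used):

```python
# Once segmentation yields per-slice cavity areas, the functional parameters
# are simple arithmetic: volumes by slice summation, then ejection fraction.
# Slice areas and thickness below are hypothetical.

def ventricular_volume(slice_areas_mm2, slice_thickness_mm):
    """Right-ventricle volume (mL) by summing segmented slice areas."""
    return sum(slice_areas_mm2) * slice_thickness_mm / 1000.0  # mm^3 -> mL

def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

edv = ventricular_volume([1500.0] * 10, 8.0)   # 10 slices -> 120 mL end-diastolic
esv = ventricular_volume([750.0] * 10, 8.0)    # 10 slices -> 60 mL end-systolic
ef = ejection_fraction(edv, esv)               # 50% in this toy case
```

This is why segmentation accuracy dominates the error budget: every downstream parameter is a direct function of the segmented areas.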

  17. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    NASA Astrophysics Data System (ADS)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by how well they cope with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated for each image scale. In this paper we propose a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured by the F1-score and the Matthews correlation coefficient.

  18. A study of riders' noise exposure on Bay Area Rapid Transit trains.

    PubMed

    Dinno, Alexis; Powell, Cynthia; King, Margaret Mary

    2011-02-01

    Excessive noise exposure may present a hazard to hearing, cardiovascular, and psychosomatic health. Mass transit systems, such as the Bay Area Rapid Transit (BART) system, are potential sources of excessive noise. The purpose of this study was to characterize transit noise and riders' exposure to noise on the BART system using three dosimetry metrics. We made 268 dosimetry measurements on a convenience sample of 51 line segments. Dosimetry measures were modeled using linear and nonlinear multiple regression as functions of average velocity, tunnel enclosure, flooring, and wet weather conditions, and presented visually on a map of the BART system. This study provides evidence of hazardous levels of noise exposure in all three dosimetry metrics. L(eq) and L(max) measures indicate exposures well above the ranges associated with increased cardiovascular and psychosomatic health risks in the published literature. L(peak) measures indicate acute exposures hazardous to adult hearing on about 1% of line segment rides and acute exposures hazardous to child hearing on about 2% of such rides. The noise to which passengers are exposed may be due to train-specific conditions (velocity and flooring), but also to rail conditions (velocity and tunnels). These findings point to possible remediations (revised speed limits on longer segments and on segments enclosed by tunnels). The findings also suggest that specific rail segments could be improved for noise.

  19. Relation of Heart Rate and its Variability during Sleep with Age, Physical Activity, and Body Composition in Young Children

    PubMed Central

    Herzig, David; Eser, Prisca; Radtke, Thomas; Wenger, Alina; Rusterholz, Thomas; Wilhelm, Matthias; Achermann, Peter; Arhab, Amar; Jenni, Oskar G.; Kakebeeke, Tanja H.; Leeger-Aschmann, Claudia S.; Messerli-Bürgy, Nadine; Meyer, Andrea H.; Munsch, Simone; Puder, Jardena J.; Schmutz, Einat A.; Stülb, Kerstin; Zysset, Annina E.; Kriemler, Susi

    2017-01-01

    Background: Recent studies have claimed a positive effect of physical activity and body composition on vagal tone. In pediatric populations, there is a pronounced decrease in heart rate with age. While this decrease is often interpreted as an age-related increase in vagal tone, there is some evidence that it may be related to a decrease in intrinsic heart rate. This factor has not been taken into account in most previous studies. The aim of the present study was to assess the association between physical activity and/or body composition and heart rate variability (HRV) independently of the decline in heart rate in young children. Methods: Anthropometric measurements were taken in 309 children aged 2–6 years. Ambulatory electrocardiograms were collected over 14–18 h, comprising a full night, and accelerometry over 7 days. HRV was determined from three different night segments: (1) over 5 min during deep sleep identified automatically based on HRV characteristics; (2) during a 20 min segment starting 15 min after sleep onset; (3) over a 4-h segment between midnight and 4 a.m. Linear models were computed for HRV parameters with anthropometric and physical activity variables adjusted for heart rate and other confounding variables (e.g., age for physical activity models). Results: We found a decline in heart rate with increasing physical activity and decreasing skinfold thickness. HRV parameters decreased with increasing age, height, and weight in HR-adjusted regression models. These relationships were found only in segments of deep sleep detected automatically based on HRV or manually 15 min after sleep onset, but not in the 4-h segment with random sleep phases. Conclusions: Contrary to most previous studies, we found no increase of standard HRV parameters with age; however, when adjusted for heart rate, there was a significant decrease of HRV parameters with increasing age. Without knowing intrinsic heart rate, correct interpretation of HRV in growing children is impossible. PMID:28286485

  20. Relation of Heart Rate and its Variability during Sleep with Age, Physical Activity, and Body Composition in Young Children.

    PubMed

    Herzig, David; Eser, Prisca; Radtke, Thomas; Wenger, Alina; Rusterholz, Thomas; Wilhelm, Matthias; Achermann, Peter; Arhab, Amar; Jenni, Oskar G; Kakebeeke, Tanja H; Leeger-Aschmann, Claudia S; Messerli-Bürgy, Nadine; Meyer, Andrea H; Munsch, Simone; Puder, Jardena J; Schmutz, Einat A; Stülb, Kerstin; Zysset, Annina E; Kriemler, Susi

    2017-01-01

    Background: Recent studies have claimed a positive effect of physical activity and body composition on vagal tone. In pediatric populations, there is a pronounced decrease in heart rate with age. While this decrease is often interpreted as an age-related increase in vagal tone, there is some evidence that it may be related to a decrease in intrinsic heart rate. This factor has not been taken into account in most previous studies. The aim of the present study was to assess the association between physical activity and/or body composition and heart rate variability (HRV) independently of the decline in heart rate in young children. Methods: Anthropometric measurements were taken in 309 children aged 2-6 years. Ambulatory electrocardiograms were collected over 14-18 h, comprising a full night, and accelerometry over 7 days. HRV was determined from three different night segments: (1) over 5 min during deep sleep identified automatically based on HRV characteristics; (2) during a 20 min segment starting 15 min after sleep onset; (3) over a 4-h segment between midnight and 4 a.m. Linear models were computed for HRV parameters with anthropometric and physical activity variables adjusted for heart rate and other confounding variables (e.g., age for physical activity models). Results: We found a decline in heart rate with increasing physical activity and decreasing skinfold thickness. HRV parameters decreased with increasing age, height, and weight in HR-adjusted regression models. These relationships were found only in segments of deep sleep detected automatically based on HRV or manually 15 min after sleep onset, but not in the 4-h segment with random sleep phases. Conclusions: Contrary to most previous studies, we found no increase of standard HRV parameters with age; however, when adjusted for heart rate, there was a significant decrease of HRV parameters with increasing age. Without knowing intrinsic heart rate, correct interpretation of HRV in growing children is impossible.

  1. Public reporting influences antibiotic and injection prescription in primary care: a segmented regression analysis.

    PubMed

    Liu, Chenxi; Zhang, Xinping; Wan, Jie

    2015-08-01

    Inappropriate use and overuse of antibiotics and injections are serious threats to the global population, particularly in developing countries. In recent decades, public reporting of health care performance (PRHCP) has been an instrument for improving the quality of care. However, existing evidence shows a mixed effect of PRHCP. This study evaluated the effect of PRHCP on physicians' prescribing practices in a sample of primary care institutions in China, using segmented regression analysis to produce convincing evidence for health policy and reform. The PRHCP intervention was implemented in Qian City, starting on 1 October 2013. Performance data on prescription statistics were disclosed to patients and health workers monthly in 10 primary care institutions. A total of 326 655 valid outpatient prescriptions were collected, and monthly prescription counts (1st to 31st of each month) were used as the analytical units. The study involved multiple assessments of outcomes 13 months before and 11 months after the PRHCP intervention (a total of 24 data points). Segmented regression models showed downward trends from baseline in the use of antibiotics (coefficient = -0.64, P = 0.004), combined use of antibiotics (coefficient = -0.41, P < 0.001), and injections (coefficient = -0.5957, P = 0.001) after the PRHCP intervention. The average expenditure of patients had been increasing slightly each month before the intervention (coefficient = 0.8643, P < 0.001); the PRHCP intervention led to a temporary increase in average expenditure (coefficient = 2.20, P = 0.307) but slowed the ascending trend (coefficient = -0.45, P = 0.033). The prescription rates of antibiotics and injections after the intervention (about 50%) remained high. PRHCP showed positive effects on physicians' prescribing behaviour, considering the downward trends in the use of antibiotics and injections and in average expenditure through the intervention. However, the effect was not immediate; a lag time existed before the public reporting intervention worked. © 2015 John Wiley & Sons, Ltd.

  2. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

    Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained layer (SCL) damping treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. Ground resonance and air resonance analysis models are implemented for the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of the SCLs are used as design variables. Detailed numerical studies are presented for the aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared with the initial system.

  3. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    PubMed

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach for quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2 values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared with the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. A combined cut-off of 1.8 ms for madSD plus 68 ms for maxT2 yielded a diagnostic sensitivity of 75% and specificity of 80%, a performance similar to the LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in one model resulted in a diagnostic performance superior to the LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters provide additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to the traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to the Lake Louise criteria. • Adding maxT2 and madSD to LGE further increases diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.

  4. Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.

    PubMed

    Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C

    2017-07-01

    To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. A total of 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information; mean intensity values for consecutive edges, computed using horizontal and vertical analysis, were then fed into a fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, with corresponding sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6%. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. © 2017 American Association of Physicists in Medicine.

  5. Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram

    PubMed Central

    Kim, Jongin; Park, Hyeong-jun

    2016-01-01

    The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels: /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on a sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm determined the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels. This study on classification of imagined speech may contribute to the development of silent speech BCI systems. PMID:28097128
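    The per-trial pipeline above (split a trial into thirty segments, extract four statistics per segment, classify each segment, then label the trial by majority vote) can be sketched as follows. The data are synthetic, and an SVM stands in for the paper's extreme learning machine, which has no standard scikit-learn implementation.

```python
# Sketch of segment-wise feature extraction and majority-vote trial labeling.
import numpy as np
from scipy.stats import skew
from sklearn.svm import SVC

def segment_features(trial, n_segments=30):
    """Return an (n_segments, 4) array of mean, variance, std, skewness."""
    segs = np.array_split(trial, n_segments)
    return np.array([[s.mean(), s.var(), s.std(), skew(s)] for s in segs])

rng = np.random.default_rng(1)
# Two synthetic "vowel" classes differing in mean offset, 20 trials each
trials = [rng.normal(c, 1.0, 300) for c in (0.0, 1.5) for _ in range(20)]
labels = [0] * 20 + [1] * 20

X = np.vstack([segment_features(t) for t in trials])   # one row per segment
y = np.repeat(labels, 30)                              # segment-level labels
clf = SVC(kernel="rbf").fit(X, y)

def predict_trial(trial, clf):
    """Label a trial by the most frequent label among its segment outputs."""
    votes = clf.predict(segment_features(trial))
    return np.bincount(votes).argmax()

test_trial = rng.normal(1.5, 1.0, 300)
print(predict_trial(test_trial, clf))
```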

  6. Stature estimation from the lengths of the growing foot-a study on North Indian adolescents.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam; DiMaggio, John A

    2012-12-01

    Stature estimation is considered one of the basic parameters of the investigation process for unknown and commingled human remains in medico-legal casework. Race, age and sex are the other parameters that assist in this process. Stature estimation is of the utmost importance as it completes the biological profile of a person along with the other three parameters of identification. The present research is intended to formulate standards for stature estimation from foot dimensions in adolescent males from North India and to study the pattern of foot growth during the growing years. A total of 154 male adolescents from the northern part of India were included in the study. Besides stature, five anthropometric measurements were taken on each foot: the length from each toe (T1, T2, T3, T4, and T5, respectively) to pternion. The data were analyzed statistically using Student's t-test, Pearson's correlation, and linear and multiple regression analysis for estimation of stature and foot growth during ages 13-18 years. Correlation coefficients between stature and all the foot measurements were found to be highly significant and positive. Linear regression models and multiple regression models (with age as a co-variable) were derived for estimation of stature from the different measurements of the foot. The multiple regression models (with age as a co-variable) estimate stature with greater accuracy than the linear regression models for the 13-18 years age group. The study shows the growth pattern of the feet in North Indian adolescents and indicates that anthropometric measurements of the foot and its segments are valuable for estimation of stature in growing individuals of this population. Copyright © 2012 Elsevier Ltd. All rights reserved.
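    The contrast drawn above, between a simple regression on foot length and a multiple regression that adds age as a co-variable, can be illustrated as below. The data are synthetic and the coefficients do not reproduce the paper's formulae.

```python
# Hypothetical stature regression from one foot measurement (T1 length),
# with and without age as a covariate, mirroring the study design.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 154
age = rng.uniform(13, 18, n)                      # years
foot = 20 + 0.6 * age + rng.normal(0, 0.5, n)     # foot length, cm (synthetic)
stature = 60 + 4.0 * foot + 1.5 * age + rng.normal(0, 2.0, n)  # cm

simple = LinearRegression().fit(foot.reshape(-1, 1), stature)
multi = LinearRegression().fit(np.column_stack([foot, age]), stature)

print("R^2 foot only:", simple.score(foot.reshape(-1, 1), stature))
print("R^2 foot + age:", multi.score(np.column_stack([foot, age]), stature))
```

    Because growth confounds the foot-stature relation in adolescents, the model with age always explains at least as much variance, which is the pattern the study reports.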

  7. Classification of individual well-being scores for the determination of adverse health and productivity outcomes in employee populations.

    PubMed

    Shi, Yuyan; Sears, Lindsay E; Coberley, Carter R; Pope, James E

    2013-04-01

    Adverse health and productivity outcomes have imposed a considerable economic burden on employers. To facilitate optimal worksite intervention designs tailored to differing employee risk levels, the authors established cutoff points for an Individual Well-Being Score (IWBS) based on a global measure of well-being. Cross-sectional associations between IWBS and adverse health and productivity outcomes, including high health care cost, emergency room visits, short-term disability days, absenteeism, presenteeism, low job performance ratings, and low intentions to stay with the employer, were studied in a sample of 11,702 employees of a large employer. Receiver operating characteristic curves were evaluated to detect a single optimal cutoff value of IWBS for predicting 2 or more adverse outcomes. More granular segmentation was achieved by computing relative risks of each adverse outcome from logistic regressions accounting for sociodemographic characteristics. Results showed strong and significant nonlinear associations between IWBS and health and productivity outcomes. An IWBS of 75 was found to be the optimal single cutoff point for discriminating 2 or more adverse outcomes. Logistic regression models found that abrupt reductions of relative risk also clustered at IWBS cutoffs of 53, 66, and 88, in addition to 75, segmenting employees into high, high-medium, medium, low-medium, and low risk groups. To determine validity and generalizability, the cutoff values were applied in a smaller employee population (N=1853) and confirmed significant differences between risk groups across health and productivity outcomes. The reported segmentation of IWBS into discrete cohorts based on risk of adverse health and productivity outcomes should facilitate well-being comparisons and worksite interventions.
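    The ROC-based selection of a single score cutoff described above can be sketched with Youden's J statistic. The scores and outcomes below are synthetic; the study's 75-point cutoff is not reproduced, and the risk model is an invented logistic curve.

```python
# Choosing a single cutoff on a well-being-style score via ROC analysis.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
n = 2000
score = rng.uniform(30, 100, n)                    # synthetic IWBS-like score
p_adverse = 1 / (1 + np.exp(0.15 * (score - 70)))  # lower score -> higher risk
adverse = rng.random(n) < p_adverse                # "2+ adverse outcomes" flag

# ROC treats the *low* score as the positive signal, so flip the sign
fpr, tpr, thresholds = roc_curve(adverse, -score)
youden = tpr - fpr                                 # Youden's J at each threshold
cutoff = -thresholds[youden.argmax()]              # undo the sign flip
print("optimal cutoff:", round(cutoff, 1))
```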

  8. Development of Relations of Stream Stage to Channel Geometry and Discharge for Stream Segments Simulated with Hydrologic Simulation Program-Fortran (HSPF), Chesapeake Bay Watershed and Adjacent Parts of Virginia, Maryland, and Delaware

    USGS Publications Warehouse

    Moyer, Douglas; Bennett, Mark

    2007-01-01

    The U.S. Geological Survey (USGS), U.S. Environmental Protection Agency (USEPA), Chesapeake Bay Program (CBP), Interstate Commission for the Potomac River Basin (ICPRB), Maryland Department of the Environment (MDE), Virginia Department of Conservation and Recreation (VADCR), and University of Maryland (UMD) are collaborating to improve the resolution of the Chesapeake Bay Regional Watershed Model (CBRWM). This watershed model uses the Hydrologic Simulation Program-Fortran (HSPF) to simulate the fate and transport of nutrients and sediment throughout the Chesapeake Bay watershed and extended areas of Virginia, Maryland, and Delaware. Information from the CBRWM is used by the CBP and other watershed managers to assess the effectiveness of water-quality improvement efforts as well as guide future management activities. A critical step in the improvement of the CBRWM framework was the development of an HSPF function table (FTABLE) for each represented stream channel. The FTABLE is used to relate stage (water depth) in a particular stream channel to associated channel surface area, channel volume, and discharge (streamflow). The primary tool used to generate an FTABLE for each stream channel is the XSECT program, a computer program that requires nine input variables used to represent channel morphology. These input variables are reach length, upstream and downstream elevation, channel bottom width, channel bankfull width, channel bankfull stage, slope of the floodplain, and Manning's roughness coefficient for the channel and floodplain. For the purpose of this study, the nine input variables were grouped into three categories: channel geometry, Manning's roughness coefficient, and channel and floodplain slope. 
Values of channel geometry for every stream segment represented in CBRWM were obtained by first developing regional regression models that relate basin drainage area to observed values of bankfull width, bankfull depth, and bottom width at each of the 290 USGS streamflow-gaging stations included in the areal extent of the model. These regression models were developed on the basis of data from stations in four physiographic provinces (Appalachian Plateaus, Valley and Ridge, Piedmont, and Coastal Plain) and were used to predict channel geometry for all 738 stream segments in the modeled area from associated basin drainage area. Manning's roughness coefficient for the channel and floodplain was represented in the XSECT program in two forms. First, all available field-estimated values of roughness were compiled for gaging stations in each physiographic province. The median of field-estimated values of channel and floodplain roughness for each physiographic province was applied to all respective stream segments. The second representation of Manning's roughness coefficient was to allow roughness to vary with channel depth. Roughness was estimated at each gaging station for each 1-foot depth interval. Median values of roughness were calculated for each 1-foot depth interval for all stations in each physiographic province. Channel and floodplain slope were determined for every stream segment in CBRWM using the USGS National Elevation Dataset. Function tables were generated by the XSECT program using values of channel geometry, channel and floodplain roughness, and channel and floodplain slope. The FTABLEs for each of the 290 USGS streamflow-gaging stations were evaluated by comparing observed discharge to the XSECT-derived discharge. Function table stream discharge derived using depth-varying roughness was found to be more representative of and statistically indistinguishable from values of observed stream discharge. 
Additionally, results of regression analysis showed that XSECT-derived discharge accounted for approximately 90 percent of the variability associated with observed discharge in each of the four physiographic provinces. The results of this study indicate that the methodology developed to generate FTABLEs for every s

  9. Comparison of Triggering and Nontriggering Factors in ST-Segment Elevation Myocardial Infarction and Extent of Coronary Arterial Narrowing.

    PubMed

    Ben-Shoshan, Jeremy; Segman-Rosenstveig, Yafit; Arbel, Yaron; Chorin, Ehud; Barkagan, Michael; Rozenbaum, Zach; Granot, Yoav; Finkelstein, Ariel; Banai, Shmuel; Keren, Gad; Shacham, Yacov

    2016-04-15

    Various physical, emotional, and extrinsic triggers have been implicated in acute coronary syndrome. Whether a correlation can be drawn between identifiable ischemic triggers and the nature of coronary artery disease (CAD) remains unclear. In the present study, we evaluated the correlation between triggered versus nontriggered ischemic symptoms and the extent of CAD in patients with ST-segment elevation myocardial infarction (STEMI). We conducted a retrospective, single-center observational study including 1,345 consecutive patients with STEMI treated with primary percutaneous coronary intervention. Acute physical and emotional triggers were identified from patients' historical data. Independent predictors of multivessel CAD were determined using a logistic regression model. A potential trigger was identified in 37% of patients. Physical exertion was the most common trigger (65%), followed by psychological stress (16%) and acute illness (12%). Patients with nontriggered STEMI tended to be older and more likely to have co-morbidities. Patients with nontriggered STEMI showed a higher rate of multivessel CAD (73% vs 30%, p <0.001). In a multivariate regression model, nontriggered symptoms emerged as an independent predictor of multivessel CAD (odds ratio 8.33, 95% CI 5.74 to 12.5, p = 0.001). No specific trigger was found to independently predict the extent of CAD. In conclusion, symptom onset without a recognizable trigger is associated with multivessel CAD in STEMI. Further studies will be required to elucidate the putative mechanisms underlying ischemic triggering. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Comparison of contact conditions obtained by direct simulation with statistical analysis for normally distributed isotropic surfaces

    NASA Astrophysics Data System (ADS)

    Uchidate, M.

    2018-09-01

    In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for the investigation. A non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods were examined: Nayak's theory, 8-point analysis and watershed segmentation. With regard to the stochastic model, the Bhushan and Bush-Gibson-Thomas (BGT) models were applied. The values of Ar/Aa from the stochastic models tended to be smaller than those from the BEM. The discrepancy between Bhushan's model with the 8-point analysis and the BEM was slightly smaller than with Nayak's theory. The results with the watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and the BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with the BEM was obtained when Nayak's bandwidth parameter was large.

  11. The effect of obesity and gender on body segment parameters in older adults

    PubMed Central

    Chambers, April J.; Sukits, Alison L.; McCrory, Jean L.; Cham, Rakié

    2010-01-01

    Background Anthropometry is a necessary aspect of aging-related research, especially in biomechanics and injury prevention. Little information is available on inertial parameters in the geriatric population that account for gender and obesity effects. The goal of this study was to report body segment parameters in adults aged 65 years and older, and to investigate the impact of aging, gender and obesity. Methods Eighty-three healthy old (65–75 yrs) and elderly (>75 yrs) adults were recruited to represent a range of body types. Participants underwent a whole body dual energy x-ray absorptiometry scan. Analysis was limited to segment mass, length, longitudinal center of mass position, and frontal plane radius of gyration. A mixed-linear regression model was performed using gender, obesity, age group and two-way and three-way interactions (α=0.05). Findings Mass distribution varied with obesity and gender. Males had greater trunk and upper extremity mass while females had a higher lower extremity mass. In general, obese elderly adults had significantly greater trunk segment mass with less thigh and shank segment mass than all others. Gender and obesity effects were found in center of mass and radius of gyration. Non-obese individuals possessed a more distal thigh and shank center of mass than obese. Interestingly, females had more distal trunk center of mass than males. Interpretation Age, obesity and gender have a significant impact on segment mass, center of mass and radius of gyration in old and elderly adults. This study underlines the need to consider age, obesity and gender when utilizing anthropometric data sets. PMID:20005028

  12. Assessing the impacts of dams and levees on the hydrologic record of the Middle and Lower Mississippi River, USA

    USGS Publications Warehouse

    Remo, Jonathan W.F.; Ickes, Brian; Ryherd, Julia K.; Guida, Ross J.; Therrell, Matthew D.

    2018-01-01

    The impacts of dams and levees on the long-term (>130 years) discharge record were assessed along a ~1200 km segment of the Mississippi River between St. Louis, Missouri, and Vicksburg, Mississippi. To aid in our evaluation of dam impacts, we used data from the U.S. National Inventory of Dams to calculate the rate of reservoir expansion at five long-term hydrologic monitoring stations along the study segment. We divided the hydrologic record at each station into three periods: (1) a pre-rapid reservoir expansion period; (2) a rapid reservoir expansion period; and (3) a post-rapid reservoir expansion period. We then used three approaches to assess changes in the hydrologic record at each station. Indicators of hydrologic alteration (IHA) and flow duration hydrographs were used to quantify changes in flow conditions between the pre- and post-rapid reservoir expansion periods. Auto-regressive interrupted time series analysis (ARITS) was used to assess trends in maximum annual discharge, mean annual discharge, minimum annual discharge, and standard deviation of daily discharges within a given water year. A one-dimensional HEC-RAS hydraulic model was used to assess the impact of levees on flood flows. Our results revealed that minimum annual discharges and low-flow IHA parameters showed the most significant changes. Additionally, increasing trends in minimum annual discharge during the rapid reservoir expansion period were found at three out of the five hydrologic monitoring stations. These IHA and ARITS results support previous findings consistent with the observation that reservoirs generally have the greatest impacts on low-flow conditions. River segment scale hydraulic modeling revealed levees can modestly increase peak flood discharges, while basin-scale hydrologic modeling assessments by the U.S. Army Corps of Engineers showed that tributary reservoirs reduced peak discharges by a similar magnitude (2 to 30%). This finding suggests that the effects of dams and levees on peak flood discharges are in part offsetting one another along the modeled river segments and likely other substantially leveed segments of the Mississippi River.

  13. Assessing the impacts of dams and levees on the hydrologic record of the Middle and Lower Mississippi River, USA

    NASA Astrophysics Data System (ADS)

    Remo, Jonathan W. F.; Ickes, Brian S.; Ryherd, Julia K.; Guida, Ross J.; Therrell, Matthew D.

    2018-07-01

    The impacts of dams and levees on the long-term (>130 years) discharge record were assessed along a 1200 km segment of the Mississippi River between St. Louis, Missouri, and Vicksburg, Mississippi. To aid in our evaluation of dam impacts, we used data from the U.S. National Inventory of Dams to calculate the rate of reservoir expansion at five long-term hydrologic monitoring stations along the study segment. We divided the hydrologic record at each station into three periods: (1) a pre-rapid reservoir expansion period; (2) a rapid reservoir expansion period; and (3) a post-rapid reservoir expansion period. We then used three approaches to assess changes in the hydrologic record at each station. Indicators of hydrologic alteration (IHA) and flow duration hydrographs were used to quantify changes in flow conditions between the pre- and post-rapid reservoir expansion periods. Auto-regressive interrupted time series analysis (ARITS) was used to assess trends in maximum annual discharge, mean annual discharge, minimum annual discharge, and standard deviation of daily discharges within a given water year. A one-dimensional HEC-RAS hydraulic model was used to assess the impact of levees on flood flows. Our results revealed that minimum annual discharges and low-flow IHA parameters showed the most significant changes. Additionally, increasing trends in minimum annual discharge during the rapid reservoir expansion period were found at three out of the five hydrologic monitoring stations. These IHA and ARITS results support previous findings consistent with the observation that reservoirs generally have the greatest impacts on low-flow conditions. River segment scale hydraulic modeling revealed levees can modestly increase peak flood discharges, while basin-scale hydrologic modeling assessments by the U.S. Army Corps of Engineers showed that tributary reservoirs reduced peak discharges by a similar magnitude (2 to 30%). This finding suggests that the effects of dams and levees on peak flood discharges are in part offsetting one another along the modeled river segments and likely other substantially leveed segments of the Mississippi River.

  14. Echocardiographic Image Quality Deteriorates with Age in Children and Young Adults with Duchenne Muscular Dystrophy.

    PubMed

    Power, Alyssa; Poonja, Sabrina; Disler, Dal; Myers, Kimberley; Patton, David J; Mah, Jean K; Fine, Nowell M; Greenway, Steven C

    2017-01-01

    Advances in medical care for patients with Duchenne muscular dystrophy (DMD) have resulted in improved survival and an increased prevalence of cardiomyopathy. Serial echocardiographic surveillance is recommended to detect early cardiac dysfunction and initiate medical therapy. Clinical anecdote suggests that echocardiographic quality diminishes over time, impeding accurate assessment of left ventricular systolic function. Furthermore, evidence-based guidelines for the use of cardiac imaging in DMD, including cardiac magnetic resonance imaging (CMR), are limited. The objective of our single-center, retrospective study was to quantify the deterioration in echocardiographic image quality with increasing patient age and to identify an age at which CMR should be considered. We retrospectively reviewed and graded the image quality of serial echocardiograms obtained in young patients with DMD. The quality of 16 left ventricular segments in two echocardiographic views was visually graded using a binary scoring system. An endocardial border delineation percentage (EBDP) score was calculated by dividing the number of segments with adequate endocardial delineation in each imaging window by the total number of segments present in that window and multiplying by 100. Linear regression analysis was performed to model the relationship between the EBDP scores and patient age. Fifty-five echocardiograms from 13 patients (mean age 11.6 years, range 3.6-19.9) were systematically reviewed. By 13 years of age, 50% of the echocardiograms were classified as suboptimal with ≥30% of segments inadequately visualized, and by 15 years of age, 78% of studies were suboptimal. Linear regression analysis revealed a negative association between patient age and EBDP score (slope = -2.49, 95% confidence interval -4.73 to -0.25; p = 0.032), with the score decreasing by 2.5% for each 1-year increase in age. Echocardiographic image quality declines with increasing age in DMD. Alternate imaging modalities may play a role in cases of poor echocardiographic image quality.
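    The EBDP score definition above (adequately delineated segments divided by total segments, times 100) and its regression against age can be sketched in a few lines. The grading data below are invented for illustration and do not come from the study.

```python
# Minimal sketch of the EBDP score and its age regression, with made-up data.
import numpy as np

def ebdp(adequate_segments, total_segments):
    """Percent of segments with adequate endocardial delineation."""
    return 100.0 * adequate_segments / total_segments

# Hypothetical (age, adequate-of-16-segments) pairs for one imaging view
ages = np.array([4.0, 7.0, 10.0, 13.0, 16.0, 19.0])
adequate = np.array([15, 14, 13, 11, 9, 8])
scores = ebdp(adequate, 16)

# Least-squares line: EBDP change per year of age (negative = deterioration)
slope, intercept = np.polyfit(ages, scores, 1)
print(round(slope, 2))
```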

  15. Estimation of total Length of Femur From Its Fragments in South Indian Population.

    PubMed

    Solan, Shweta; Kulkarni, Roopa

    2013-10-01

    Establishing the identity of a deceased person assumes great medicolegal importance. Stature is one of the criteria used to establish identity, and the lengths of long bones are needed to estimate stature. The aim was to determine the lengths of femoral fragments and to compare them with the total length of the femur in a South Indian population, which will help to estimate the stature of the individual using standard regression formulae. A total of 150 (72 left and 78 right) fully ossified, dry, processed adult femora were studied. Each femur was divided into five segments by taking predetermined points. The lengths of the five segments and the maximum length of the femur were measured to the nearest millimeter. The values were obtained in cm [mean ± S.D.], and the mean total length of the femora on the left and right sides was measured. The proportion of each segment to the total length was also calculated, which will help in stature estimation using standard regression formulae. The mean total length of the femora was 43.54 ± 2.7 cm on the left side and 43.42 ± 2.4 cm on the right side. The measurements of segments 1, 2, 3, 4 and 5 were 8.06 ± 0.71, 8.25 ± 1.24, 10.35 ± 2.21, 13.94 ± 1.93 and 2.77 ± 0.53 cm on the left side and 8.09 ± 0.70, 8.30 ± 1.34, 10.44 ± 1.91, 13.50 ± 1.54 and 3.09 ± 0.41 cm on the right side. The p-value of all the segments was significant (<0.001). When comparison was made between segments of the right and left femora, the p-value of segment 5 was found to be <0.001. Comparison between different segments of the femur showed significance in all the segments.

  16. Reconstructing land use history from Landsat time-series. Case study of a swidden agriculture system in Brazil

    NASA Astrophysics Data System (ADS)

    Dutrieux, Loïc P.; Jakovac, Catarina C.; Latifah, Siti H.; Kooistra, Lammert

    2016-05-01

    We developed a method to reconstruct land use history from Landsat image time-series. The method uses a breakpoint detection framework derived from econometrics and applicable to time-series regression models. The Breaks For Additive Season and Trend (BFAST) framework is used to define the time-series regression models, which may contain trend and phenology terms and hence appropriately model intra- and inter-annual vegetation dynamics. All available Landsat data are used for a selected study area, and the time-series are partitioned into segments delimited by breakpoints. Segments can be associated with land use regimes, while the breakpoints correspond to shifts in land use regimes. To further characterize these shifts, we classified the unlabelled breakpoints returned by the algorithm into their corresponding processes, using a Random Forest classifier trained on a set of visually interpreted time-series profiles to infer the processes and assign labels to the breakpoints. The whole approach was applied to quantifying the number of cultivation cycles in a swidden agriculture system in Brazil (state of Amazonas). The number and frequency of cultivation cycles are of particular ecological relevance in these systems, since they largely affect the capacity of the forest to regenerate after land abandonment. We applied the method to a Landsat time-series of the Normalized Difference Moisture Index (NDMI) spanning the 1984-2015 period and derived from it the number of cultivation cycles during that period at the individual-field scale. The agricultural field boundaries used to apply the method were derived using a multi-temporal segmentation approach. We validated the number of cultivation cycles predicted by the method against in-situ information collected from farmer interviews, resulting in a Normalized Root Mean Squared Error (NRMSE) of 0.25. Overall, the method performed well, producing maps with coherent spatial patterns.
We identified various sources of error in the approach, including low data availability in the 1990s and sub-object mixtures of land uses. We conclude that the method holds great promise for land use history mapping in the tropics and beyond.
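    The segmentation step above can be sketched as a breakpoint search: fit a regression to each candidate pair of segments and keep the split that minimizes the total residual sum of squares. This is a minimal single-breakpoint, trend-only sketch, not the actual BFAST implementation (available as the `bfast` R package), which also models seasonality and multiple breaks; the data and the minimum segment length below are illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, rss)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, rss

def best_breakpoint(xs, ys, min_seg=3):
    """Exhaustively search for the breakpoint minimizing total RSS."""
    best = None
    for k in range(min_seg, len(xs) - min_seg + 1):
        _, _, rss_l = fit_line(xs[:k], ys[:k])
        _, _, rss_r = fit_line(xs[k:], ys[k:])
        if best is None or rss_l + rss_r < best[1]:
            best = (k, rss_l + rss_r)
    return best  # (index of first point in the second segment, total RSS)

# Two regimes: a flat segment, then an abrupt shift to an increasing trend.
xs = list(range(20))
ys = [1.0] * 10 + [5.0 + 0.5 * i for i in range(10)]
k, rss = best_breakpoint(xs, ys)   # breakpoint found at index 10
```

In BFAST the same idea is applied recursively to trend-plus-harmonic models, and the number of breaks is chosen by an information criterion rather than fixed at one.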

  17. Reconstructing Land Use History from Landsat Time-Series. Case study of Swidden Agriculture Intensification in Brazil

    NASA Astrophysics Data System (ADS)

    Dutrieux, L.; Jakovac, C. C.; Siti, L. H.; Kooistra, L.

    2015-12-01

    We developed a method to reconstruct land use history from Landsat image time-series. The method uses a breakpoint detection framework derived from econometrics and applicable to time-series regression models. The BFAST framework is used to define the time-series regression models, which may contain trend and phenology terms and hence appropriately model intra- and inter-annual vegetation dynamics. All available Landsat data are used, and the time-series are partitioned into segments delimited by breakpoints. Segments can be associated with land use regimes, while the breakpoints correspond to shifts in regimes. To further characterize these shifts, we classified the unlabelled breakpoints returned by the algorithm into their corresponding processes, using a Random Forest classifier trained on a set of visually interpreted time-series profiles. The whole approach was applied to quantifying the number of cultivation cycles in a swidden agriculture system in Brazil. The number and frequency of cultivation cycles are of particular ecological relevance in these systems, since they largely affect the capacity of the forest to regenerate after abandonment. We applied the method to a Landsat time-series of the Normalized Difference Moisture Index (NDMI) spanning the 1984-2015 period and derived from it the number of cultivation cycles during that period at the individual-field scale. The agricultural field boundaries used to apply the method were derived using a multi-temporal segmentation. We validated the number of cultivation cycles predicted against in-situ information collected from farmer interviews, resulting in a Normalized RMSE of 0.25. Overall, the method performed well, producing maps with coherent patterns. We identified various sources of error in the approach, including low data availability in the 1990s and sub-object mixtures of land uses.
We conclude that the method holds great promise for land use history mapping in the tropics and beyond. Spatial and temporal patterns were further analysed from an ecological perspective in a follow-up study. The results show that changes in land use patterns, such as land use intensification and reduced agricultural expansion, reflect the socio-economic transformations that occurred in the region.

  18. Simple agrometeorological models for estimating Guineagrass yield in Southeast Brazil.

    PubMed

    Pezzopane, José Ricardo Macedo; da Cruz, Pedro Gomes; Santos, Patricia Menezes; Bosi, Cristiam; de Araujo, Leandro Coelho

    2014-09-01

    The objective of this work was to develop and evaluate agrometeorological models to simulate the production of Guineagrass. For this purpose, we used forage yield from 54 growing periods between December 2004-January 2007 and April 2010-March 2012 in irrigated and non-irrigated pastures in São Carlos, São Paulo state, Brazil (latitude 21°57'42″ S, longitude 47°50'28″ W, altitude 860 m). We first performed linear regressions between the agrometeorological variables and the average dry matter accumulation rate under irrigated conditions. We then determined the effect of soil water availability on relative forage yield, considering irrigated and non-irrigated pastures, by means of segmented linear regression between water balance variables and relative production variables (dry matter accumulation rates with and without irrigation). The models generated were evaluated with independent data from 21 growing periods without irrigation at the same location: eight growing periods in 2000 and 13 growing periods between December 2004-January 2007 and April 2010-March 2012. The results show the satisfactory predictive capacity of the agrometeorological models under irrigated conditions, based on univariate regression (mean temperature, minimum temperature, potential evapotranspiration or degree-days) or multivariate regression. The response of irrigation on production was well correlated with the climatological water balance variables (ratio between actual and potential evapotranspiration, or between actual and maximum soil water storage). The models that performed best for estimating Guineagrass yield without irrigation were based on minimum temperature corrected by relative soil water storage, determined by the ratio between the actual soil water storage and the soil water holding capacity.
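    The segmented relationship described above, relative yield rising with a water-balance ratio up to a breakpoint and plateauing beyond it, is the classic linear-plateau model. A minimal sketch follows; the breakpoint value of 0.6 is purely illustrative, not an estimate from the study.

```python
def linear_plateau(ratio, breakpoint=0.6):
    """Relative yield: rises linearly with the water-balance ratio up to
    the breakpoint, then plateaus at 1.0 (no further water limitation)."""
    return min(1.0, ratio / breakpoint)

# Relative yield at a few hypothetical water-balance ratios.
relative_yield = [round(linear_plateau(r), 2) for r in (0.0, 0.3, 0.6, 0.9)]
# -> [0.0, 0.5, 1.0, 1.0]
```

In practice both the slope and the breakpoint are estimated jointly from the paired irrigated/non-irrigated yield data, e.g. by least squares over candidate breakpoints.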

  19. PREDICTION OF MALIGNANT BREAST LESIONS FROM MRI FEATURES: A COMPARISON OF ARTIFICIAL NEURAL NETWORK AND LOGISTIC REGRESSION TECHNIQUES

    PubMed Central

    McLaren, Christine E.; Chen, Wen-Pin; Nie, Ke; Su, Min-Ying

    2009-01-01

    Rationale and Objectives Dynamic contrast enhanced MRI (DCE-MRI) is a clinical imaging modality for detection and diagnosis of breast lesions. Analytical methods were compared for diagnostic feature selection and performance of lesion classification to differentiate between malignant and benign lesions in patients. Materials and Methods The study included 43 malignant and 28 benign histologically-proven lesions. Eight morphological parameters, ten gray level co-occurrence matrices (GLCM) texture features, and fourteen Laws’ texture features were obtained using automated lesion segmentation and quantitative feature extraction. Artificial neural network (ANN) and logistic regression analysis were compared for selection of the best predictors of malignant lesions among the normalized features. Results Using ANN, the final four selected features were compactness, energy, homogeneity, and Law_LS, with area under the receiver operating characteristic curve (AUC) = 0.82, and accuracy = 0.76. The diagnostic performance of these four features, computed on the basis of logistic regression, yielded AUC = 0.80 (95% CI, 0.688 to 0.905), similar to that of ANN. The analysis also shows that the odds of a malignant lesion decreased by 48% (95% CI, 25% to 92%) for every increase of 1 SD in the Law_LS feature, adjusted for differences in compactness, energy, and homogeneity. Using logistic regression with z-score transformation, a model comprising compactness, NRL entropy, and gray level sum average was selected, and it had the highest overall accuracy of 0.75 among all models, with AUC = 0.77 (95% CI, 0.660 to 0.880). When logistic modeling of transformations using the Box-Cox method was performed, the most parsimonious model with predictors, compactness and Law_LS, had an AUC of 0.79 (95% CI, 0.672 to 0.898). Conclusion The diagnostic performance of models selected by ANN and logistic regression was similar. 
The analytic methods were found to be roughly equivalent in terms of predictive ability when a small number of variables were chosen. The robust ANN methodology utilizes a sophisticated non-linear model, while logistic regression analysis provides insightful information to enhance interpretation of the model features. PMID:19409817
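    The reported effect size, odds falling by 48% per 1 SD increase in Law_LS, is simply the exponentiated logistic-regression coefficient of a z-scored feature. A minimal sketch of that conversion, with a coefficient chosen here to reproduce the reported figure rather than taken from the study:

```python
import math

# Coefficient of a z-scored feature in a logistic model (illustrative value,
# chosen so the odds fall by 48% per 1 SD, as in the abstract).
beta = math.log(1 - 0.48)

odds_ratio = math.exp(beta)               # multiplicative change in odds per 1 SD
percent_change = (odds_ratio - 1.0) * 100.0   # ≈ -48%
```

The same transformation, applied to the confidence limits of the coefficient, yields the reported CI on the percentage change.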

  20. Transoral Decompression and Anterior Stabilization of Atlantoaxial Joint in Patients with Basilar Impression and Chiari Malformation Type I: A Technical Report of 2 Clinical Cases.

    PubMed

    Shkarubo, Alexey N; Kuleshov, Alexander A; Chernov, Ilia V; Vetrile, Marchel S

    2017-06-01

    Presentation of clinical cases involving successful anterior stabilization of the C1-C2 segment in patients with invaginated C2 odontoid process and Chiari malformation type I. Clinical case description. Two patients with C2 odontoid processes invagination and Chiari malformation type I were surgically treated using the transoral approach. In both cases, anterior decompression of the upper cervical region was performed, followed by anterior stabilization of the C1-C2 segment. In 1 of the cases, this procedure was performed after posterior decompression, which led to transient regression of neurologic symptoms. In both cases, custom-made cervical plates were used for anterior stabilization of the C1-C2 segment. During the follow-up period of more than 2 years, a persistent regression of both the neurologic symptoms and Chiari malformation was observed. Anterior decompression followed by anterior stabilization of the C1-C2 segment is a novel and promising approach to treating Chiari malformation type I in association with C2 odontoid process invagination. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies of a factor's influence on an outcome variable, the factor has no effect, or a positive effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. Whether a factor (x) has a threshold effect on the outcome variable (y) can be assessed by fitting a smoothed curve and checking for a piecewise-linear relationship, and then analysing the threshold effect using a segmented regression model, a likelihood-ratio test (LRT), and bootstrap resampling. Empower Stats software, developed by X&Y Solutions Inc. (USA), includes a threshold effect analysis module. The user may either specify a threshold at which to segment the data, or let the software determine the optimal threshold automatically and compute a confidence interval for it.
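    The bootstrap step mentioned above can be sketched as: re-estimate the threshold on data resampled with replacement, and read the confidence interval off the percentiles of the resampled estimates. `estimate_threshold` below is a deliberately crude stand-in (the x value at the largest jump in y), not the segmented-regression estimator that Empower Stats fits; the data are toy values.

```python
import random

def estimate_threshold(pairs):
    """Stand-in threshold estimator: the x value at the largest jump in y."""
    pairs = sorted(pairs)
    jumps = [(abs(y2 - y1), x2)
             for (x1, y1), (x2, y2) in zip(pairs, pairs[1:])]
    return max(jumps)[1]

def bootstrap_ci(pairs, n_boot=200, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the estimated threshold."""
    rng = random.Random(seed)
    estimates = sorted(
        estimate_threshold([rng.choice(pairs) for _ in pairs])
        for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Toy data with an abrupt shift in y at x = 5.
data = [(x, 0.0) for x in range(5)] + [(x, 10.0) for x in range(5, 10)]
threshold = estimate_threshold(data)   # -> 5
lo, hi = bootstrap_ci(data)
```

With a real segmented-regression estimator, the LRT comparing the one-line model against the two-segment model supplies the accompanying significance test.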

  2. The chick embryo: a leading model in somitogenesis studies.

    PubMed

    Pourquié, Olivier

    2004-09-01

    The vertebrate body is built on a metameric organization that consists of a repetition of functionally equivalent units, each comprising a vertebra, its associated muscles, peripheral nerves and blood vessels. This periodic pattern is established during embryogenesis by the somitogenesis process. Somites are generated in a rhythmic fashion from the presomitic mesoderm and subsequently differentiate to give rise to the vertebrae and skeletal muscles of the body. Somitogenesis has been very actively studied in the chick embryo since the 19th century, and many of the landmark experiments that led to our current understanding of the vertebrate segmentation process have been performed in this organism. Somite formation involves an oscillator, the segmentation clock, whose periodic signal is converted into the periodic array of somite boundaries by a spacing mechanism relying on a traveling threshold of FGF signaling that regresses in concert with body axis extension.

  3. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
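    STAPLE itself iterates an expectation-maximization estimate of each input segmentation's sensitivity and specificity; as a minimal stand-in, per-voxel majority voting, the simpler fusion baseline that STAPLE refines, can be sketched as follows (binary toy masks, not cardiac MRI data):

```python
def majority_vote(segmentations):
    """Fuse binary masks (equal-length lists of 0/1) by per-voxel majority."""
    n = len(segmentations)
    return [1 if sum(votes) * 2 > n else 0 for votes in zip(*segmentations)]

# Three hypothetical automated segmentations of the same five voxels.
masks = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 0, 0],
]
fused = majority_vote(masks)   # -> [1, 1, 0, 0, 1]
```

STAPLE generalizes this by weighting each rater's vote by its estimated reliability, which is why it can outperform any individual input segmentation.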

  4. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  5. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    PubMed

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

    The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
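    The two windowing baselines named above differ only in their step size. A self-contained sketch of fixed-size windowing over a sample stream, with `overlap=0` giving FNSW and a fractional overlap giving FOSW (window sizes and data are illustrative):

```python
def sliding_windows(samples, size, overlap=0.0):
    """Split `samples` into fixed-size windows; `overlap` is a fraction in [0, 1)."""
    step = max(1, int(size * (1 - overlap)))
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

stream = list(range(10))
fnsw = sliding_windows(stream, size=4)               # non-overlapping windows
fosw = sliding_windows(stream, size=4, overlap=0.5)  # 50% overlapping windows
```

The event-triggered approach replaces this fixed grid with boundaries detected from the signal itself (e.g. an acceleration peak marking impact), so each window corresponds to one fall stage.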

  6. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    Detailed spatial distribution of population density is essential for urban studies such as planning, environmental pollution assessment and emergency management, as well as for estimating pressure on the environment and human exposure and health risks. However, most research has relied on census data, because detailed, dynamic population distributions are difficult to acquire, especially at the micro scale. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional-zone level. First, urban functional zones within a city were mapped from high-resolution remote sensing images and points of interest (POIs). The workflow for functional-zone extraction comprises five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identifying functional segments from POIs; (4) identifying functional blocks from the functional segments and weight coefficients; and (5) assessing accuracy with validation points. The result is shown in Fig. 1. Second, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially non-stationary relationship between night-time light digital number (DN) and the population density of sampling points, and used both methods to predict the population distribution over the study area. The R² of the GWR model was on the order of 0.7 and typically showed significant variation over the region compared with the traditional OLS model (Fig. 2). Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with the light values (Fig. 3). Results showed that: (1) population density is not linearly correlated with light brightness in a global model; (2) VIIRS night-time light data can estimate population density when integrated with functional zones at the city level;
(3) GWR is a robust model for mapping population distribution: the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming that GWR offers better prediction accuracy. This method thus provides detailed population-density information for micro-scale citizen studies.
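    The OLS-versus-GWR contrast above comes down to weighting: GWR refits the regression at each target location, down-weighting distant observations with a distance-decay kernel. A one-predictor sketch with a Gaussian kernel follows; the locations, bandwidth, and slopes are toy values, not the study's night-light data.

```python
import math

def gwr_coef(points, target, bandwidth=0.8):
    """Local weighted least squares of y on x at `target`.
    points: list of (location, x, y). Returns (intercept, slope)."""
    w = [math.exp(-((loc - target) ** 2) / (2 * bandwidth ** 2))
         for loc, _, _ in points]
    sw = sum(w)
    mx = sum(wi * x for wi, (_, x, _) in zip(w, points)) / sw
    my = sum(wi * y for wi, (_, _, y) in zip(w, points)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, (_, x, _) in zip(w, points))
    sxy = sum(wi * (x - mx) * (y - my) for wi, (_, x, y) in zip(w, points))
    b = sxy / sxx
    return my - b * mx, b

# Toy non-stationary data: slope 2 in the "west" (locations 0-4),
# slope 4 in the "east" (locations 5-9). A global OLS fit would blur these.
points = [(loc, loc, 2 * loc) for loc in range(5)]
points += [(loc, loc - 5, 4 * (loc - 5)) for loc in range(5, 10)]
_, b_west = gwr_coef(points, target=2.0)   # ≈ 2
_, b_east = gwr_coef(points, target=7.0)   # ≈ 4
```

Real GWR additionally selects the bandwidth by cross-validation or AIC and reports the local R² surface that the abstract compares against OLS.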

  7. A parametric ribcage geometry model accounting for variations among the adult population.

    PubMed

    Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2016-09-06

    The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, the sternum, and the thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 landmarks for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying the effects of human characteristics on thoracic injury risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
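    The PCA step of such a statistical shape model can be sketched as: stack each subject's aligned landmark coordinates into a row, center on the mean shape, and take the singular value decomposition to obtain modes of variation. The landmark matrix below is a toy stand-in (four subjects, four coordinates) for the study's 1016-landmark, Procrustes-aligned data.

```python
import numpy as np

# Rows: subjects; columns: flattened landmark coordinates (toy values).
landmarks = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [1.0, 2.0, 1.0, 2.0],
    [2.0, 4.0, 2.0, 4.0],
    [3.0, 6.0, 3.0, 6.0],
])
mean_shape = landmarks.mean(axis=0)
centered = landmarks - mean_shape

# Modes of shape variation via SVD of the centered data matrix.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt                              # each row: one mode of variation
explained = s ** 2 / np.sum(s ** 2)     # fraction of variance per mode
```

The regression part of the pipeline then predicts the per-subject scores along these modes from age, sex, height, and BMI, which is what lets the model synthesize a ribcage mesh for unseen covariate combinations.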

  8. Multiple linear regression approach for the analysis of the relationships between joints mobility and regional pressure-based parameters in the normal-arched foot.

    PubMed

    Caravaggi, Paolo; Leardini, Alberto; Giacomozzi, Claudia

    2016-10-03

    Plantar load can be considered a measure of the foot's ability to transmit forces at the foot/ground, or foot/footwear, interface during ambulatory activities via the lower limb kinematic chain. While morphological and functional measures have been shown to be correlated with plantar load, no exhaustive data are currently available on the possible relationships between the range of motion of foot joints and plantar load regional parameters. Joint kinematics from a validated multi-segmental foot model were recorded together with plantar pressure parameters in 21 normal-arched healthy subjects during three barefoot walking trials. Plantar pressure maps were divided into six anatomically based regions of interest associated with corresponding foot segments. A stepwise multiple regression analysis was performed to determine the relationships between pressure-based parameters, joint range of motion, and normalized walking speed (speed/subject height). Sagittal- and frontal-plane joint motions were those most correlated with plantar load. Foot joint range of motion and normalized walking speed explained between 6% and 43% of the model variance (adjusted R²) for pressure-based parameters. In general, joints presenting lower mobility during stance were associated with lower vertical force at the forefoot and with larger mean and peak pressure at the hindfoot and forefoot. Normalized walking speed was always positively correlated with mean and peak pressure at the hindfoot and forefoot. While a large variance in plantar pressure data is still not accounted for by the present models, this study provides statistical corroboration of the close relationship between joint mobility and plantar pressure during stance in the normal healthy foot. Copyright © 2016 Elsevier Ltd. All rights reserved.
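    Stepwise multiple regression of the kind used above can be sketched as greedy forward selection: repeatedly add the predictor that most improves the fit until no candidate helps. The sketch below uses plain R² with a fixed improvement tolerance (the study reports adjusted R², and stepwise procedures typically use F-to-enter/F-to-remove tests); the data are synthetic.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def forward_select(X, y, tol=1e-3):
    """Greedy forward selection: add predictors while R^2 improves by > tol."""
    chosen, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        score, j = max((r_squared(X[:, chosen + [j]], y), j) for j in remaining)
        if score - best < tol:
            break
        chosen.append(j)
        remaining.remove(j)
        best = score
    return chosen

# Synthetic data: the response depends on predictors 0 and 2; predictor 1 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] - X[:, 2]
chosen = forward_select(X, y)   # selects predictors 0 and 2
```

Full stepwise regression also re-tests already-entered predictors for removal at each step, guarding against predictors that become redundant as others enter.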

  9. Influence of riparian and watershed alterations on sandbars in a Great Plains river

    USGS Publications Warehouse

    Fischer, Jeffrey M.; Paukert, Craig P.; Daniels, M.L.

    2014-01-01

    Anthropogenic alterations have caused sandbar habitats in rivers and the biota dependent on them to decline. Restoring large river sandbars may be needed as these habitats are important components of river ecosystems and provide essential habitat to terrestrial and aquatic organisms. We quantified factors within the riparian zone of the Kansas River, USA, and within its tributaries that influenced sandbar size and density using aerial photographs and land use/land cover (LULC) data. We developed, a priori, 16 linear regression models focused on LULC at the local, adjacent upstream river bend, and the segment (18–44 km upstream) scales and used an information theoretic approach to determine what alterations best predicted the size and density of sandbars. Variation in sandbar density was best explained by the LULC within contributing tributaries at the segment scale, which indicated reduced sandbar density with increased forest cover within tributary watersheds. Similarly, LULC within contributing tributary watersheds at the segment scale best explained variation in sandbar size. These models indicated that sandbar size increased with agriculture and forest and decreased with urban cover within tributary watersheds. Our findings suggest that sediment supply and delivery from upstream tributary watersheds may be influential on sandbars within the Kansas River and that preserving natural grassland and reducing woody encroachment within tributary watersheds in Great Plains rivers may help improve sediment delivery to help restore natural river function.
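    The "information theoretic approach" above means ranking the 16 candidate regression models by an information criterion such as AIC. For least-squares fits, AIC can be computed directly from the residual sum of squares as AIC = n·ln(RSS/n) + 2k, with k the number of fitted parameters. The model names, RSS values, and sample size below are hypothetical, for illustration only.

```python
import math

def aic(rss, n, k):
    """AIC for a least-squares regression with n observations, k parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical candidates: (name, RSS, parameter count) for n = 30 river bends.
models = [("local", 12.0, 2), ("adjacent", 10.5, 3), ("segment", 7.9, 4)]
ranked = sorted(models, key=lambda m: aic(m[1], n=30, k=m[2]))
best = ranked[0][0]
```

Note how the 2k term penalizes the larger models: the segment-scale model wins here only because its RSS reduction outweighs its extra parameters, which is exactly the trade-off the information-theoretic ranking formalizes (often reported as AICc and Akaike weights for small samples).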

  10. Foot and hip contributions to high frontal plane knee projection angle in athletes: a classification and regression tree approach.

    PubMed

    Bittencourt, Natalia F N; Ocarino, Juliana M; Mendonça, Luciana D M; Hewett, Timothy E; Fonseca, Sergio T

    2012-12-01

    Cross-sectional. To investigate predictors of increased frontal plane knee projection angle (FPKPA) in athletes. The underlying mechanisms that lead to increased FPKPA are likely multifactorial and depend on how the musculoskeletal system adapts to the possible interactions between its distal and proximal segments. Bivariate and linear analyses traditionally employed to analyze the occurrence of increased FPKPA are not sufficiently robust to capture complex relationships among predictors. The investigation of nonlinear interactions among biomechanical factors is necessary to further our understanding of the interdependence of lower-limb segments and resultant dynamic knee alignment. The FPKPA was assessed in 101 athletes during a single-leg squat and in 72 athletes at the moment of landing from a jump. The investigated predictors were sex, hip abductor isometric torque, passive range of motion (ROM) of hip internal rotation (IR), and shank-forefoot alignment. Classification and regression trees were used to investigate nonlinear interactions among predictors and their influence on the occurrence of increased FPKPA. During single-leg squatting, the occurrence of high FPKPA was predicted by the interaction between hip abductor isometric torque and passive hip IR ROM. At the moment of landing, the shank-forefoot alignment, abductor isometric torque, and passive hip IR ROM were predictors of high FPKPA. In addition, the classification and regression trees established cutoff points that could be used in clinical practice to identify athletes who are at potential risk for excessive FPKPA. The models captured nonlinear interactions between hip abductor isometric torque, passive hip IR ROM, and shank-forefoot alignment.
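    The clinically useful output of CART here is the cutoff values themselves. A minimal sketch of how CART chooses one split: scan candidate thresholds on a predictor and keep the one minimizing weighted Gini impurity. The torque values and labels below are toy data, not the study's measurements.

```python
def gini(labels):
    """Gini impurity of a set of binary labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Threshold on x that minimizes the weighted Gini impurity of the split."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            best = (score, t)
    return best[1]

# Toy data: high knee projection angle (label 1) mostly at low abductor torque.
torque = [0.5, 0.6, 0.7, 0.8, 1.0, 1.1, 1.2, 1.3]
high_fpkpa = [1, 1, 1, 1, 0, 0, 0, 0]
cutoff = best_split(torque, high_fpkpa)   # -> 1.0
```

A full CART recursively applies this split search to each child node and across all predictors, which is how it captures the nonlinear torque-by-ROM interactions the abstract describes.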

  11. Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling

    NASA Astrophysics Data System (ADS)

    Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong

    2016-03-01

    In this paper, we proposed a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also proposed a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to preserve image textures more effectively than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
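    The PWLS objective being minimized is of the form (y − Ax)ᵀW(y − Ax) + β·xᵀRx, where W holds the measurement weights and R encodes the MRF penalty. For a small linear system this has a closed-form minimizer, sketched below with a generic first-order Laplacian smoothness penalty rather than the paper's learned, texture-specific gMRF coefficients; A, W, and the data are toy stand-ins.

```python
import numpy as np

def pwls(A, W, y, R, beta):
    """Closed-form minimizer of (y - Ax)^T W (y - Ax) + beta * x^T R x."""
    return np.linalg.solve(A.T @ W @ A + beta * R, A.T @ W @ y)

n = 5
A = np.eye(n)    # toy system matrix (identity: pure denoising)
W = np.eye(n)    # toy statistical weights (uniform here)
# Generic discrete-Laplacian quadratic penalty (a stand-in for the learned gMRF).
R = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
y = np.array([0.0, 1.0, 0.0, 1.0, 0.0])

x_noreg = pwls(A, W, y, R, beta=0.0)    # no penalty: reproduces the data
x_smooth = pwls(A, W, y, R, beta=1.0)   # penalized: visibly smoothed estimate
```

In the actual algorithm A is the CT system matrix, W comes from the measurement noise model, and the entries of R vary per tissue type, which is what lets the penalty smooth noise without flattening texture.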

  12. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy, using multivariate statistical models built on quantitative adiposity image features. A dataset of CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted seven adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression, and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all three statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  13. Airway Tree Segmentation in Serial Block-Face Cryomicrotome Images of Rat Lungs

    PubMed Central

    Bauer, Christian; Krueger, Melissa A.; Lamm, Wayne J.; Smith, Brian J.; Glenny, Robb W.; Beichel, Reinhard R.

    2014-01-01

    A highly-automated method for the segmentation of airways in serial block-face cryomicrotome images of rat lungs is presented. First, a point inside the trachea is manually specified. Then, a set of candidate airway centerline points is automatically identified. By utilizing a novel path extraction method, a centerline path between the root of the airway tree and each point in the set of candidate centerline points is obtained. Local disturbances are robustly handled by this novel path extraction approach, which avoids the shortcut problem of standard minimum cost path algorithms. The union of all centerline paths is utilized to generate an initial airway tree structure, and a pruning algorithm is applied to automatically remove erroneous subtrees or branches. Finally, a surface segmentation method is used to obtain the airway lumen. The method was validated on five image volumes of Sprague-Dawley rats. Based on an expert-generated independent standard, an assessment of airway identification and lumen segmentation performance was conducted. The average airway detection sensitivity was 87.4%, with a 95% confidence interval (CI) of (84.9, 88.6)%. A plot of sensitivity as a function of airway radius is provided. The combined estimate of airway detection specificity was 100%, with a 95% CI of (99.4, 100)%. The average number and diameter of terminal airway branches were 1179 and 159 μm, respectively. Segmentation results include airways up to 31 generations. The regression intercept and slope of airway radius measurements derived from the final segmentations were estimated to be 7.22 μm and 1.005, respectively. The developed approach enables quantitative studies of physiology and lung diseases in rats that require detailed geometric airway models. PMID:23955692
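    The baseline the paper improves upon is standard minimum-cost path extraction, i.e. Dijkstra's algorithm over a weighted graph. The toy graph below (hypothetical branch names and costs) even exhibits the "shortcut problem" the abstract mentions: the cheapest path to a left-lung branch cuts through the nearby right bronchus instead of following the anatomical tree.

```python
import heapq

def dijkstra(graph, src, dst):
    """Minimum-cost path in a graph {node: [(neighbor, cost), ...]}.
    Returns (total_cost, path) or (inf, []) if dst is unreachable."""
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Toy airway graph: the cheap "right"->"left" edge models two anatomically
# close branches, creating a shortcut that a naive minimum-cost path takes.
airway = {
    "trachea": [("left", 2.0), ("right", 1.0)],
    "right": [("rb1", 1.0), ("left", 0.5)],
    "left": [("lb1", 1.0)],
}
cost, path = dijkstra(airway, "trachea", "lb1")
# -> cost 2.5 via ["trachea", "right", "left", "lb1"], not the direct branch
```

The paper's contribution is precisely a path extraction scheme that avoids committing to such shortcuts while keeping the robustness of cost-based search.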

  14. Classification of Individual Well-Being Scores for the Determination of Adverse Health and Productivity Outcomes in Employee Populations

    PubMed Central

    Sears, Lindsay E.; Coberley, Carter R.; Pope, James E.

    2013-01-01

    Adverse health and productivity outcomes have imposed a considerable economic burden on employers. To facilitate optimal worksite intervention designs tailored to differing employee risk levels, the authors established cutoff points for an Individual Well-Being Score (IWBS) based on a global measure of well-being. Cross-sectional associations between IWBS and adverse health and productivity outcomes, including high health care cost, emergency room visits, short-term disability days, absenteeism, presenteeism, low job performance ratings, and low intentions to stay with the employer, were studied in a sample of 11,702 employees from a large employer. Receiver operating characteristic curves were evaluated to detect a single optimal cutoff value of IWBS for predicting 2 or more adverse outcomes. More granular segmentation was achieved by computing relative risks of each adverse outcome from logistic regressions accounting for sociodemographic characteristics. Results showed strong and significant nonlinear associations between IWBS and health and productivity outcomes. An IWBS of 75 was found to be the optimal single cutoff point to discriminate 2 or more adverse outcomes. Logistic regression models found abrupt reductions of relative risk also clustered at IWBS cutoffs of 53, 66, and 88, in addition to 75, which segmented employees into high, high-medium, medium, low-medium, and low risk groups. To determine validity and generalizability, cutoff values were applied in a smaller employee population (N=1853) and confirmed significant differences between risk groups across health and productivity outcomes. The reported segmentation of IWBS into discrete cohorts based on risk of adverse health and productivity outcomes should facilitate well-being comparisons and worksite interventions. (Population Health Management 2013;16:90–98) PMID:23013034

  15. Geomorphological and structural characterization of the southern Weihe Graben, central China: Implications for fault segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Yali; He, Chuanqi; Rao, Gang; Yan, Bing; Lin, Aiming; Hu, Jianmin; Yu, Yangli; Yao, Qi

    2018-01-01

    The Cenozoic graben systems around the tectonically stable Ordos Block, central China, are considered ideal places for investigating active deformation within continental rifts; one example is the Weihe Graben at the southern margin, with high historical seismicity (e.g., the 1556 M 8.5 Huaxian great earthquake). However, previous investigations have mostly focused on the active structures in the eastern and northern parts of this graben; tectonic activity along the northern margin of the Qinling Mountains in the southwest has not yet been systematically investigated. In this study, based on digital elevation models (DEMs), we carried out geomorphological analysis to evaluate the relative tectonic activity along the whole South Border Fault (SBF). On the basis of field observations, high-resolution DEMs acquired by small unmanned aerial vehicles (sUAVs) using structure-from-motion techniques, and radiocarbon (14C) age dating, we demonstrate that: 1) tectonic activity along the SBF changes along strike, being higher in the eastern sector; 2) seven major segment boundaries can be identified, where the fault changes its strike and shows lower tectonic activity; 3) the fault segment between the cities of Huaxian and Huayin, characterized by almost pure normal slip, has been active during the Holocene. These findings provide a basis for further investigation of seismic risk in the densely populated Weihe Graben. Supplementary material includes the values and classification of the geomorphic indices (Table S2), morphological features of the stream long profiles (Nos. 1-75) with corresponding SLK values (Fig. S1), and a comparison of geomorphological parameters derived from different DEMs (90-m SRTM and 30-m ASTER GDEM): HI values, HI linear regression, mean basin slope, and mean slope linear regression (Fig. S2).

  16. SU-C-BRA-07: Virtual Bronchoscopy-Guided IMRT Planning for Mapping and Avoiding Radiation Injury to the Airway Tree in Lung SAbR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawant, A; Modiri, A; Bland, R

    Purpose: Post-treatment radiation injury to central and peripheral airways is a potentially important, yet under-investigated determinant of toxicity in lung stereotactic ablative radiotherapy (SAbR). We integrate virtual bronchoscopy technology into the radiotherapy planning process to spatially map and quantify the radiosensitivity of bronchial segments, and propose novel IMRT planning that limits airway dose through non-isotropic intermediate- and low-dose spillage. Methods: Pre- and ∼8.5-month post-SAbR diagnostic-quality CT scans were retrospectively collected from six NSCLC patients (50–60 Gy in 3–5 fractions). From each scan, ∼5 branching levels of the bronchial tree were segmented using LungPoint, a virtual bronchoscopic navigation system. The pre-SAbR CT and the segmented bronchial tree were imported into the Eclipse treatment planning system and deformably registered to the planning CT. The five-fraction equivalent dose from the clinically delivered plan was calculated for each segment using the Universal Survival Curve model. The pre- and post-SAbR CTs were used to evaluate radiation-induced segmental collapse. Two of six patients exhibited significant segmental collapse with associated atelectasis and fibrosis, and were re-planned using IMRT. Results: Multivariate stepwise logistic regression over six patients (81 segments) showed that D0.01cc (minimum point dose within the 0.01 cc receiving the highest dose) was a significant independent factor associated with collapse (odds ratio=1.17, p=0.010). The D0.01cc threshold for collapse was 57 Gy, above which the collapse rate was 45%. In the two patients exhibiting segmental collapse, 22 out of 32 segments showed D0.01cc >57 Gy. IMRT re-planning reduced D0.01cc below 57 Gy in 15 of the 22 segments (68%) while simultaneously achieving the original clinical plan objectives for PTV coverage and OAR sparing.
Conclusion: Our results indicate that the administration of lung SAbR can result in significant injury to bronchial segments, potentially impairing post-SAbR lung function. To our knowledge, this is the first investigation of functional avoidance based on mapping and minimizing dose to individual bronchial segments. The presenting author receives research funding from Varian Medical Systems, Elekta, and VisionRT.
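
    The reported odds ratio of 1.17 for D0.01cc implies a logistic-regression coefficient of ln(1.17) per Gy. A minimal sketch of how such a coefficient translates into relative odds of segmental collapse (the per-Gy interpretation and the 10-Gy comparison are illustrative assumptions, not values from the abstract):

    ```python
    import math

    # Reported odds ratio for segmental collapse per unit increase in D0.01cc
    odds_ratio = 1.17
    beta = math.log(odds_ratio)  # the underlying logistic-regression coefficient

    # Multiplicative change in the odds of collapse when D0.01cc rises by 10 Gy,
    # e.g. from 50 Gy to 60 Gy: exp(beta * 10) == 1.17 ** 10
    odds_multiplier_10gy = math.exp(beta * 10)
    print(round(odds_multiplier_10gy, 2))  # ~4.81
    ```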

  17. MR elastography in primary sclerosing cholangitis: correlating liver stiffness with bile duct strictures and parenchymal changes.

    PubMed

    Bookwalter, Candice A; Venkatesh, Sudhakar K; Eaton, John E; Smyrk, Thomas D; Ehman, Richard L

    2018-04-07

    To determine correlation of liver stiffness measured by MR Elastography (MRE) with biliary abnormalities on MR Cholangiopancreatography (MRCP) and MRI parenchymal features in patients with primary sclerosing cholangitis (PSC). Fifty-five patients with PSC who underwent MRI of the liver with MRCP and MRE were retrospectively evaluated. Two board-certified abdominal radiologists in agreement reviewed the MRI, MRCP, and MRE images. The biliary tree was evaluated for stricture, dilatation, wall enhancement, and thickening at segmental duct, right main duct, left main duct, and common bile duct levels. Liver parenchyma features including signal intensity on T2W and DWI, and hyperenhancement in arterial, portal venous, and delayed phase were evaluated in nine Couinaud liver segments. Atrophy or hypertrophy of segments, cirrhotic morphology, varices, and splenomegaly were scored as present or absent. Regions of interest were placed in each of the nine segments on stiffness maps wherever available and liver stiffness (LS) was recorded. Mean segmental LS, right lobar (V-VIII), left lobar (I-III, and IVA, IVB), and global LS (average of all segments) were calculated. Spearman rank correlation analysis was performed for significant correlation. Features with significant correlation were then analyzed for significant differences in mean LS. Multiple regression analysis of MRI and MRCP features was performed for significant correlation with elevated LS. A total of 439/495 segments were evaluated and 56 segments not included in MRE slices were excluded for correlation analysis. Mean segmental LS correlated with the presence of strictures (r = 0.18, p < 0.001), T2W hyperintensity (r = 0.38, p < 0.001), DWI hyperintensity (r = 0.30, p < 0.001), and hyperenhancement of segment in all three phases. Mean LS of atrophic and hypertrophic segments were significantly higher than normal segments (7.07 ± 3.6 and 6.67 ± 3.26 vs. 5.1 ± 3.6 kPa, p < 0.001). 
In multiple regression analysis, only the presence of segmental strictures (p < 0.001), T2W hyperintensity (p = 0.01), and segmental hypertrophy (p < 0.001) were significantly associated with elevated segmental LS. Only left ductal stricture correlated with left lobe LS (r = 0.41, p = 0.018). Global LS correlated significantly with CBD stricture (r = 0.31, p = 0.02), number of segmental strictures (r = 0.28, p = 0.04), splenomegaly (r = 0.56, p < 0.001), and varices (r = 0.58, p < 0.001). In PSC, there is a low but positive correlation between segmental LS and segmental duct strictures. Segments with increased LS show T2 hyperintensity, DWI hyperintensity, and post-contrast hyperenhancement. Global liver stiffness shows a moderate correlation with the number of segmental strictures and correlates significantly with spleen stiffness, splenomegaly, and varices.
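
    The segment-level associations above are Spearman rank correlations. A minimal pure-Python sketch of the statistic for tie-free samples (the data values below are made up purely for illustration):

    ```python
    def spearman_rho(x, y):
        """Spearman rank correlation for tie-free samples:
        rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
        n = len(x)
        def ranks(v):
            order = sorted(v)
            return [order.index(val) + 1 for val in v]  # 1-based ranks (no ties)
        d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    # Perfectly monotone pairs give rho = 1.0; a reversed ordering gives -1.0
    print(spearman_rho([4.2, 5.1, 6.3, 7.9], [1.1, 2.4, 3.0, 3.8]))  # 1.0
    ```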

  18. Multivariate random-parameters zero-inflated negative binomial regression model: an application to estimate crash frequencies at intersections.

    PubMed

    Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan

    2014-09-01

    Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus on high-risk situations and develop safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. However, a key issue with correlated multivariate crash data is the excess of zero observations: as crash counts are disaggregated into many categories, the number of crash-free samples increases. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of MZINB and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the MRZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
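
    The zero-inflated negative binomial places extra probability mass at zero on top of an ordinary NB count distribution. A stdlib-only sketch of the ZINB probability mass function (parameter values are illustrative; the paper's MRZINB additionally carries multivariate random parameters, which are omitted here):

    ```python
    from math import exp, lgamma, log

    def zinb_pmf(k, mu, alpha, pi):
        """P(Y = k) under a zero-inflated negative binomial:
        mu = NB mean, alpha = dispersion (Var = mu + alpha * mu**2),
        pi = probability of a structural (excess) zero."""
        r = 1.0 / alpha  # NB shape ("size") parameter
        log_nb = (lgamma(k + r) - lgamma(r) - lgamma(k + 1)
                  + r * log(r / (r + mu)) + k * log(mu / (r + mu)))
        nb = exp(log_nb)
        return pi + (1.0 - pi) * nb if k == 0 else (1.0 - pi) * nb

    # Zero inflation raises P(Y = 0) relative to the plain NB (pi = 0)
    print(zinb_pmf(0, mu=3.0, alpha=0.5, pi=0.3),
          zinb_pmf(0, mu=3.0, alpha=0.5, pi=0.0))
    ```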

  19. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.

  20. Objective estimation of tropical cyclone innercore surface wind structure using infrared satellite images

    NASA Astrophysics Data System (ADS)

    Zhang, Changjiang; Dai, Lijie; Ma, Leiming; Qian, Jinfang; Yang, Bo

    2017-10-01

    An objective technique is presented for estimating tropical cyclone (TC) innercore two-dimensional (2-D) surface wind field structure using infrared satellite imagery and machine learning. For a TC with an eye, the eye contour is first segmented by a geodesic active contour model, from which the eye circumference is obtained as the TC eye size. A mathematical model is then established between the eye size and the radius of maximum wind obtained from past official TC reports to derive the 2-D surface wind field within the TC eye. Meanwhile, composite information about the latitude of the TC center, surface maximum wind speed, TC age, and critical wind radii of 34- and 50-kt winds is combined to build another mathematical model for deriving the innercore wind structure. Least squares support vector machine (LSSVM), radial basis function neural network (RBFNN), and linear regression are then introduced, respectively, into the two mathematical models, which are tested with sensitivity experiments on real TC cases. Verification shows that the innercore 2-D surface wind field structure estimated by LSSVM is better than that of RBFNN and linear regression.

  1. Research in the application of spectral data to crop identification and assessment, volume 2

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T. (Principal Investigator); Hixson, M. M.; Bauer, M. E.

    1980-01-01

    The development of spectrometry crop development stage models is discussed with emphasis on models for corn and soybeans. One photothermal and four thermal meteorological models are evaluated. Spectral data were investigated as a source of information for crop yield models. Intercepted solar radiation and soil productivity are identified as factors related to yield which can be estimated from spectral data. Several techniques for machine classification of remotely sensed data for crop inventory were evaluated. Early season estimation, training procedures, the relationship of scene characteristics to classification performance, and full frame classification methods were studied. The optimal level for combining area and yield estimates of corn and soybeans is assessed utilizing current technology: digital analysis of LANDSAT MSS data on sample segments to provide area estimates and regression models to provide yield estimates.

  2. The effects of new pricing and copayment schemes for pharmaceuticals in South Korea.

    PubMed

    Lee, Iyn-Hyang; Bloor, Karen; Hewitt, Catherine; Maynard, Alan

    2012-01-01

    This study examined the effect of new Korean pricing and copayment schemes for pharmaceuticals (1) on per patient drug expenditure, utilisation and unit prices of overall pharmaceuticals; (2) on the utilisation of essential medications and (3) on the utilisation of less costly alternatives to the study medication. Interrupted time series analysis using retrospective observational data. The increasing trend of per patient drug expenditure fell gradually after the introduction of a new copayment scheme. The segmented regression model suggested that per patient drug expenditure might decrease by about 12% 1 year after the copayment increase, compared with the absence of such a policy, with few changes in overall utilisation and unit prices. The level of savings was much smaller when the new price scheme was included, while the effects of a price cut were inconclusive due to the short time period before an additional policy change. Based on the segmented regression models, we estimate that the number of patients filling their antihyperlipidemics prescriptions decreased by 18% in the corresponding period. Those prescribed generic and brand-named antihyperlipidemics declined by around 16 and 19%, respectively, indicating little evidence of generic substitution resulting from the copayment increase. Few changes were found in the use of antihypertensives. The policies under consideration appear to contain costs not by the intended mechanisms, such as substituting generics for brand name products, but by reducing patients' access to costly therapies regardless of clinical necessity. Thus, concerns were raised about potentially compromising overall health and loss of equity in pharmaceutical utilisation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
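
    The segmented regression used in this interrupted time series fits a pre-policy level and trend plus a level change and a trend change at the policy date. A minimal sketch with hypothetical, noise-free monthly expenditure data (all numbers and variable names below are invented for illustration, not the study's estimates):

    ```python
    import numpy as np

    # Segmented (interrupted time series) regression:
    #   y = b0 + b1*t + b2*post + b3*(t - t0)*post
    # b2 = immediate level change at the intervention, b3 = change in trend.
    t0 = 12                                   # hypothetical policy month
    t = np.arange(24, dtype=float)
    post = (t >= t0).astype(float)
    y = 100.0 + 2.0 * t - 8.0 * post - 1.5 * (t - t0) * post  # synthetic series

    # Ordinary least squares on the segmented design matrix
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
    print(round(b2, 2), round(b3, 2))  # -8.0 -1.5: level drop and slope change
    ```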

  3. Pondering the procephalon: the segmental origin of the labrum.

    PubMed

    Haas, M S; Brown, S J; Beeman, R W

    2001-02-01

    With accumulating evidence for the appendicular nature of the labrum, the question of its actual segmental origin remains. Two existing insect head segmentation models, the linear and S-models, are reviewed, and a new model introduced. The L-/Bent-Y model proposes that the labrum is a fusion of the appendage endites of the intercalary segment and that the stomodeum is tightly integrated into this segment. This model appears to explain a wider variety of insect head segmentation phenomena. Embryological, histological, neurological and molecular evidence supporting the new model is reviewed.

  4. Postural Consequences of Cervical Sagittal Imbalance: A Novel Laboratory Model.

    PubMed

    Patwardhan, Avinash G; Havey, Robert M; Khayatzadeh, Saeed; Muriuki, Muturi G; Voronov, Leonard I; Carandang, Gerard; Nguyen, Ngoc-Lam; Ghanayem, Alexander J; Schuit, Dale; Patel, Alpesh A; Smith, Zachary A; Sears, William

    2015-06-01

    A biomechanical study using human spine specimens. To study postural compensations in lordosis angles that are necessary to maintain horizontal gaze in the presence of forward head posture and increasing T1 sagittal tilt. Forward head posture relative to the shoulders, assessed radiographically using the horizontal offset distance between the C2 and C7 vertebral bodies (C2-C7 [sagittal vertical alignment] SVA), is a measure of global cervical imbalance. This may result from kyphotic alignment of cervical segments, muscle imbalance, as well as malalignment of thoracolumbar spine. Ten cadaveric cervical spines (occiput-T1) were tested. The T1 vertebra was anchored to a tilting and translating base. The occiput was free to move vertically but its angular orientation was constrained to ensure horizontal gaze regardless of sagittal imbalance. A 5-kg mass was attached to the occiput to mimic head weight. Forward head posture magnitude and T1 tilt were varied and motions of individual vertebrae were measured to calculate C2-C7 SVA and lordosis across C0-C2 and C2-C7. Increasing C2-C7 SVA caused flexion of lower cervical (C2-C7) segments and hyperextension of suboccipital (C0-C1-C2) segments to maintain horizontal gaze. Increasing kyphotic T1 tilt primarily increased lordosis across the C2-C7 segments. Regression models were developed to predict the compensatory C0-C2 and C2-C7 angulation needed to maintain horizontal gaze given values of C2-C7 SVA and T1 tilt. This study established predictive relationships between radiographical measures of forward head posture, T1 tilt, and postural compensations in the cervical lordosis angles needed to maintain horizontal gaze. The laboratory model predicted that normalization of C2-C7 SVA will reduce suboccipital (C0-C2) hyperextension, whereas T1 tilt reduction will reduce the hyperextension in the C2-C7 segments. 
The predictive relationships may help in planning corrective strategy in patients experiencing neck pain, which may be attributed to sagittal malalignment.

  5. Complete regression of myocardial involvement associated with lymphoma following chemotherapy.

    PubMed

    Vinicki, Juan Pablo; Cianciulli, Tomás F; Farace, Gustavo A; Saccheri, María C; Lax, Jorge A; Kazelian, Lucía R; Wachs, Adolfo

    2013-09-26

    Cardiac involvement as an initial presentation of malignant lymphoma is a rare occurrence. We describe the case of a 26-year-old man who had initially been diagnosed with myocardial infiltration on an echocardiogram, presenting with a testicular mass and unilateral peripheral facial paralysis. On admission, electrocardiograms (ECG) revealed negative T-waves in all leads and ST-segment elevation in the inferior leads. On two-dimensional echocardiography, there was infiltration of the pericardium with mild effusion, infiltrative thickening of the aortic walls, both atria and the interatrial septum, and mildly depressed systolic function of both ventricles. An axillary biopsy was performed and reported as a T-cell lymphoblastic lymphoma (T-LBL). Following diagnosis and staging, chemotherapy was started. Twenty-two days after finishing the first cycle of chemotherapy, the ECG showed regression of T-wave changes in all leads and normalization of the ST-segment elevation in the inferior leads. Follow-up two-dimensional echocardiography confirmed regression of the myocardial infiltration. This case report illustrates a lymphoma presenting with testicular mass, unilateral peripheral facial paralysis and myocardial involvement, and demonstrates that regression of infiltration can be achieved by intensive chemotherapy treatment. To our knowledge, there are no reported cases of T-LBL presenting as a testicular mass and unilateral peripheral facial paralysis, with complete regression of myocardial involvement.

  6. Allometric associations between body size, shape, and 100-m butterfly speed performance.

    PubMed

    Sammoud, Senda; Nevill, Alan M; Negra, Yassine; Bouguezzi, Raja; Chaabene, Helmi; Hachana, Younés

    2018-05-01

    This study aimed to estimate the optimal body size, limb-segment length, and girth or breadth ratios associated with 100-m butterfly speed performance in swimmers. One hundred sixty-seven swimmers took part as subjects (male: N=103; female: N=64). Anthropometric measurements comprised height, body mass, skinfolds, arm span, upper-limb length, upper-arm, forearm and hand lengths, lower-limb length, thigh length, leg length, foot length, arm-relaxed girth, forearm girth, wrist girth, thigh girth, calf girth, ankle girth, and biacromial and biiliocristal breadths. To estimate the optimal body size and body composition components associated with 100-m butterfly speed performance, we adopted a multiplicative allometric log-linear regression model, which was refined using backward elimination. Fat mass was the singularly most important whole-body characteristic; height and body mass did not contribute to the model. The allometric model identified that a greater limb-segment length ratio (arm ratio = arm span / forearm length) and limb girth ratio (girth ratio = calf girth / ankle girth) were key to butterfly speed performance. A greater arm-span to forearm-length ratio and a greater calf-girth to ankle-girth ratio suggest that the combination of a longer arm span with a shorter forearm, and of larger calves with smaller ankles, may benefit butterfly swim speed. Greater biacromial and biiliocristal breadths are also a major advantage in butterfly swimming speed performance. Finally, the estimation of these ratios was made possible by adopting a multiplicative allometric model, which confirmed, theoretically, that swim speeds are nearly independent of total body size. The 100-m butterfly speed performance was strongly negatively associated with fat mass and positively associated with the segment length ratio (arm span/forearm length) and girth ratio (calf girth/ankle girth), having controlled for developmental changes with age.
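
    A multiplicative allometric model becomes linear after taking logs, so its exponents can be estimated by ordinary least squares. A sketch with synthetic, noise-free ratios (the coefficients 0.9, 0.4 and 0.6 are invented for illustration, not the study's estimates):

    ```python
    import numpy as np

    # Multiplicative allometric model: speed = a * r1**b * r2**c, where r1 and r2
    # are the segment-length and girth ratios. Taking logs linearises it:
    #   ln(speed) = ln(a) + b*ln(r1) + c*ln(r2)
    rng = np.random.default_rng(0)
    r1 = rng.uniform(4.0, 6.0, 50)            # hypothetical arm-span / forearm ratios
    r2 = rng.uniform(2.0, 3.0, 50)            # hypothetical calf / ankle girth ratios
    speed = 0.9 * r1 ** 0.4 * r2 ** 0.6       # synthetic speeds with known exponents

    # Ordinary least squares on the log-transformed model
    X = np.column_stack([np.ones(50), np.log(r1), np.log(r2)])
    ln_a, b, c = np.linalg.lstsq(X, np.log(speed), rcond=None)[0]
    print(round(b, 3), round(c, 3))  # the exponents 0.4 and 0.6 are recovered
    ```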

  7. Focused Assessment with Sonography for Trauma in weightlessness: a feasibility study

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, Andrew W.; Hamilton, Douglas R.; Nicolaou, Savvas; Sargsyan, Ashot E.; Campbell, Mark R.; Feiveson, Alan; Dulchavsky, Scott A.; Melton, Shannon; Beck, George; Dawson, David L.

    2003-01-01

    BACKGROUND: The Focused Assessment with Sonography for Trauma (FAST) examines for fluid in gravitationally dependent regions. There is no prior experience with this technique in weightlessness, such as on the International Space Station, where sonography is currently the only diagnostic imaging tool. STUDY DESIGN: A ground-based (1 g) porcine model for sonography was developed. We examined both the feasibility and the comparative performance of the FAST examination in parabolic flight. Sonographic detection and fluid behavior were evaluated in four animals during alternating weightlessness (0 g) and hypergravity (1.8 g) periods. During flight, boluses of fluid were incrementally introduced into the peritoneal cavity. Standardized sonographic windows were recorded. Postflight, the video recordings were divided into 169 20-second segments for subsequent interpretation by 12 blinded ultrasonography experts. Reviewers first decided whether a video segment was of sufficient diagnostic quality to analyze (determinate). Determinate segments were then analyzed as containing or not containing fluid. A probit regression model compared the probability of a positive fluid diagnosis to actual fluid levels (0 to 500 mL) under both 0-g and 1.8-g conditions. RESULTS: The in-flight sonographers found real-time scanning and interpretation technically similar to that of terrestrial conditions, as long as restraint was maintained. On blinded review, 80% of the recorded ultrasound segments were considered determinate. The best sensitivity for diagnosis in 0 g was found to be from the subhepatic space, with probability of a positive fluid diagnosis ranging from 9% (no fluid) to 51% (500 mL fluid). CONCLUSIONS: The FAST examination is technically feasible in weightlessness, and merits operational consideration for clinical contingencies in space.
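
    The probit model maps fluid volume to detection probability through the standard normal CDF. Using only the two endpoint probabilities reported above (9% at 0 mL, 51% at 500 mL), the implied coefficients of a probit that is linear in volume can be backed out with the standard library (linearity in volume is an assumption here, made for illustration):

    ```python
    from statistics import NormalDist

    nd = NormalDist()  # standard normal CDF and its inverse

    # Probit model: P(positive diagnosis) = Phi(b0 + b1 * fluid_ml).
    # Solve for b0, b1 from the reported endpoints: 9% at 0 mL, 51% at 500 mL.
    b0 = nd.inv_cdf(0.09)
    b1 = (nd.inv_cdf(0.51) - nd.inv_cdf(0.09)) / 500.0

    def p_positive(fluid_ml):
        return nd.cdf(b0 + b1 * fluid_ml)

    print(round(p_positive(0), 2), round(p_positive(250), 2),
          round(p_positive(500), 2))
    ```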

  8. Estimation of total Length of Femur From Its Fragments in South Indian Population

    PubMed Central

    Solan, Shweta; Kulkarni, Roopa

    2013-01-01

    Introduction: Establishment of the identity of a deceased person assumes great medicolegal importance, and stature is one of the criteria used to establish identity. To estimate the stature of an individual, the length of long bones is needed. Aims and Objectives: To determine the lengths of femoral fragments and compare them with the total length of the femur in a south Indian population, which will help to estimate the stature of the individual using standard regression formulae. Material and Methods: A total of 150 adult, fully ossified, dry, processed femora (72 left and 78 right) were studied. Each femur was divided into five segments by taking predetermined points. The lengths of the five segments and the maximum length of the femur were measured to the nearest millimeter. The values were obtained in cm [mean±S.D.] and the mean total length of femora on the left and right sides was measured. The proportion of each segment to the total length was also calculated, which will help in stature estimation using standard regression formulae. Results: The mean total length of femora was 43.54 ± 2.7 on the left side and 43.42 ± 2.4 on the right. The measurements of segments 1, 2, 3, 4 and 5 were 8.06 ± 0.71, 8.25 ± 1.24, 10.35 ± 2.21, 13.94 ± 1.93 and 2.77 ± 0.53 on the left side and 8.09 ± 0.70, 8.30 ± 1.34, 10.44 ± 1.91, 13.50 ± 1.54 and 3.09 ± 0.41 on the right side. Conclusion: The sample size was 150 (72 left, 78 right) and the p value for all segments was significant (<0.001). When comparison was made between segments of right and left femora, the p value of segment 5 was <0.001. Comparison between different segments of the femur showed significance for all segments. PMID:24298451
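
    The segment-to-total proportions reported above translate directly into an estimator of total femur length from a single fragment; the study's standard regression formulae would then convert total length to stature. A minimal sketch using the left-femur means from the abstract (the 14.5 cm fragment is a made-up measurement for illustration):

    ```python
    # Left-femur means from the abstract (cm): total length and the five segments
    total_left = 43.54
    segment_means_left = [8.06, 8.25, 10.35, 13.94, 2.77]
    proportions = [s / total_left for s in segment_means_left]

    # Estimate total femur length from a measured segment-4 fragment (hypothetical
    # measurement) via its mean proportion of the total length
    measured_segment4 = 14.5
    estimated_total = measured_segment4 / proportions[3]
    print(round(estimated_total, 1))  # estimated total femur length in cm
    ```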

  9. Clustering consumers based on trust, confidence and giving behaviour: data-driven model building for charitable involvement in the Australian not-for-profit sector.

    PubMed

    de Vries, Natalie Jane; Reis, Rodrigo; Moscato, Pablo

    2015-01-01

    Organisations in the Not-for-Profit and charity sector face increasing competition to win time, money and efforts from a common donor base. Consequently, these organisations need to be more proactive than ever. The increased level of communications between individuals and organisations today, heightens the need for investigating the drivers of charitable giving and understanding the various consumer groups, or donor segments, within a population. It is contended that `trust' is the cornerstone of the not-for-profit sector's survival, making it an inevitable topic for research in this context. It has become imperative for charities and not-for-profit organisations to adopt for-profit's research, marketing and targeting strategies. This study provides the not-for-profit sector with an easily-interpretable segmentation method based on a novel unsupervised clustering technique (MST-kNN) followed by a feature saliency method (the CM1 score). A sample of 1,562 respondents from a survey conducted by the Australian Charities and Not-for-profits Commission is analysed to reveal donor segments. Each cluster's most salient features are identified using the CM1 score. Furthermore, symbolic regression modelling is employed to find cluster-specific models to predict `low' or `high' involvement in clusters. The MST-kNN method found seven clusters. Based on their salient features they were labelled as: the `non-institutionalist charities supporters', the `resource allocation critics', the `information-seeking financial sceptics', the `non-questioning charity supporters', the `non-trusting sceptics', the `charity management believers' and the `institutionalist charity believers'. Each cluster exhibits their own characteristics as well as different drivers of `involvement'. The method in this study provides the not-for-profit sector with a guideline for clustering, segmenting, understanding and potentially targeting their donor base better. 
If charities and not-for-profit organisations adopt these strategies, they will be more successful in today's competitive environment.

  10. Clustering Consumers Based on Trust, Confidence and Giving Behaviour: Data-Driven Model Building for Charitable Involvement in the Australian Not-For-Profit Sector

    PubMed Central

    de Vries, Natalie Jane; Reis, Rodrigo; Moscato, Pablo

    2015-01-01

    Organisations in the Not-for-Profit and charity sector face increasing competition to win time, money and effort from a common donor base. Consequently, these organisations need to be more proactive than ever. The increased level of communication between individuals and organisations today heightens the need to investigate the drivers of charitable giving and to understand the various consumer groups, or donor segments, within a population. It is contended that `trust' is the cornerstone of the not-for-profit sector's survival, making it an unavoidable topic for research in this context. It has become imperative for charities and not-for-profit organisations to adopt the research, marketing and targeting strategies of their for-profit counterparts. This study provides the not-for-profit sector with an easily interpretable segmentation method based on a novel unsupervised clustering technique (MST-kNN) followed by a feature saliency method (the CM1 score). A sample of 1,562 respondents from a survey conducted by the Australian Charities and Not-for-profits Commission is analysed to reveal donor segments. Each cluster's most salient features are identified using the CM1 score. Furthermore, symbolic regression modelling is employed to find cluster-specific models that predict `low' or `high' involvement within clusters. The MST-kNN method found seven clusters. Based on their salient features, they were labelled: the `non-institutionalist charities supporters', the `resource allocation critics', the `information-seeking financial sceptics', the `non-questioning charity supporters', the `non-trusting sceptics', the `charity management believers' and the `institutionalist charity believers'. Each cluster exhibits its own characteristics as well as different drivers of `involvement'. The method in this study provides the not-for-profit sector with a guideline for clustering, segmenting, understanding and, potentially, better targeting its donor base. 
If charities and not-for-profit organisations adopt these strategies, they will be more successful in today's competitive environment. PMID:25849547

  11. Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

    PubMed Central

    Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.

    2017-01-01

    Background Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post-contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results We found high correlations between the two platforms for FLAIR, post-contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in the manual and automated segmentation methods applied to these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post-contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. 
As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J; Gu, X; Lu, W

    Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting places more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated dose error derived from the surface distance differences between the candidate and the estimated ground truth label by multiplying by a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the dose error with respect to a simulated voxel shift. The dose errors were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which computes MAD within a given PTV expansion voxel distance, lower MADs than those of OS were observed at the closer distances, from 1 to 8. The dose-error results showed that segmentation with WS produced more accurate results than OS. The mean dose errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy compared with OS. 
The WS provides a dosimetrically weighted label selection strategy for multi-atlas segmentation. CPRIT grant RP150485.

  13. Mapping tissue inhomogeneity in acute myocarditis: a novel analytical approach to quantitative myocardial edema imaging by T2-mapping.

    PubMed

    Baeßler, Bettina; Schaarschmidt, Frank; Dick, Anastasia; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2015-12-23

    The purpose of the present study was to investigate the diagnostic value of T2-mapping in acute myocarditis (ACM) and to define cut-off values for edema detection. Cardiovascular magnetic resonance (CMR) data of 31 patients with ACM were retrospectively analyzed. 30 healthy volunteers (HV) served as a control. In addition to the routine CMR protocol, T2-mapping data were acquired at 1.5 T using a breathhold Gradient-Spin-Echo T2-mapping sequence in six short axis slices. T2-maps were segmented according to the 16-segment AHA model, and segmental T2 values as well as the segmental pixel standard deviation (SD) were analyzed. Mean differences in global myocardial T2 or pixel-SD between HV and ACM patients were only small, lying within the normal range of HV. In contrast, the variation of segmental T2 values and pixel-SD was much larger in ACM patients than in HV. In random forests and multiple logistic regression analyses, the combination of the highest segmental T2 value within each patient (maxT2) and the mean absolute deviation (MAD) of the log-transformed pixel-SD (madSD) over all 16 segments within each patient proved to be the best discriminator between HV and ACM patients, with an AUC of 0.85 in ROC analysis. In classification trees, a combined cut-off of 0.22 for madSD and 68 ms for maxT2 resulted in 83% specificity and 81% sensitivity for detection of ACM. The proposed cut-off values for maxT2 and madSD in the setting of ACM allow edema detection with high sensitivity and specificity and therefore have the potential to overcome the hurdles of T2-mapping for its integration into clinical routine.

  14. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.

  15. [Prescription drug consumption recovery following the co-payment change: Evidence from a regional health service].

    PubMed

    Sánchez, Diego P; Guillén, José J; Torres, Alberto M; Arense, Julián J; López, Ángel; Sánchez, Fernando I

    2015-01-01

    In the past few decades, increasing pharmaceutical expenditures in Spain and other Western countries led to the adoption of reforms intended to reduce this trend. The aim of our study was to analyze whether reforms of the pharmaceutical reimbursement scheme in Spain have been associated with changes in the volume and trend of pharmaceutical consumption. Retrospective observational study in the Region of Murcia of prescription drugs in primary care and outpatient consultations, based on records of medicines prescribed between January 1, 2008 and December 31, 2013. Segmented regression analysis of interrupted time series of prescription drug consumption was performed. Dispensing of all five therapeutic classes fell immediately after the co-payment changes. The segmented regression model suggested that per-patient drug consumption among pensioners may have decreased by about 6.76% (95% CI: -8.66% to -5.19%) in the twelve months after the reform, compared with the absence of such a policy. Furthermore, the slope of the consumption series increased from 6.08 (P<.001) to 12.17 (P<.019). The implementation of co-payment policies could be associated with a significant decrease in the level of prescribed drug use in the Murcia Region, but this effect seems to have been only temporary in the five therapeutic groups analyzed, since almost simultaneously there was an increase in the growth trend. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.
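The interrupted time-series model applied in this study can be sketched as an ordinary least-squares fit with level-change and slope-change terms at the intervention date (all numbers below are synthetic, not the Murcia prescription series):

```python
import numpy as np

# Synthetic monthly consumption series: a baseline trend, then an
# immediate level drop and a slope change at the reform month t0.
rng = np.random.default_rng(0)
n, t0 = 72, 36
t = np.arange(n)
post = (t >= t0).astype(float)
y = 100 + 0.5 * t - 8.0 * post + 0.3 * (t - t0) * post + rng.normal(0, 0.5, n)

# Segmented (interrupted time-series) regression:
#   y = b0 + b1*t + b2*post + b3*(t - t0)*post
X = np.column_stack([np.ones(n), t, post, (t - t0) * post])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# b[2] estimates the immediate level change; b[3] the change in slope.
print(b)
```

The abstract's finding — a level drop followed by a steeper post-reform slope — corresponds to a negative b[2] and a positive b[3].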

  16. Blood manganese as an exposure biomarker: State of the evidence

    PubMed Central

    Baker, Marissa G.; Simpson, Christopher D.; Stover, Bert; Sheppard, Lianne; Checkoway, Harvey; Racette, Brad A.; Seixas, Noah S.

    2014-01-01

    Despite evidence of adverse health effects resulting from exposure to manganese (Mn), biomarkers of exposure are poorly understood. To enhance understanding, mean blood Mn (MnB) and mean air Mn (MnA) were extracted from 63 exposure groups in 24 published papers, and the relationship was modeled using segmented regression. On a log/log scale, a positive association between MnA and MnB was observed among studies reporting MnA concentrations above about 10 μg/m3, although interpretation is limited by largely cross-sectional data, study design variability, and differences in exposure monitoring methods. Based on the results of the segmented regression, we hypothesize that below a concentration of about 10 μg/m3, Mn in the body is dominated by dietary Mn, and additional inhaled Mn causes only negligible changes in Mn levels unless the inhaled amount is substantial. However, stronger study designs are required to account for the temporal characteristics of the MnA-to-MnB relationship, which reflect the underlying physiology and toxicokinetics of Mn uptake and distribution. Thus, we present an inception cohort study design we have conducted among apprentice welders and the analytical strengths this design offers. Determining whether blood could be a useful Mn biomarker for industrial hygienists in general industry requires additional time-specific analyses, which our inception cohort study design will allow. PMID:24579750
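A hedged sketch of the broken-stick fit described above — flat below a breakpoint on the log/log scale, rising linearly above it — using a simple grid search over candidate breakpoints (synthetic data standing in for the pooled MnA/MnB values):

```python
import numpy as np

rng = np.random.default_rng(1)
log_a = np.linspace(-1, 3, 80)      # log10 air Mn, synthetic
bp_true = 1.0                        # breakpoint at 10 ug/m3 (log10 = 1)
log_b = 0.9 + 0.4 * np.clip(log_a - bp_true, 0, None) + rng.normal(0, 0.05, 80)

def sse_at(bp):
    # Flat below bp, linear above: y = c0 + c1 * max(x - bp, 0);
    # with bp fixed, the coefficients come from ordinary least squares.
    X = np.column_stack([np.ones_like(log_a), np.clip(log_a - bp, 0, None)])
    coef, *_ = np.linalg.lstsq(X, log_b, rcond=None)
    return float(np.sum((log_b - X @ coef) ** 2))

grid = np.linspace(-0.5, 2.5, 121)
bp_hat = grid[np.argmin([sse_at(bp) for bp in grid])]
print(bp_hat)  # should land near log10(10 ug/m3) = 1.0
```

Dedicated segmented-regression routines estimate the breakpoint jointly with the slopes; the grid search keeps the idea visible.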

  17. Gastrointestinal Spatiotemporal mRNA Expression of Ghrelin vs Growth Hormone Receptor and New Growth Yield Machine Learning Model Based on Perturbation Theory.

    PubMed

    Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe

    2016-07-27

    The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at mRNA levels throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that the feeding system and age affected the expression of Ghrelin and GHR through different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on the Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (Supplemental and Grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with a coefficient of determination R(2) of 0.781 for the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful for optimizing feeding management strategies in ruminant production systems.

  18. Gastrointestinal Spatiotemporal mRNA Expression of Ghrelin vs Growth Hormone Receptor and New Growth Yield Machine Learning Model Based on Perturbation Theory

    PubMed Central

    Ran, Tao; Liu, Yong; Li, Hengzhi; Tang, Shaoxun; He, Zhixiong; Munteanu, Cristian R.; González-Díaz, Humberto; Tan, Zhiliang; Zhou, Chuanshe

    2016-01-01

    The management of ruminant growth yield has economic importance. The current work presents a study of the spatiotemporal dynamic expression of Ghrelin and GHR at mRNA levels throughout the gastrointestinal tract (GIT) of kid goats under housing and grazing systems. The experiments show that the feeding system and age affected the expression of Ghrelin and GHR through different mechanisms. Furthermore, the experimental data are used to build new Machine Learning models based on the Perturbation Theory, which can predict the effects of perturbations of Ghrelin and GHR mRNA expression on the growth yield. The models consider eight longitudinal GIT segments (rumen, abomasum, duodenum, jejunum, ileum, cecum, colon and rectum), seven time points (0, 7, 14, 28, 42, 56 and 70 d) and two feeding systems (Supplemental and Grazing feeding) as perturbations from the expected values of the growth yield. The best regression model was obtained using Random Forest, with a coefficient of determination R2 of 0.781 for the test subset. The current results indicate that the non-linear regression model can accurately predict the growth yield and the key nodes during gastrointestinal development, which is helpful for optimizing feeding management strategies in ruminant production systems. PMID:27460882

  19. Determining a Prony Series for a Viscoelastic Material From Time Varying Strain Data

    NASA Technical Reports Server (NTRS)

    Tzikang, Chen

    2000-01-01

    In this study, a method of determining the coefficients in a Prony series representation of a viscoelastic modulus from rate-dependent data is presented. Load-versus-time test data for a sequence of loading segments at different rates are least-squares fitted to a Prony series hereditary integral model of the material tested. A nonlinear least-squares regression algorithm is employed. The measured data include ramp loading, relaxation, and unloading stress-strain data. The resulting Prony series, which captures strain-rate loading and unloading effects, produces an excellent fit to the complex loading sequence.
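The paper fits the Prony coefficients by nonlinear least squares; a common simplification, sketched here, fixes the relaxation times in advance so the coefficients can be recovered by ordinary linear least squares (all material values are illustrative, not from the report):

```python
import numpy as np

# Synthetic relaxation data for a two-term Prony series:
#   E(t) = E_inf + E1*exp(-t/tau1) + E2*exp(-t/tau2)
t = np.linspace(0, 10, 200)
tau = np.array([0.5, 5.0])            # assumed, fixed relaxation times
E_true = np.array([2.0, 30.0, 10.0])  # [E_inf, E1, E2]
E = E_true[0] + E_true[1] * np.exp(-t / tau[0]) + E_true[2] * np.exp(-t / tau[1])

# With the tau_i fixed, E(t) is linear in [E_inf, E1, E2], so ordinary
# least squares recovers the coefficients directly.
X = np.column_stack([np.ones_like(t)] + [np.exp(-t / ti) for ti in tau])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)
print(coef)  # ~ [2.0, 30.0, 10.0]
```

The nonlinear scheme in the paper additionally adjusts the relaxation times and works on the hereditary integral for arbitrary strain histories; this linear variant shows only the coefficient-recovery step.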

  20. New method for calculating a mathematical expression for streamflow recession

    USGS Publications Warehouse

    Rutledge, Albert T.

    1991-01-01

    An empirical method has been devised to calculate the master recession curve, which is a mathematical expression for streamflow recession during times of negligible direct runoff. The method is based on the assumption that the storage-delay factor, which is the time per log cycle of streamflow recession, varies linearly with the logarithm of streamflow. The resulting master recession curve can be nonlinear. The method can be executed by a computer program that reads a data file of daily mean streamflow, then allows the user to select several near-linear segments of streamflow recession. The storage-delay factor for each segment is one of the coefficients of the equation that results from linear least-squares regression. Using results for each recession segment, a mathematical expression of the storage-delay factor as a function of the log of streamflow is determined by linear least-squares regression. The master recession curve, which is a second-order polynomial expression for time as a function of log of streamflow, is then derived using the coefficients of this function.
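The construction can be sketched numerically: fit the storage-delay factor as a linear function of log streamflow, then integrate to obtain the second-order polynomial master recession curve (the storage-delay values below are synthetic, not output of the USGS program):

```python
import numpy as np

# Storage-delay factors K (days per log cycle of recession) measured on
# several near-linear recession segments, paired with each segment's
# log10 streamflow (synthetic values).
log_q = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
k_days = np.array([60.0, 52.0, 45.5, 37.8, 30.1])

# Step 1: K = a + b*log(Q) by linear least squares.
b, a = np.polyfit(log_q, k_days, 1)

# Step 2: since dt = -K d(logQ), integrating from the starting flow logQ0
# gives a second-order polynomial in log(Q):
#   t(logQ) = -a*(logQ - logQ0) - (b/2)*(logQ^2 - logQ0^2)
log_q0 = 2.5  # log10 of streamflow at the start of the recession

def master_curve_time(lq):
    return -a * (lq - log_q0) - 0.5 * b * (lq**2 - log_q0**2)

print(master_curve_time(0.5))  # days elapsed when flow falls to 10^0.5
```

Time is zero at the starting flow and grows as the flow recedes, which matches the "time as a function of log of streamflow" form described in the abstract.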

  1. Using network screening methods to determine locations with specific safety issues: A design consistency case study.

    PubMed

    Butsick, Andrew J; Wood, Jonathan S; Jovanis, Paul P

    2017-09-01

    The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems/programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues, accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed-effects negative binomial regression was used to develop two SPFs using 5 years of crash data and over 8754 km of two-lane rural roadway. The first SPF contained typical roadway elements, while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise were identified. The disparity between SWiPs identified by the two SPFs was evident: 40 unique sites were identified by each model out of the top 220 segments. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
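The empirical Bayes adjustment used in this kind of screening follows the standard negative-binomial formulation: the expected crash frequency is a weighted average of the SPF prediction and the observed count, with the weight set by the overdispersion parameter. A minimal sketch with illustrative numbers (not the study's fitted SPFs):

```python
def eb_expected(spf_pred, observed, overdispersion_k):
    """Empirical Bayes expected crash frequency for one segment.

    The weight approaches 1 (trust the SPF prediction) when the
    negative-binomial overdispersion is small, and shifts toward the
    observed count as overdispersion grows.
    """
    w = 1.0 / (1.0 + overdispersion_k * spf_pred)
    return w * spf_pred + (1.0 - w) * observed

# A segment predicted to have 2.4 crashes/period that recorded 6:
print(eb_expected(2.4, 6, overdispersion_k=0.8))
```

Ranking segments by the gap between this EB estimate and the SPF prediction, under each of the two SPFs, is one way sites with promise are surfaced.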

  2. [Estimating individual tree aboveground biomass of the mid-subtropical forest using airborne LiDAR technology].

    PubMed

    Liu, Feng; Tan, Chang; Lei, Pi-Feng

    2014-11-01

    Taking the Wugang forest farm in the Xuefeng Mountains as the research object, and using airborne light detection and ranging (LiDAR) data acquired under leaf-on conditions together with field data from concomitant plots, this paper assessed the ability of LiDAR technology to estimate the aboveground biomass of mid-subtropical forest. A semi-automated individual-tree segmentation of the LiDAR point cloud was obtained using conditional random fields and optimization methods. Spatial structure, waveform characteristics and topography were calculated as LiDAR metrics from the segmented objects. Statistical models were then built between aboveground biomass from the field data and these LiDAR metrics. The individual tree recognition rates were 93%, 86% and 60% for coniferous, broadleaf and mixed forests, respectively. The adjusted coefficients of determination (R(2)adj) and the root mean squared errors (RMSE) for the three types of forest were 0.83, 0.81 and 0.74, and 28.22, 29.79 and 32.31 t · hm(-2), respectively. The estimation capability of the model based on canopy geometric volume, tree percentile height, slope and waveform characteristics was much better than that of a traditional regression model based on tree height. Therefore, LiDAR metrics from individual trees can facilitate better performance in biomass estimation.

  3. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    PubMed

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). 
The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
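The conversion of a summed score to percent myocardium abnormal is conventionally done by normalising against the maximum possible score of 4 points per segment; a sketch under that assumption (function name is illustrative):

```python
def percent_myocardium_abnormal(summed_score, n_segments):
    # Each segment is scored 0-4, so the maximum summed score is
    # 4 * n_segments; the percentage normalises against that maximum.
    return 100.0 * summed_score / (4 * n_segments)

# A summed stress score of 4 (i.e., "greater than 3") on the
# 20-segment model sits exactly at the 5% cutoff from the abstract:
print(percent_myocardium_abnormal(4, 20))  # 5.0
```

This normalisation is what lets scores from the 20- and 17-segment models be compared on a common percent-myocardium scale.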

  4. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China

    PubMed Central

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-01-01

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions of appropriate size. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are used for comparison on a study area in the Wanzhou district of the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model achieves an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than that of the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very-high-susceptibility zone and the previously investigated landslides. PMID:27187430

  5. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    PubMed

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is first used in our method to segment study areas into a series of prediction regions of appropriate size. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are used for comparison on a study area in the Wanzhou district of the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model achieves an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than that of the traditional SVM-based models. In addition, the landslide susceptibility map obtained by our model demonstrates a strong correlation between the classified very-high-susceptibility zone and the previously investigated landslides.

  6. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results, associated with simpler clustering models, in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each of the segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

  7. Comparison of two approaches for calculation of the geometric and inertial characteristics of the human body of the Bulgarian population.

    PubMed

    Nikolova, Gergana; Toshev, Yuli

    2008-01-01

    On the basis of a representative anthropological investigation of 5290 individuals (2435 males and 2855 females) of the Bulgarian population at the age of 30-40 years (Yordanov et al. [1]) we proposed a 3D biomechanical model of human body of the average Bulgarian male and female and compared two different possible approaches to calculate analytically and to evaluate numerically the corresponding geometric and inertial characteristics of all the segments of the body. In the framework of the first approach, we calculated the positions of the centres of mass of the segments of human body as well as their inertial characteristics merely by using the initial original anthropometrical data, while in the second approach we adjusted the data by using the method based on regression equations. Wherever possible, we presented a comparison of our data with those available in the literature on other Caucasians and determined in which cases the use of which approach is more reliable.

  8. Variable Selection for Road Segmentation in Aerial Images

    NASA Astrophysics Data System (ADS)

    Warnke, S.; Bulatov, D.

    2017-05-01

    For extraction of road pixels from combined image and elevation data, Wegner et al. (2015) proposed classification of superpixels into road and non-road, after which the classification results were refined using minimum cost paths and non-local optimization methods. We believed that the variable set used for classification was to some extent suboptimal, because many variables were redundant while several features known to be useful in photogrammetry and remote sensing were missing. This motivated us to implement a variable selection approach which builds a classification model using portions of the training data and subsets of features, evaluates this model, updates the feature set, and terminates when a stopping criterion is satisfied. The choice of classifier is flexible; however, we tested the approach with logistic regression and random forests, and tailored the evaluation module to the chosen classifier. To guarantee a fair comparison, we kept the segment-based approach and most of the variables from the related work, but extended them with additional, mostly higher-level features. Applying these superior features, removing the redundant ones, and using more accurately acquired 3D data allowed us to keep stable, or even reduce, the misclassification error on a challenging dataset.

  9. Segmentation of singularity maps in the context of soil porosity

    NASA Astrophysics Data System (ADS)

    Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.

    2016-04-01

    Geochemical exploration has seen increasing interest in, and benefit from, fractal (power-law) models for characterizing geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name a few examples. These methods are based on singularity maps of a measure, which at each point define areas with self-similar properties that appear as power-law relationships in concentration-area plots (C-A method). The C-A method combined with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale computed tomography (CT) soil images (Martín-Sotoca et al., 2015). Unlike image segmentation based on global thresholding, the "Singularity-CA" method quantifies the local scaling property of the grayscale value map in the space domain and determines the intensity of local singularities. It can be used as a high-pass-filter technique to enhance high-frequency patterns, usually regarded as anomalies when applied to maps. In this work we pay special attention to how to select the singularity thresholds in the C-A plot for segmenting the image. We compare two methods: 1) the cross point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity detection. REFERENCES Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011). Delineation of mineralization zones in porphyry Cu deposits by fractal concentration-volume modeling. Journal of Geochemical Exploration, 108, 220-232. Martín-Sotoca, J. J., Tarquis, A. M., Saa-Requejo, A. and Grau, J. B. (2015). Pore detection in Computed Tomography (CT) soil images through singularity map analysis. Oral presentation at PedoFract VIII Congress (June, La Coruña, Spain).

  10. An ex post facto evaluation framework for place-based police interventions.

    PubMed

    Braga, Anthony A; Hureau, David M; Papachristos, Andrew V

    2011-12-01

    A small but growing body of research evidence suggests that place-based police interventions generate significant crime control gains. While place-based policing strategies have been adopted by a majority of U.S. police departments, very few agencies make a priori commitments to rigorous evaluations. Recent methodological developments were applied to conduct a rigorous ex post facto evaluation of the Boston Police Department's Safe Street Team (SST) hot spots policing program. A nonrandomized quasi-experimental design was used to evaluate the violent crime control benefits of the SST program at treated street segments and intersections relative to untreated street segments and intersections. Propensity score matching techniques were used to identify comparison places in Boston. Growth curve regression models were used to analyze violent crime trends at treatment places relative to control places. UNITS OF ANALYSIS: Using computerized mapping and database software, a micro-level place database of violent index crimes at all street segments and intersections in Boston was created. Yearly counts of violent index crimes between 2000 and 2009 at the treatment and comparison street segments and intersections served as the key outcome measure. The SST program was associated with a statistically significant reduction in violent index crimes at the treatment places relative to the comparison places without displacing crime into proximate areas. To overcome the challenges of evaluation in real-world settings, evaluators need to continuously develop innovative approaches that take advantage of new theoretical and methodological approaches.

  11. A new fractional order derivative based active contour model for colon wall segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong

    2018-02-01

    Segmentation of the colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around the colon wall, accurate segmentation of both the inner and outer wall boundaries is very challenging. In this paper, based on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images were automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the region-based Chan-Vese model is incorporated as an energy term so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional order derivative energy term is introduced to preserve low-frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising for automatic segmentation of the colon wall, thus facilitating computer-aided detection of initial colonic polyp candidates via CTC.

  12. Complete regression of myocardial involvement associated with lymphoma following chemotherapy

    PubMed Central

    Vinicki, Juan Pablo; Cianciulli, Tomás F; Farace, Gustavo A; Saccheri, María C; Lax, Jorge A; Kazelian, Lucía R; Wachs, Adolfo

    2013-01-01

    Cardiac involvement as an initial presentation of malignant lymphoma is a rare occurrence. We describe the case of a 26-year-old man who presented with a testicular mass and unilateral peripheral facial paralysis and had initially been diagnosed with myocardial infiltration on an echocardiogram. On admission, electrocardiograms (ECG) revealed negative T-waves in all leads and ST-segment elevation in the inferior leads. On two-dimensional echocardiography, there was infiltration of the pericardium with mild effusion, infiltrative thickening of the aortic walls, both atria and the interatrial septum, and mildly depressed systolic function of both ventricles. An axillary biopsy was performed and reported as a T-cell lymphoblastic lymphoma (T-LBL). Following diagnosis and staging, chemotherapy was started. Twenty-two days after finishing the first cycle of chemotherapy, the ECG showed regression of T-wave changes in all leads and normalization of the ST-segment elevation in the inferior leads. A follow-up two-dimensional echocardiogram confirmed regression of the myocardial infiltration. This case report illustrates a lymphoma presenting with testicular mass, unilateral peripheral facial paralysis and myocardial involvement, and demonstrates that regression of infiltration can be achieved by intensive chemotherapy treatment. To our knowledge, there are no reported cases of T-LBL presenting as a testicular mass and unilateral peripheral facial paralysis, with complete regression of myocardial involvement. PMID:24109501

  13. DRhoGEF2 and Diaphanous Regulate Contractile Force during Segmental Groove Morphogenesis in the Drosophila Embryo

    PubMed Central

    Mulinari, Shai; Barmchi, Mojgan Padash

    2008-01-01

    Morphogenesis of the Drosophila embryo is associated with dynamic rearrangement of the actin cytoskeleton mediated by small GTPases of the Rho family. These GTPases act as molecular switches that are activated by guanine nucleotide exchange factors. One of these factors, DRhoGEF2, plays an important role in the constriction of actin filaments during pole cell formation, blastoderm cellularization, and invagination of the germ layers. Here, we show that DRhoGEF2 is equally important during morphogenesis of segmental grooves, which become apparent as tissue infoldings during mid-embryogenesis. Examination of DRhoGEF2-mutant embryos indicates a role for DRhoGEF2 in the control of cell shape changes during segmental groove morphogenesis. Overexpression of DRhoGEF2 in the ectoderm recruits myosin II to the cell cortex and induces cell contraction. At groove regression, DRhoGEF2 is enriched in cells posterior to the groove that undergo apical constriction, indicating that groove regression is an active process. We further show that the Formin Diaphanous is required for groove formation and strengthens cell junctions in the epidermis. Morphological analysis suggests that Dia regulates cell shape in a way distinct from DRhoGEF2. We propose that DRhoGEF2 acts through Rho1 to regulate acto-myosin constriction but not Diaphanous-mediated F-actin nucleation during segmental groove morphogenesis. PMID:18287521

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios Velazquez, E; Parmar, C; Narayan, V

    Purpose: To compare the complementary value of quantitative radiomic features to that of radiologist-annotated semantic features in predicting EGFR mutations in lung adenocarcinomas. Methods: Pre-operative CT images of 258 lung adenocarcinoma patients were available. Tumors were segmented using the single-click ensemble segmentation algorithm. A set of radiomic features was extracted using 3D-Slicer. Test-retest reproducibility and unsupervised dimensionality reduction were applied to select a subset of reproducible and independent radiomic features. Twenty semantic annotations were scored by an expert radiologist, describing the tumor, surrounding tissue and associated findings. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative radiomic and semantic features in 172 patients (training set, temporal split). Radiomic, semantic and combined radiomic-semantic logistic regression models to predict EGFR mutations were evaluated in an independent validation dataset of 86 patients using the area under the receiver operating curve (AUC). Results: EGFR mutations were found in 77/172 (45%) and 39/86 (45%) of the training and validation sets, respectively. Univariate AUCs showed a similar range for both feature types: radiomics median AUC = 0.57 (range: 0.50-0.62); semantic median AUC = 0.53 (range: 0.50-0.64, Wilcoxon p = 0.55). After MRMR feature selection, the best-performing radiomic, semantic, and radiomic-semantic logistic regression models for EGFR mutations showed a validation AUC of 0.56 (p = 0.29), 0.63 (p = 0.063) and 0.67 (p = 0.004), respectively. Conclusion: Quantitative volumetric and textural radiomic features complement the qualitative and semi-quantitative radiologist annotations. The prognostic value of informative qualitative semantic features such as cavitation and lobulation is increased with the addition of quantitative textural features from the tumor region.

  15. Trends in Global Vegetation Activity and Climatic Drivers Indicate a Decoupled Response to Climate Change.

    PubMed

    Schut, Antonius G T; Ivits, Eva; Conijn, Jacob G; Ten Brink, Ben; Fensholt, Rasmus

    2015-01-01

    Detailed understanding of a possible decoupling between climatic drivers of plant productivity and the response of ecosystem vegetation is required. We compared trends in six NDVI metrics (1982-2010) derived from the GIMMS3g dataset with modelled biomass productivity and assessed uncertainty in trend estimates. Annual total biomass weight (TBW) was calculated with the LINPAC model. Trends were determined using a simple linear regression, a Theil-Sen median slope and a piecewise regression (PWR) with two segments. Values of NDVI metrics were related to Net Primary Production (MODIS-NPP) and TBW per biome and land-use type. The simple linear and Theil-Sen trends did not differ much, whereas PWR increased the fraction of explained variation, depending on the NDVI metric considered. A positive trend in TBW, indicating more favorable climatic conditions, was found for 24% of pixels on land, and a negative trend for 5%. A decoupled trend, indicating a positive TBW trend alongside a monotonic negative or segmented negative NDVI trend, was observed for 17-36% of all productive areas depending on the NDVI metric used. For only 1-2% of all pixels in productive areas, a diverging and greening trend was found despite a strong negative trend in TBW. The choice of NDVI metric strongly affected outcomes on regional scales, and differences in the fraction of explained variation in MODIS-NPP between biomes were large; a combination of NDVI metrics is therefore recommended for global studies. We have found an increasing difference between trends in climatic drivers and observed NDVI for large parts of the globe. Our findings suggest that future scenarios must consider impacts of constraints on plant growth, such as extremes in weather and nutrient availability, to predict changes in NPP and CO2 sequestration capacity.
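    The two-segment piecewise regression (PWR) used in this record can be sketched as a breakpoint grid search; the trend data, the minimum segment length, and the squared-error criterion below are illustrative assumptions, not the study's actual configuration:

```python
def ols(xs, ys):
    """Least-squares line fit; returns (slope, intercept, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return b, a, sse

def piecewise_fit(xs, ys, min_seg=3):
    """Two-segment piecewise regression: try every breakpoint and keep the
    split that minimizes the combined squared error."""
    best = None
    for k in range(min_seg, len(xs) - min_seg + 1):
        sse = ols(xs[:k], ys[:k])[2] + ols(xs[k:], ys[k:])[2]
        if best is None or sse < best[1]:
            best = (xs[k], sse)
    return best  # (x of first point after the break, combined SSE)

# A trend that rises then falls, e.g. greening followed by browning:
xs = list(range(10))
ys = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
bp, sse = piecewise_fit(xs, ys)
assert bp == 5 and sse < 1e-9   # break found at the turning point
assert sse < ols(xs, ys)[2]     # explains more variation than a single line
```

    The gain in explained variation over the single-line fit is exactly the effect the abstract reports when comparing PWR against the simple linear trend.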

  16. The Dipole Segment Model for Axisymmetrical Elongated Asteroids

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong

    2018-02-01

    Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
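    A minimal sketch of the dipole segment potential described above, assuming the closed-form potential of a uniform rod plus two endpoint point masses with a positive-magnitude sign convention; all masses and positions are illustrative, not fitted to any asteroid:

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def dipole_segment_potential(p, p1, p2, m_seg, m1, m2):
    """Potential magnitude at point p of a uniform straight segment of mass
    m_seg between endpoints p1 and p2, plus point masses m1, m2 at the ends."""
    L = math.dist(p1, p2)
    r1, r2 = math.dist(p, p1), math.dist(p, p2)
    # Closed-form potential of a uniform rod in terms of endpoint distances:
    u_rod = (G * m_seg / L) * math.log((r1 + r2 + L) / (r1 + r2 - L))
    return u_rod + G * m1 / r1 + G * m2 / r2

# Far-field check: far from the body the model converges to a point mass.
M, m1, m2 = 1e12, 5e11, 5e11
p1, p2 = (-500.0, 0.0, 0.0), (500.0, 0.0, 0.0)
far = (0.0, 1e6, 0.0)
u = dipole_segment_potential(far, p1, p2, M, m1, m2)
u_point = G * (M + m1 + m2) / 1e6
assert abs(u - u_point) / u_point < 1e-4
```

    The far-field agreement is the easy part; the paper's contribution is choosing the segment length and the three masses by minimizing the acceleration error of this model in a test zone around the real asteroid shape.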

  17. Lithospheric buckling and intra-arc stresses: A mechanism for arc segmentation

    NASA Technical Reports Server (NTRS)

    Nelson, Kerri L.

    1989-01-01

    Comparison of segment development in a number of arcs has shown that consistent relationships between segmentation, volcanism and variable stresses exist. Researchers successfully modeled these relationships using the conceptual model of lithospheric buckling of Yamaoka et al. (1986; 1987). Lithospheric buckling (deformation) provides the mechanism needed to explain segmentation phenomena: offsets in volcanic fronts, the distribution of calderas within segments, variable segment stresses, and the chemical diversity seen between segment-boundary and segment-interior magmas.

  18. Pulse oximetry recorded from the Phone Oximeter for detection of obstructive sleep apnea events with and without oxygen desaturation in children.

    PubMed

    Garde, Ainara; Dehkordi, Parastoo; Wensley, David; Ansermino, J Mark; Dumont, Guy A

    2015-01-01

    Obstructive sleep apnea (OSA) disrupts normal ventilation during sleep and can lead to serious health problems in children if left untreated. Polysomnography, the gold standard for OSA diagnosis, is resource intensive and requires a specialized laboratory. Thus, we proposed to use the Phone Oximeter™, a portable device integrating pulse oximetry with a smartphone, to detect OSA events. As a proportion of OSA events occur without oxygen desaturation (defined as SpO2 decreases ≥ 3%), we suggest combining SpO2 and pulse rate variability (PRV) analysis to identify all OSA events and provide a more detailed sleep analysis. We recruited 160 children and recorded pulse oximetry consisting of SpO2 and plethysmography (PPG) using the Phone Oximeter™, alongside standard polysomnography. A sleep technician visually scored all OSA events with and without oxygen desaturation from polysomnography. We divided pulse oximetry signals into 1-min signal segments and extracted several features from SpO2 and PPG analysis in the time and frequency domain. Segments with OSA, especially the ones with oxygen desaturation, presented greater SpO2 variability and modulation reflected in the spectral domain than segments without OSA. Segments with OSA also showed higher heart rate and sympathetic activity through the PRV analysis relative to segments without OSA. PRV analysis was more sensitive than SpO2 analysis for identification of OSA events without oxygen desaturation. Combining SpO2 and PRV analysis enhanced OSA event detection through a multiple logistic regression model. The area under the ROC curve increased from 81% to 87%. Thus, the Phone Oximeter™ might be useful to monitor sleep and identify OSA events with and without oxygen desaturation at home.
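    A sketch of the record's final step: combining two feature groups in a logistic regression model and scoring it with the area under the ROC curve. The synthetic "SpO2 variability" and "PRV index" features and the plain SGD trainer are assumptions for illustration, not the study's features or fitting procedure:

```python
import math, random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression; returns weights, bias."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1 / (1 + math.exp(-z)) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Synthetic 1-min segments: (SpO2 variability, PRV sympathetic index);
# class 1 stands in for segments containing an OSA event.
random.seed(1)
X = [(random.gauss(1, 0.3), random.gauss(1, 0.3)) for _ in range(40)] + \
    [(random.gauss(2, 0.3), random.gauss(2, 0.3)) for _ in range(40)]
y = [0] * 40 + [1] * 40
w, b = fit_logistic(X, y)
scores = [b + w[0] * x1 + w[1] * x2 for x1, x2 in X]
assert auc(scores, y) > 0.9
```

    On real data the interesting comparison is between the AUC of each feature group alone and of the combined model, which is how the study quantifies the gain from adding PRV analysis to SpO2 analysis.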

  19. Effect of Emphysema on CT Scan Measures of Airway Dimensions in Smokers

    PubMed Central

    Han, MeiLan K.; Come, Carolyn E.; San José Estépar, Raúl; Ross, James C.; Kim, Victor; Dransfield, Mark T.; Curran-Everett, Douglas; Schroeder, Joyce D.; Lynch, David A.; Tschirren, Juerg; Silverman, Edwin K.; Washko, George R.

    2013-01-01

    Background: In CT scans of smokers with COPD, the subsegmental airway wall area percent (WA%) is greater and more strongly correlated with FEV1 % predicted than WA% obtained in the segmental airways. Because emphysema is linked to loss of airway tethering and may limit airway expansion, increases in WA% may be related to emphysema and not solely to remodeling. We aimed to first determine whether the stronger association of subsegmental vs segmental WA% with FEV1 % predicted is mitigated by emphysema and, second, to assess the relationships among emphysema, WA%, and total bronchial area (TBA). Methods: We analyzed CT scan segmental and subsegmental WA% (WA% = 100 × wall area/TBA) of six bronchial paths and corresponding lobar emphysema, lung function, and clinical data in 983 smokers with COPD. Results: Compared with segmental WA%, the subsegmental WA% had a greater effect on FEV1% predicted (−0.8% to −1.7% vs −1.9% to −2.6% per 1-unit increase in WA%, respectively; P < .05 for most bronchial paths). After adjusting for emphysema, the association between subsegmental WA% and FEV1 % predicted was weakened in two bronchial paths. Increases in WA% between bronchial segments correlated directly with emphysema in all bronchial paths (P < .05). In multivariate regression models, emphysema was directly related to subsegmental WA% in most bronchial paths and inversely related to subsegmental TBA in all bronchial paths. Conclusion: The greater effect of subsegmental WA% on airflow obstruction is mitigated by emphysema. Part of the emphysema effect might be due to loss of airway tethering, leading to a reduction in TBA and an increase in WA%. Trial registry: ClinicalTrials.gov; No.: NCT00608764; URL: www.clinicaltrials.gov PMID:23460155

  20. Do 3D Printing Models Improve Anatomical Teaching About Hepatic Segments to Medical Students? A Randomized Controlled Study.

    PubMed

    Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Huang, Wenhua; Li, Jianyi

    2016-08-01

    It is a difficult and frustrating task for young surgeons and medical students to understand the anatomy of hepatic segments. We tried to develop an optimal 3D printing model of hepatic segments as a teaching aid to improve the teaching of hepatic segments. A fresh human cadaveric liver without hepatic disease was CT scanned. After 3D reconstruction, three types of 3D computer models of hepatic structures were designed and 3D printed as models of hepatic segments without parenchyma (type 1) and with transparent parenchyma (type 2), and of hepatic ducts with segmental partitions (type 3). These models were evaluated by six experts using a five-point Likert scale. Ninety-two medical freshmen were randomized into four groups to learn hepatic segments with the aid of the three types of models and a traditional anatomic atlas (TAA). Their results in two quizzes were compared to evaluate the teaching effects of the four methods. The three types of models were successfully produced, displaying the structures of hepatic segments. In the experts' evaluation, the type 3 model was better than the type 1 and 2 models in anatomical condition, the type 2 and 3 models were better than the type 1 model in tactility, and the type 3 model was better than the type 1 model in overall satisfaction (P < 0.05). The first quiz revealed that the type 1 model was better than the type 2 model and TAA, while the type 3 model was better than type 2 and TAA in teaching effects (P < 0.05). The second quiz found that the type 1 model was better than TAA, while the type 3 model was better than the type 2 model and TAA regarding teaching effects (P < 0.05). Only the TAA group had a significant decline between the two quizzes (P < 0.05). The model with segmental partitions proves to be optimal, because it can best improve anatomical teaching about hepatic segments.

  1. A Disadvantaged Advantage in Walkability: Findings from ...

    EPA Pesticide Factsheets

    Urban form (the structure of the built environment) can influence physical activity, yet little is known about how walkable design differs according to neighborhood sociodemographic composition. We studied how walkable urban form varies by neighborhood sociodemographic composition, region, and urbanicity across the United States. Using linear regression models and 2000-2001 US Census data, we investigated the relationship between 5 neighborhood census characteristics (income, education, racial/ethnic composition, age distribution, and sex) and 5 walkability indicators in almost 65,000 census tracts in 48 states and the District of Columbia. Data on the built environment were obtained from the RAND Corporation's (Santa Monica, California) Center for Population Health and Health Disparities (median block length, street segment, and node density) and the US Geological Survey's National Land Cover Database (proportion open space and proportion highly developed). Disadvantaged neighborhoods and those with more educated residents were more walkable (i.e., shorter block length, greater street node density, more developed land use, and higher density of street segments). However, tracts with a higher proportion of children and older adults were less walkable (fewer street nodes and lower density of street segments), after adjustment for region and level of urbanicity. Research and policy on the walkability-health link should give nuanced attention to the gap between perso

  2. Digital vibration threshold testing and ergonomic stressors in automobile manufacturing workers: a cross-sectional assessment.

    PubMed

    Gold, J E; Punnett, L; Cherniack, M; Wegman, D H

    2005-01-01

    Upper extremity musculoskeletal disorders (UEMSDs) comprise a large proportion of work-related illnesses in the USA. Physical risk factors including manual force and segmental vibration have been associated with UEMSDs. Reduced sensitivity to vibration in the fingertips (a function of nerve integrity) has been found in those exposed to segmental vibration, to hand force, and in office workers. The objective of this study was to determine whether an association exists between digital vibration thresholds (VTs) and exposure to ergonomic stressors in automobile manufacturing. Interviews and physical examinations were conducted in a cross-sectional survey of workers (n = 1174). In multivariable robust regression modelling, associations with workers' estimates of ergonomic stressors stratified on tool use were determined. VTs were separately associated with hand force, vibration as felt through the floor (whole body vibration), and with an index of multiple exposures in both tool users and non-tool users. Additional associations with contact stress and awkward upper extremity postures were found in tool users. Segmental vibration was not associated with VTs. Further epidemiologic and laboratory studies are needed to confirm the associations found. The association with self-reported whole body vibration exposure suggests a possible sympathetic nervous system effect, which remains to be explored.

  3. Does Segmentation Really Work? Effectiveness of Matched Graphic Health Warnings on Cigarette Packaging by Race, Gender and Chronic Disease Conditions on Cognitive Outcomes among Vulnerable Populations.

    PubMed

    Hayashi, Hana; Tan, Andy; Kawachi, Ichiro; Minsky, Sara; Viswanath, Kasisomayajula

    2018-06-18

    We examined the differential impact of exposure to smoking-related graphic health warnings (GHWs) on risk perceptions and intentions to quit among different audience segments characterized by gender, race/ethnic group, and presence of chronic disease condition. Specifically, we sought to test whether GHWs that portray specific groups (in terms of gender, race, and chronic disease conditions) are associated with differences in risk perception and intention to quit among smokers who match the portrayed group. We used data from Project CLEAR, which oversampled lower SES groups as well as race/ethnic minority groups living in the Greater Boston area (n = 565). We fitted multiple linear regression models to examine the impact of exposure to different GHWs on risk perceptions and quit intentions. After controlling for age, gender, education and household income, we found that women who viewed GHWs portraying females reported increased risk perception as compared to women who viewed GHWs portraying men. However, no other interactions were found between the groups depicted in GHWs and audience characteristics. The findings suggest that audience segmentation of GHWs may have limited impact on risk perceptions and intention to quit smoking among adult smokers.

  4. CLASSICAL AREAS OF PHENOMENOLOGY: Study on the design and Zernike aberrations of a segmented mirror telescope

    NASA Astrophysics Data System (ADS)

    Jiang, Zhen-Yu; Li, Lin; Huang, Yi-Fan

    2009-07-01

    The segmented mirror telescope is widely used, and the aberrations of segmented mirror systems differ from those of single mirror systems. This paper uses Fourier optics theory to analyse the Zernike aberrations of segmented mirror systems and concludes that they obey the linearity theorem. The design of a segmented space telescope and its segmentation schemes are discussed, and its optical model is constructed. A computer simulation experiment is performed with this optical model to verify the suppositions, and the experimental results confirm the correctness of the model.

  5. Statistical shape (ASM) and appearance (AAM) models for the segmentation of the cerebellum in fetal ultrasound

    NASA Astrophysics Data System (ADS)

    Reyes López, Misael; Arámbula Cosío, Fernando

    2017-11-01

    The cerebellum is an important structure for determining the gestational age of the fetus, and most of its abnormalities are related to growth disorders. In this work, we present the results of segmenting the fetal cerebellum using statistical shape and appearance models. Both models were tested on ultrasound images of the fetal brain taken from 23 pregnant women, between 18 and 24 gestational weeks. The accuracy results obtained on 11 ultrasound images show a mean Hausdorff distance of 6.08 mm between the manual segmentation and the segmentation using the active shape model, and a mean Hausdorff distance of 7.54 mm between the manual segmentation and the segmentation using the active appearance model. The reported results demonstrate that the active shape model is more robust for segmentation of the fetal cerebellum in ultrasound images.

  6. Hidden marker position estimation during sit-to-stand with walker.

    PubMed

    Yoon, Sang Ho; Jun, Hong Gul; Dan, Byung Ju; Jo, Byeong Rim; Min, Byung Hoon

    2012-01-01

    Motion capture analysis of the sit-to-stand task with an assistive device is hard to achieve due to occlusion of reflective markers. A previously developed robotic system, the Smart Mobile Walker, was used as an assistive device to perform motion capture analysis of the sit-to-stand task. All lower limb markers except the hip markers were invisible throughout the whole session. A link-segment and regression method was applied to estimate the marker positions during sit-to-stand. With this method, the lost marker positions were restored and a biomechanical evaluation of the sit-to-stand movement with the Smart Mobile Walker could be carried out. The accuracy of the marker position estimation was verified against normal sit-to-stand data from more than 30 clinical trials. Further research on improving the link-segment and regression method is also addressed.

  7. Nonlinear estimation of parameters in biphasic Arrhenius plots.

    PubMed

    Puterman, M L; Hrboticky, N; Innis, S M

    1988-05-01

    This paper presents a formal procedure for the statistical analysis of data on the thermotropic behavior of membrane-bound enzymes generated using the Arrhenius equation, and compares the analysis to several alternatives. The data are modeled by a bent hyperbola. Nonlinear regression is used to obtain estimates and standard errors of the intersection of the line segments, defined as the transition temperature, and of the slopes, defined as the energies of activation of the enzyme reaction. The methodology allows formal tests of the adequacy of a biphasic model over either a single straight line or a curvilinear model. Examples are given using data on the thermotropic behavior of pig brain synaptosomal acetylcholinesterase; the data support the biphasic temperature dependence of this enzyme. The methodology represents a formal procedure for statistical validation of any biphasic data and allows calculation of all line parameters with estimates of precision.
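    In its limiting piecewise-linear form, the bent hyperbola reduces to two fitted lines whose intersection estimates the transition temperature. A sketch with synthetic Arrhenius-plot data follows (slopes and breakpoint chosen for illustration; the paper's actual model smooths the bend and fits all parameters jointly by nonlinear regression):

```python
def line_fit(xs, ys):
    """Least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def transition_point(seg1, seg2):
    """Intersection of two fitted segments; on an Arrhenius plot (ln k vs
    1/T) this locates the transition temperature."""
    (b1, a1), (b2, a2) = seg1, seg2
    return (a2 - a1) / (b1 - b2)

# Synthetic biphasic Arrhenius data: ln(k) against 1000/T with a slope
# change at 1000/T = 3.3 (slopes are proportional to activation energies).
x_lo = [3.0, 3.1, 3.2, 3.3]
y_lo = [4.0, 3.8, 3.6, 3.4]   # slope -2 on this side
x_hi = [3.3, 3.4, 3.5, 3.6]
y_hi = [3.4, 2.9, 2.4, 1.9]   # slope -5: higher activation energy
t = transition_point(line_fit(x_lo, y_lo), line_fit(x_hi, y_hi))
assert abs(t - 3.3) < 1e-6
```

    The paper's point is that fitting both segments and the bend jointly, rather than eyeballing two lines, yields standard errors for the transition temperature and the two activation energies.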

  8. A model to identify high crash road segments with the dynamic segmentation method.

    PubMed

    Boroujerdian, Amin Mirza; Saffarzadeh, Mahmoud; Yousefi, Hassan; Ghassemian, Hassan

    2014-12-01

    Currently, high social and economic costs, in addition to physical and mental consequences, put road safety among the most important issues. This paper aims at presenting a novel approach capable of identifying the location as well as the length of high crash road segments. It focuses on the location of accidents occurring along the road and their effective regions. In other words, due to applicability and budget limitations in improving the safety of road segments, it is not possible to improve all high crash road segments. Therefore, it is of utmost importance to identify high crash road segments and their real length to be able to prioritize safety improvement on roads. In this paper, after evaluating deficiencies of the current road segmentation models, different kinds of errors caused by these methods are addressed. One of the main deficiencies of these models is that they cannot identify the length of high crash road segments. In this paper, identifying the length of high crash road segments (corresponding to the arrangement of accidents along the road) is achieved by converting accident data to the road response signal of through traffic with a dynamic model based on wavelet theory. The significant advantage of the presented method is multi-scale segmentation. In other words, this model identifies high crash road segments with different lengths, and it can recognize small segments within long segments. Applying the presented model to a real case for identifying 10-20 percent of high crash road segments showed an improvement of 25-38 percent relative to the existing methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
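The multi-scale idea can be illustrated loosely (Gaussian smoothing stands in here for the paper's wavelet analysis, and the crash positions and threshold are hypothetical): bin crash locations into a 1-D "road signal" and flag bins that stand out at any of several scales, so both short dense clusters and longer diffuse stretches can surface.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical crash positions (km) along a 50 km road
crashes = np.array([2.1, 2.3, 2.4, 10.0, 30.1, 30.2, 30.3, 30.5, 31.0, 45.0])
bins = np.arange(0, 50.1, 0.1)                  # 100 m bins
density, _ = np.histogram(crashes, bins=bins)

# Smooth the crash signal at several scales; a bin is flagged if its
# smoothed density exceeds mean + 2*std at any scale, so segments of
# different lengths (fine clusters inside coarse ones) can be detected.
flags = np.zeros(density.size, dtype=bool)
for sigma in (2, 8, 32):                        # fine to coarse scales
    s = gaussian_filter1d(density.astype(float), sigma)
    flags |= s > s.mean() + 2 * s.std()
```

Contiguous runs of flagged bins then form the candidate high crash segments, with the run length giving the segment length.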

  9. Biomechanical effects of hybrid stabilization on the risk of proximal adjacent-segment degeneration following lumbar spinal fusion using an interspinous device or a pedicle screw-based dynamic fixator.

    PubMed

    Lee, Chang-Hyun; Kim, Young Eun; Lee, Hak Joong; Kim, Dong Gyu; Kim, Chi Heon

    2017-12-01

    OBJECTIVE Pedicle screw-rod-based hybrid stabilization (PH) and interspinous device-based hybrid stabilization (IH) have been proposed to prevent adjacent-segment degeneration (ASD), and their effectiveness has been reported. However, a comparative study based on sound biomechanical proof has not yet been reported. The aim of this study was to compare the biomechanical effects of IH and PH on the transition and adjacent segments. METHODS A validated finite element model of the normal lumbosacral spine was used. Based on the normal model, a rigid fusion model was immobilized at the L4-5 level by a rigid fixator. The DIAM or NFlex model was added on the L3-4 segment of the fusion model to construct the IH and PH models, respectively. The developed models simulated 4 different loading directions using the hybrid loading protocol. RESULTS Compared with the intact case, fusion on L4-5 produced 18.8%, 9.3%, 11.7%, and 13.7% increments in motion at L3-4 under flexion, extension, lateral bending, and axial rotation, respectively. Additional instrumentation at L3-4 (transition segment) in hybrid models reduced motion changes at this level. The IH model showed 8.4%, -33.9%, 6.9%, and 2.0% change in motion at the segment, whereas the PH model showed -30.4%, -26.7%, -23.0%, and 12.9%. At L2-3 (adjacent segment), the PH model showed motion increments of 14.3%, 3.4%, 15.0%, and 0.8% compared with the IH model. Both hybrid models showed decreased intradiscal pressure (IDP) at the transition segment compared with the fusion model, but the pressure at L2-3 (adjacent segment) increased in all loading directions except under extension. CONCLUSIONS Both IH and PH models limited excessive motion and IDP at the transition segment compared with the fusion model. At the segment adjacent to the transition level, PH induced higher stress than the IH model. Such differences may eventually influence the likelihood of ASD.

  10. Interactive Tooth Separation from Dental Model Using Segmentation Field

    PubMed Central

    2016-01-01

    Tooth segmentation on a dental model is an essential step of computer-aided-design systems for orthodontic virtual treatment planning. However, fast and accurate identification of the cutting boundary to separate teeth from the dental model still remains a challenge, due to the various geometrical shapes of teeth, complex tooth arrangements, different dental model qualities, and varying degrees of crowding. Most segmentation approaches presented before are not able to achieve a balance between fine segmentation results and simple operating procedures with low time consumption. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, which is solved from a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries can be detected from concave regions with large variations of field data. The sensitivity of the segmentation field to concave seams facilitates effective tooth partition and avoids the need for an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to low-quality dental models and effective for dental models with different levels of crowding. The experiments, including segmentation tests on dental models of varying complexity, experiments on dental meshes with different modeling resolutions and surface noise, and a comparison between our method and the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
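The core linear-algebra step can be sketched on a toy graph (a uniform graph Laplacian on a 6-vertex path stands in for the discrete Laplace-Beltrami operator of a real dental mesh): fix Dirichlet values at user-marked vertices and solve the Laplacian system for the interior, giving a smooth scalar field whose level sets supply candidate contour lines.

```python
import numpy as np

# Toy graph: a path of 6 vertices; endpoints are the user-marked
# Dirichlet vertices, fixed to field values 0 and 1.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1        # path edges
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian (stand-in for
                                         # the discrete Laplace-Beltrami)

fixed = {0: 0.0, 5: 1.0}                 # Dirichlet boundary conditions
free = [i for i in range(n) if i not in fixed]
vals = np.array(list(fixed.values()))

# Solve L f = 0 on the free vertices: L_ff f_f = -L_fc f_c
b = -L[np.ix_(free, list(fixed))] @ vals
f = np.zeros(n)
f[list(fixed)] = vals
f[free] = np.linalg.solve(L[np.ix_(free, free)], b)
```

On a real mesh the Laplacian would be cotangent-weighted and sparse, but the structure of the solve is the same; contour lines sampled from `f` would then be screened for concavity as the abstract describes.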

  11. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation (direction) information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with the goodness-of-fit statistic in a linear least squares sense. Then, the junctions determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of the boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.
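The goodness-of-fit step can be sketched in isolation (the full greedy search over junctions is omitted, and the points and threshold below are hypothetical): fit a line by linear least squares to the merged boundary points on either side of a candidate junction, and call the junction false when the residual is small, i.e. when the two segments are well explained by one straight piece.

```python
import numpy as np

def line_fit_rss(points):
    """Fit y = a*x + b by linear least squares; return the residual sum
    of squares (note: this parametrization cannot fit vertical lines)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

# Hypothetical boundary segments on either side of a candidate junction;
# merging them still gives a near-perfect line, so the junction is "false".
seg_a = np.array([[0, 0.0], [1, 1.02], [2, 1.98]])
seg_b = np.array([[3, 3.01], [4, 4.00]])
merged_rss = line_fit_rss(np.vstack([seg_a, seg_b]))
is_false_junction = merged_rss < 0.05        # hypothetical threshold
```

In the paper's greedy scheme this test would be applied repeatedly, removing the junction whose removal degrades the fit least, until no removal passes the threshold.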

  12. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results.

    PubMed

    Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.

  13. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results

    PubMed Central

    Lyden, Hannah; Gimbel, Sarah I.; Del Piero, Larissa; Tsai, A. Bryna; Sachs, Matthew E.; Kaplan, Jonas T.; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used. PMID:27656121

  14. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889

  15. Using Predictability for Lexical Segmentation.

    PubMed

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
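One concrete predictability cue is syllable transitional probability; the sketch below uses it in an unsupervised segmenter (the toy corpus and threshold are invented, and the paper's incremental model is richer than this): estimate P(next syllable | current syllable) from the stream and insert a word boundary wherever predictability drops.

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(b | a) from adjacent syllable pairs in the stream."""
    bigrams, unigrams = Counter(), Counter()
    for a, b in zip(stream, stream[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1
    return {ab: n / unigrams[ab[0]] for ab, n in bigrams.items()}

def segment(stream, tp, thresh):
    """Place a word boundary wherever predictability drops below thresh."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp.get((a, b), 0.0) < thresh:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy "speech stream": three two-syllable words in random order. Within-word
# TP is 1.0, across-word TP is about 1/3, so a 0.9 threshold separates them.
lexicon = [("pre", "tty"), ("ba", "by"), ("do", "ggy")]
random.seed(1)
stream = [syll for _ in range(200) for syll in random.choice(lexicon)]
words = segment(stream, transitional_probabilities(stream), thresh=0.9)
```

With a real corpus the within/across contrast is noisier, which is why the model's incremental estimation and choice of sub-lexical unit matter.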

  16. Words in Puddles of Sound: Modelling Psycholinguistic Effects in Speech Segmentation

    ERIC Educational Resources Information Center

    Monaghan, Padraic; Christiansen, Morten H.

    2010-01-01

    There are numerous models of how speech segmentation may proceed in infants acquiring their first language. We present a framework for considering the relative merits and limitations of these various approaches. We then present a model of speech segmentation that aims to reveal important sources of information for speech segmentation, and to…

  17. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    PubMed

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  18. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    NASA Astrophysics Data System (ADS)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  19. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung

    PubMed Central

    Guo, Shengwen; Fei, Baowei

    2013-01-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531

  20. Medial extrusion of the posterior segment of medial meniscus is a sensitive sign for posterior horn tears.

    PubMed

    Ohishi, Tsuyoshi; Suzuki, Daisuke; Yamamoto, Kazufumi; Banno, Tomohiro; Shimizu, Yuta; Matsuyama, Yukihiro

    2014-01-01

    To evaluate medial extrusion of the posterior segment of the medial meniscus in posterior horn tears. This study enrolled 72 patients without medial meniscal tears (group N), 72 patients with medial meniscal tears without posterior horn tears (group PH-), and 44 patients with posterior horn tears of the medial meniscus (group PH+). All meniscal tears were confirmed by arthroscopy. Medial extrusion of the middle segment and the posterior segment was measured on coronal MRIs. Extrusions of both middle and posterior segments in groups PH- and PH+ (middle segment; 2.94±1.51 mm for group PH- and 3.75±1.69 mm for group PH+, posterior segment; 1.85±1.82 mm for group PH- and 4.59±2.74 mm for group PH+) were significantly larger than those in group N (middle segment; 2.04±1.20, posterior segment; 1.21±1.86). Both indicators of extrusion in group PH+ were larger than those in group PH-. In the early OA category, neither the middle nor the posterior segment in group PH- extruded more than in group N. However, only the posterior segment in group PH+ extruded significantly more than in group N. Multiple linear regression analyses revealed that posterior segment extrusion was strongly correlated with posterior horn tears (p<0.001) among groups PH- and PH+. The newly presented indicator, extrusion of the posterior segment of the medial meniscus, is more strongly associated with posterior horn tears than extrusion of the middle segment, especially in the early stages of osteoarthritis. Level II--Diagnostic Study. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Inequality and adolescent violence: an exploration of community, family, and individual factors.

    PubMed Central

    Bruce, Marino A.

    2004-01-01

    PURPOSE: The study seeks to examine whether the relationships among community, family, individual factors, and violent behavior are parallel across race- and gender-specific segments of the adolescent population. METHODS: Data from the National Longitudinal Study of Adolescent Health are analyzed to highlight the complex relationships between inequality, community, family, individual behavior, and violence. RESULTS: The results from robust regression analysis provide evidence that social environmental factors can influence adolescent violence in race- and gender-specific ways. CONCLUSIONS: Findings from this study establish the plausibility of multidimensional models that specify a complex relationship between inequality and adolescent violence. PMID:15101669

  2. Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions

    NASA Astrophysics Data System (ADS)

    Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases, however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains the intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability altogether represent difficult challenges for a reliable estimation of NABT intensity model based on MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on locally-adaptive NABT model, a robust method for the estimation of model parameters and a MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to whole-brain model and compared to the widely used Expectation-Maximization Segmentation (EMS) method, the locally-adaptive NABT model increases the accuracy of MS lesion segmentation.

  3. Brain MR image segmentation using NAMS in pseudo-color.

    PubMed

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color-based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns to keep the image content and largely reduce data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which can improve the precision of segmentation as well as direct visual perceptual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method excels not only in segmenting precisely but also in saving storage.

  4. Using water-quality profiles to characterize seasonal water quality and loading in the upper Animas River basin, southwestern Colorado

    USGS Publications Warehouse

    Leib, Kenneth J.; Mast, M. Alisa; Wright, Winfield G.

    2003-01-01

    One of the important types of information needed to characterize water quality in streams affected by historical mining is the seasonal pattern of toxic trace-metal concentrations and loads. Seasonal patterns in water quality are estimated in this report using a technique called water-quality profiling. Water-quality profiling allows land managers and scientists to assess priority areas to be targeted for characterization and(or) remediation by quantifying the timing and magnitude of contaminant occurrence. Streamflow and water-quality data collected at 15 sites in the upper Animas River Basin during water years 1991–99 were used to develop water-quality profiles. Data collected at each sampling site were used to develop ordinary least-squares regression models for streamflow and constituent concentrations. Streamflow was estimated by correlating instantaneous streamflow measured at ungaged sites with continuous streamflow records from streamflow-gaging stations in the subbasin. Water-quality regression models were developed to estimate hardness and dissolved cadmium, copper, and zinc concentrations based on streamflow and seasonal terms. Results from the regression models were used to calculate water-quality profiles for streamflow, constituent concentrations, and loads. Quantification of cadmium, copper, and zinc loads in a stream segment in Mineral Creek (sites M27 to M34) was presented as an example application of water-quality profiling. The application used a method of mass accounting to quantify the portion of metal loading in the segment derived from uncharacterized sources during different seasonal periods. During May, uncharacterized sources contributed nearly 95 percent of the cadmium load, 0 percent of the copper load (or uncharacterized sources also are attenuated), and about 85 percent of the zinc load at M34.
During September, uncharacterized sources contributed about 86 percent of the cadmium load, 0 percent of the copper load (or uncharacterized sources also are attenuated), and about 52 percent of the zinc load at M34. Characterized sources accounted for more of the loading gains estimated in the example reach during September, possibly indicating the presence of diffuse inputs during snowmelt runoff. The results indicate that metal sources in the upper Animas River Basin may change substantially with season, regardless of the source.
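The regression structure described (ordinary least squares on streamflow plus seasonal terms) can be sketched with synthetic data. The specific form below, a log-log flow term with sine/cosine seasonal terms, is an assumption about the general model class, not the report's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 120)              # decimal time within the water year
lnQ = rng.normal(2.0, 0.5, 120)         # ln(streamflow), synthetic
# Synthetic "true" model: ln(concentration) depends on flow and season
lnC = (1.0 - 0.4 * lnQ
       + 0.3 * np.sin(2 * np.pi * t)
       + 0.1 * np.cos(2 * np.pi * t)
       + rng.normal(0, 0.05, 120))

# OLS design matrix: intercept, ln(flow), seasonal sine/cosine terms
X = np.column_stack([np.ones_like(t), lnQ,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, lnC, rcond=None)

# Profile of load = concentration * streamflow at the observed conditions
# (ignoring retransformation bias for this illustration)
load = np.exp(X @ beta) * np.exp(lnQ)
```

Evaluating the fitted model over a full year of flows yields the seasonal water-quality profiles the report uses, and differencing profiles between sites gives the mass-accounting estimates of uncharacterized loading.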

  5. IMU-to-Segment Assignment and Orientation Alignment for the Lower Body Using Deep Learning

    PubMed Central

    2018-01-01

    Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities. This is due to its significant role in, for instance, mobile health systems, sports, and human-computer interaction. In sensor-based activity recognition, one of the major issues for obtaining reliable results is the sensor placement/assignment on the body. For inertial motion capture (joint kinematics estimation) and analysis, the IMU-to-segment (I2S) assignment and alignment are central issues in obtaining biomechanical joint angles. Existing approaches for I2S assignment usually rely on hand-crafted features and shallow classification approaches (e.g., support vector machines), with no agreement regarding the most suitable features for the assignment task. Moreover, estimating the complete orientation alignment of an IMU relative to the segment it is attached to using a machine learning approach has not been shown in the literature so far. This is likely due to the high amount of training data that would have to be recorded to suitably represent possible IMU alignment variations. In this work, we propose online approaches for solving the assignment and alignment tasks for an arbitrary number of IMUs with respect to a biomechanical lower body model using a deep learning architecture and windows of 128 gyroscope and accelerometer data samples. For this, we combine convolutional neural networks (CNNs) for local filter learning with long short-term memory (LSTM) recurrent networks as well as gated recurrent units (GRUs) for learning time-dynamic features. The assignment task is cast as a classification problem, while the alignment task is cast as a regression problem. In this framework, we demonstrate the feasibility of augmenting a limited amount of real IMU training data with simulated alignment variations and IMU data for improving the recognition/estimation accuracies.
With the proposed approaches and final models we achieved 98.57% average accuracy over all segments for the I2S assignment task (100% when excluding left/right switches) and an average median angle error over all segments and axes of 2.91° for the I2S alignment task. PMID:29351262

  6. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Electrocardiography cannot reliably differentiate transient left ventricular apical ballooning syndrome from anterior ST-segment elevation myocardial infarction.

    PubMed

    Bybee, Kevin A; Motiei, Arashk; Syed, Imran S; Kara, Tomas; Prasad, Abhiram; Lennon, Ryan J; Murphy, Joseph G; Hammill, Stephen C; Rihal, Charanjit S; Wright, R Scott

    2007-01-01

    The presentation and electrocardiographic (ECG) characteristics of transient left ventricular apical ballooning syndrome (TLVABS) can be similar to that of anterior ST-segment elevation myocardial infarction (STEMI). We tested the hypothesis that the ECG on presentation could reliably differentiate these syndromes. Between January 1, 2002 and July 31, 2004, we identified 18 consecutive patients with TLVABS who were matched with 36 subjects presenting with acute anterior STEMI due to atherothrombotic left anterior descending coronary artery occlusion. All patients with TLVABS were women (mean age, 72.0 +/- 13.1 years). The heart rate, PR interval, QRS duration, and corrected QT interval were similar between groups. Distribution of ST elevation was similar, but patients with anterior STEMI exhibited greater ST elevation. Recursive partitioning analysis indicated that the combination of ST elevation in lead V2 of less than 1.75 mm and ST-segment elevation in lead V3 of less than 2.5 mm was a suggestive predictor of TLVABS (sensitivity, 67%; specificity, 94%). Conditional logistic regression indicated that the formula: (3 x ST-elevation lead V2) + (ST-elevation V3) + (2 x ST-elevation V5) allowed possible discrimination between TLVABS and anterior STEMI with an optimal cutoff level of less than 11.5 mm for TLVABS (sensitivity, 94%; specificity, 72%). Patients with TLVABS were less likely to have concurrent ST-segment depression (6% vs 44%; P = .003). Women presenting with TLVABS have similar ECG findings to patients with anterior infarction but with less-prominent ST-segment elevation in the anterior precordial ECG leads. These ECG findings are relatively subtle and do not have sufficient predictive value to allow reliable emergency differentiation of these syndromes.
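The logistic-regression-derived discriminant in the abstract reduces to a simple weighted sum, sketched below for illustration only (the reported sensitivity/specificity do not make this reliable, and it is certainly not for clinical use).

```python
def tlvabs_score(st_v2, st_v3, st_v5):
    """Discriminant from the abstract: (3 x ST V2) + ST V3 + (2 x ST V5),
    with all ST-segment elevations in mm."""
    return 3 * st_v2 + st_v3 + 2 * st_v5

def suggests_tlvabs(st_v2, st_v3, st_v5, cutoff=11.5):
    # A score below the reported cutoff of 11.5 mm favors TLVABS over
    # anterior STEMI (reported sensitivity 94%, specificity 72%).
    return tlvabs_score(st_v2, st_v3, st_v5) < cutoff
```

For example, modest elevations of 1.0, 2.0, and 1.0 mm score 7 (below the cutoff, favoring TLVABS), while 4.0, 3.0, and 2.0 mm score 19 (favoring anterior STEMI).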

  8. The effect of rising vs. falling glucose level on amperometric glucose sensor lag and accuracy in Type 1 diabetes.

    PubMed

    Ward, W K; Engle, J M; Branigan, D; El Youssef, J; Massoud, R G; Castle, J R

    2012-08-01

    Because declining glucose levels should be detected quickly in persons with Type 1 diabetes, a lag between blood glucose and subcutaneous sensor glucose can be problematic. It is unclear whether the magnitude of sensor lag is lower during falling glucose than during rising glucose. Initially, we analysed 95 data segments during which glucose changed and during which very frequent reference blood glucose monitoring was performed. However, to minimize confounding effects of noise and calibration error, we excluded data segments in which there was substantial sensor error. After these exclusions, and combination of data from duplicate sensors, there were 72 analysable data segments (36 for rising glucose, 36 for falling). We measured lag in two ways: (1) the time delay at the vertical mid-point of the glucose change (regression delay); and (2) determination of the optimal time shift required to minimize the difference between glucose sensor signals and blood glucose values drawn concurrently. Using the regression delay method, the mean sensor lag for rising vs. falling glucose segments was 8.9 min (95% CI 6.1-11.6) vs. 1.5 min (95% CI -2.6 to 5.5, P < 0.005). Using the time shift optimization method, results were similar, with a lag that was higher for rising than for falling segments [8.3 min (95% CI 5.8-10.7) vs. 1.5 min (95% CI -2.2 to 5.2), P < 0.001]. Commensurate with the lag results, sensor accuracy was greater during falling than during rising glucose segments. In Type 1 diabetes, when noise and calibration error are minimized to reduce effects that confound delay measurement, subcutaneous glucose sensors demonstrate a shorter lag duration and greater accuracy when glucose is falling than when rising. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
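    The second lag measure, time-shift optimization, can be sketched as a brute-force search for the sample shift that best aligns the sensor series with the reference series (an illustrative reconstruction, not the authors' code):

```python
import numpy as np

def best_lag(reference, sensor, max_shift):
    """Return the shift (in samples) of `sensor` that minimizes the mean
    squared difference against `reference` (time-shift optimization)."""
    n = len(reference)
    best, best_err = 0, float("inf")
    for shift in range(0, max_shift + 1):
        # compare reference[t] with sensor[t + shift]: positive shift = sensor lags
        err = np.mean((np.asarray(reference[: n - shift], dtype=float)
                       - np.asarray(sensor[shift:n], dtype=float)) ** 2)
        if err < best_err:
            best, best_err = shift, err
    return best
```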

  9. Prevalence of Incidental Clinoid Segment Saccular Aneurysms.

    PubMed

    Revilla-Pacheco, Francisco; Escalante-Seyffert, María Cecilia; Herrada-Pineda, Tenoch; Manrique-Guzman, Salvador; Perez-Zuniga, Irma; Rangel-Suarez, Sergio; Rubalcava-Ortega, Johnatan; Loyo-Varela, Mauro

    2018-04-12

    Clinoid segment aneurysms are cerebral vascular lesions recently described in the neurosurgical literature. They arise from the clinoid segment of the internal carotid artery, the segment limited rostrally by the dural carotid ring and caudally by the carotid-oculomotor membrane. Although clinoid segment aneurysms represent a common incidental finding in magnetic resonance studies, their prevalence has not yet been reported. Our aim was to determine the prevalence of incidental clinoid segment saccular aneurysms diagnosed by magnetic resonance imaging, as well as their anatomic architecture and their association with smoking, arterial hypertension, age, and sex. A total of 500 patients were prospectively studied with magnetic resonance imaging time-of-flight sequences and contrast-enhanced angioresonance to search for incidental saccular intracranial aneurysms. The site of primary interest was the clinoid segment, but the presence of aneurysms in any other location was recorded for comparison. The relation among the presence of clinoid segment aneurysms, demographic factors, and secondary diagnoses of arterial hypertension, smoking, and other vascular/neoplastic cerebral lesions was analyzed. We found a global prevalence of incidental aneurysms of 7% (95% confidence interval, 5-9), with a prevalence of clinoid segment aneurysms of 3% (95% confidence interval, 2-4). Univariate logistic regression analysis showed a statistically significant relationship between incidental aneurysms and both systemic arterial hypertension (P < 0.001) and smoking (P = 0.004). In the studied population, incidental clinoid segment aneurysms constitute the variety with the highest prevalence. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Sparse intervertebral fence composition for 3D cervical vertebra segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposes a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraints, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset of CT images from 20 patients. A quantitative comparison against the corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.
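    The Dice similarity index reported above is the standard overlap measure between a segmentation and its reference; for two binary masks it can be computed as:

```python
def dice(mask_a, mask_b):
    """Dice similarity index between two binary masks (flat 0/1 sequences):
    2 * |intersection| / (|A| + |B|)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * intersection / (sum(mask_a) + sum(mask_b))
```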

  11. 100-m Breaststroke Swimming Performance in Youth Swimmers: The Predictive Value of Anthropometrics.

    PubMed

    Sammoud, Senda; Nevill, Alan Michael; Negra, Yassine; Bouguezzi, Raja; Chaabene, Helmi; Hachana, Younés

    2018-03-16

    This study aimed to estimate the optimal body size, limb segment length, and girth or breadth ratios for 100-m breaststroke performance in youth swimmers. In total, 59 swimmers [male: n = 39, age = 11.5 (1.3) y; female: n = 20, age = 12.0 (1.0) y] participated in this study. To identify size/shape characteristics associated with 100-m breaststroke swimming performance, we computed a multiplicative allometric log-linear regression model, which was refined using backward elimination. Results showed that 100-m breaststroke performance had a significant negative association with fat mass and significant positive associations with the segment length ratio (arm ratio = hand length/forearm length) and limb girth ratio (girth ratio = forearm girth/wrist girth). In addition, leg length, biacromial breadth, and biiliocristal breadth showed significant positive associations with 100-m breaststroke performance. However, height and body mass did not contribute to the model, suggesting that the advantage of longer levers was limb-specific rather than a general whole-body advantage. In fact, it is only by adopting multiplicative allometric models that these ratios could have been derived. These results highlight the importance of considering the anthropometric characteristics of youth breaststroke swimmers for talent identification and/or athlete monitoring purposes. In addition, these findings may assist in orienting swimmers to the appropriate stroke based on their anthropometric characteristics.

  12. Quantifying Biomass from Point Clouds by Connecting Representations of Ecosystem Structure

    NASA Astrophysics Data System (ADS)

    Hendryx, S. M.; Barron-Gafford, G.

    2017-12-01

    Quantifying terrestrial ecosystem biomass is an essential part of monitoring carbon stocks and fluxes within the global carbon cycle and optimizing natural resource management. Point cloud data, such as from lidar and structure from motion, can be effective for quantifying biomass over large areas, but significant challenges remain in developing effective models that allow for such predictions. Inference models that estimate biomass from point clouds are established in many environments, yet are often scale-dependent, needing to be fitted and applied at the same spatial scale and grid size at which they were developed. Furthermore, training such models typically requires large in situ datasets that are often prohibitively costly or time-consuming to obtain. We present here a scale- and sensor-invariant framework for efficiently estimating biomass from point clouds. Central to this framework, we present a new algorithm, assignPointsToExistingClusters, developed for finding matches between in situ data and clusters in remotely sensed point clouds. The algorithm can be used for assessing canopy segmentation accuracy and for training and validating machine learning models for predicting biophysical variables. We demonstrate the algorithm's efficacy by using it to train a random forest model of above-ground biomass in a shrubland environment in Southern Arizona. We show that by learning a nonlinear function to estimate biomass from segmented canopy features we can reduce error, especially in the presence of inaccurate clusterings, compared to a traditional, deterministic technique for estimating biomass from remotely measured canopies. Our random forest on cluster features model extends established methods of training random forest regressions to predict biomass of subplots, but requires significantly less training data and is scale-invariant. The random forest on cluster features model reduced mean absolute error, when evaluated on all test data in leave-one-out cross-validation, by 40.6% relative to deterministic mesquite allometry and 35.9% relative to the inferred ecosystem-state allometric function. Our framework should allow for the inference of biomass more efficiently than common subplot methods and more accurately than individual tree segmentation methods in densely vegetated environments.
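    The abstract names an algorithm, assignPointsToExistingClusters, without specifying it; one plausible reading is a nearest-centroid matching between in situ stem locations and remotely sensed canopy clusters. A hypothetical sketch under that assumption (names and the distance threshold are ours):

```python
import math

def assign_points_to_existing_clusters(points, centroids, max_dist):
    """Hypothetical sketch: match each in situ point (x, y) to the index of
    the nearest cluster centroid, or None if no centroid lies within max_dist."""
    assignments = []
    for px, py in points:
        best_i, best_d = None, max_dist
        for i, (cx, cy) in enumerate(centroids):
            d = math.hypot(px - cx, py - cy)
            if d <= best_d:
                best_i, best_d = i, d
        assignments.append(best_i)
    return assignments
```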

  13. Discriminative spatial-frequency-temporal feature extraction and classification of motor imagery EEG: A sparse regression and Weighted Naïve Bayesian Classifier-based approach.

    PubMed

    Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang

    2017-02-15

    Common spatial pattern (CSP) is the most widely used feature-extraction method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies than several competing methods in the literature and is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
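    The conventional CSP step that the paper builds on can be sketched as a generalized eigendecomposition of the two class covariance matrices (a textbook toy version for illustration, not the proposed STFSCSP method):

```python
import numpy as np

def csp_filters(cov_a, cov_b):
    """Toy CSP: eigenvectors of (Ca + Cb)^-1 Ca, sorted by eigenvalue.
    The first/last filters maximize variance for class A/B respectively;
    conventional CSP keeps pairs of filters from both extremes."""
    vals, vecs = np.linalg.eig(np.linalg.solve(cov_a + cov_b, cov_a))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order].T.real
```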

  14. How's the Flu Getting Through? Landscape genetics suggests both humans and birds spread H5N1 in Egypt.

    PubMed

    Young, Sean G; Carrel, Margaret; Kitchen, Andrew; Malanson, George P; Tamerius, James; Ali, Mohamad; Kayali, Ghazi

    2017-04-01

    First introduced to Egypt in 2006, H5N1 highly pathogenic avian influenza has resulted in the death of millions of birds and caused over 350 infections and at least 117 deaths in humans. After a decade of viral circulation, outbreaks continue to occur and diffusion mechanisms between poultry farms remain unclear. Using landscape genetics techniques, we identify the distance models most strongly correlated with the genetic relatedness of the viruses, suggesting the most likely methods of viral diffusion within Egyptian poultry. Using 73 viral genetic sequences obtained from infected birds throughout northern Egypt between 2009 and 2015, we calculated the genetic dissimilarity between H5N1 viruses for all eight gene segments. Spatial correlation was evaluated using Mantel tests and correlograms and multiple regression of distance matrices within causal modeling and relative support frameworks. These tests examine spatial patterns of genetic relatedness, and compare different models of distance. Four models were evaluated: Euclidean distance, road network distance, road network distance via intervening markets, and a least-cost path model designed to approximate wild waterbird travel using niche modeling and circuit theory. Samples from backyard farms were most strongly correlated with least cost path distances. Samples from commercial farms were most strongly correlated with road network distances. Results were largely consistent across gene segments. Results suggest wild birds play an important role in viral diffusion between backyard farms, while commercial farms experience human-mediated diffusion. These results can inform avian influenza surveillance and intervention strategies in Egypt. Copyright © 2017 Elsevier B.V. All rights reserved.
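    The Mantel test used above correlates two pairwise distance matrices (here, genetic dissimilarity against each candidate distance model). Its core statistic, without the permutation step that yields a p-value, reduces to a Pearson correlation over the matrices' upper triangles (an illustrative sketch, not the study's code):

```python
import numpy as np

def mantel_r(dist_a, dist_b):
    """Pearson correlation between the upper triangles of two
    symmetric distance matrices (the Mantel statistic)."""
    iu = np.triu_indices_from(dist_a, k=1)
    return np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
```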

  15. Analysis and design of segment control system in segmented primary mirror

    NASA Astrophysics Data System (ADS)

    Yu, Wenhao; Li, Bin; Chen, Mo; Xian, Hao

    2017-10-01

    Segmented primary mirrors will be widely adopted in future giant telescopes, such as TMT, E-ELT, and GMT. High-performance control of the segmented primary mirror is one of the key challenges for such telescopes, and the control of each individual segment is the foundation of the overall control system. The main work of this paper, correcting the tip and tilt of a single segment, is divided into two parts. First, a harmonic response analysis of a finite element model of a single segment matches the Bode diagram of a second-order system with a natural frequency of 45 Hz and a damping ratio of 0.005. Second, a control system model is established, and speed feedback is introduced into the control loop to suppress the resonance peak gain and increase the open-loop bandwidth to 30 Hz or higher. A corresponding controller is designed based on this control system model.
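    The quoted damping ratio of 0.005 implies a very sharp resonance: for a unit-DC-gain second-order system the peak magnitude is 1/(2*zeta*sqrt(1 - zeta^2)), a standard control-theory result (our illustration, not from the paper), giving a peak of roughly 100 (40 dB) here, which is why the speed feedback loop is needed to suppress it:

```python
import math

def resonant_peak_gain(zeta):
    """Peak magnitude of a unit-DC-gain second-order system:
    1 / (2 * zeta * sqrt(1 - zeta**2)), valid for zeta < 1/sqrt(2)."""
    return 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta ** 2))
```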

  16. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
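    One simple instance of the label fusion whose convergence the paper models is a per-voxel majority vote over the propagated label volumes (an illustrative rule; the abstract does not prescribe a specific fusion scheme):

```python
from collections import Counter

def fuse_labels(segmentations):
    """Majority-vote fusion of multiple propagated label volumes,
    each given as a flat list of per-voxel labels."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*segmentations)]
```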

  17. Markov models of genome segmentation

    NASA Astrophysics Data System (ADS)

    Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram

    2007-01-01

    We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
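    For the memoryless (order-0) case, the Jensen-Shannon segmentation criterion picks the split point that maximizes the divergence between the two halves' symbol distributions; the paper's contribution is extending this to higher-order Markov models. An order-0 sketch (illustrative, not the authors' implementation):

```python
import math
from collections import Counter

def _entropy(counts, total):
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def js_best_split(seq):
    """Return (index, divergence): the split maximizing the Jensen-Shannon
    divergence between the symbol distributions of the two halves."""
    n = len(seq)
    whole = _entropy(Counter(seq), n)
    best_i, best_d = None, -1.0
    for i in range(1, n):
        left, right = Counter(seq[:i]), Counter(seq[i:])
        d = whole - (i / n) * _entropy(left, i) - ((n - i) / n) * _entropy(right, n - i)
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d
```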

  18. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).

  19. Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs

    NASA Astrophysics Data System (ADS)

    Shim, Hyeonjoon; Liu, Brent J.; Taira, Ricky K.; Hall, Theodore R.

    1997-04-01

    The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray level histogram models. Shape analysis is performed first by using an arc-length orientation transform to break down a given contour into elementary segments and curves. An interpretation tree is then used as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments serve as constraints from the shape model and aid in generating a list of candidate matches. The candidate match with the highest confidence is chosen as the current intermediate result. Verification of intermediate results is performed by a feedback control loop.

  20. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
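    Pixel-wise logistic regression, as used here for the color-based segmentation, assigns each normalized pixel a foreground probability via a sigmoid over a linear score of its color features (a generic sketch; the weights below stand in for values learned from the manually labeled images):

```python
import math

def pixel_foreground_prob(color_features, weights, bias):
    """Pixel-wise logistic regression: P(foreground | color features)
    = sigmoid(bias + w . x)."""
    z = bias + sum(w * x for w, x in zip(weights, color_features))
    return 1.0 / (1.0 + math.exp(-z))
```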

  1. Historical Data Analysis of Hospital Discharges Related to the Amerithrax Attack in Florida

    PubMed Central

    Burke, Lauralyn K.; Brown, C. Perry; Johnson, Tammie M.

    2016-01-01

    Interrupted time-series analysis (ITSA) can be used to identify, quantify, and evaluate the magnitude and direction of an event on the basis of time-series data. This study evaluates the impact of the bioterrorist anthrax attacks (“Amerithrax”) on hospital inpatient discharges in the metropolitan statistical area of Palm Beach, Broward, and Miami-Dade counties in the fourth quarter of 2001. Three statistical methods—standardized incidence ratio (SIR), segmented regression, and an autoregressive integrated moving average (ARIMA)—were used to determine whether Amerithrax influenced inpatient utilization. The SIR found a non–statistically significant 2 percent decrease in hospital discharges. Although the segmented regression test found a slight increase in the discharge rate during the fourth quarter, it was also not statistically significant and therefore could not be attributed to Amerithrax. Diagnostics performed in preparation for ARIMA indicated that the quarterly time series was not serially correlated, which violated one of the assumptions of the ARIMA method, so the impact on the time-series data could not be properly evaluated. The lack of granularity in the time frames hindered the evaluation of the impact by all three analytic methods; this study demonstrates that the granularity of the data points is as important as the number of data points in a time series. ITSA is important for the ability to evaluate the impact that any hazard may have on inpatient utilization. Knowledge of hospital utilization patterns during disasters offers healthcare and civic professionals valuable information to plan for, respond to, mitigate, and evaluate outcomes stemming from biothreats. PMID:27843420
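    The segmented regression used in such an ITSA fits separate level and slope terms before and after the interruption. A minimal sketch of the standard design matrix (illustrative, not the study's code):

```python
import numpy as np

def segmented_regression(y, change_idx):
    """Fit the interrupted time-series model
    y ~ b0 + b1*t + b2*step + b3*(t - change_idx)*step,
    where step = 1 for t >= change_idx. Returns (baseline level,
    baseline trend, level change, trend change)."""
    t = np.arange(len(y), dtype=float)
    step = (t >= change_idx).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - change_idx) * step])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef
```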

  2. [Medical image segmentation based on the minimum variation snake model].

    PubMed

    Zhou, Changxiong; Yu, Shenglin

    2007-02-01

    It is difficult for the traditional parametric active contour (snake) model to automatically segment medical images with weak edges. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force that incorporates information from both the foreground and background regions, driving the curve to evolve under a criterion of minimum variation within the two regions. Experiments have shown that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, tests on noisy medical images filtered with an edge-preserving curvature flow filter show a significant improvement.

  3. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focus on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After principal component analysis (PCA) of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In experiments with 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic, and lumbar vertebrae.
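    The parametric model described above, a linear combination of principal component vectors around a mean shape, is the standard statistical shape model reconstruction, which can be sketched as:

```python
import numpy as np

def shape_from_coefficients(mean_shape, components, coeffs):
    """Statistical shape model: shape = mean + sum_i b_i * principal_component_i.
    `components` has one principal component vector per row."""
    return mean_shape + np.asarray(coeffs) @ np.asarray(components)
```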

  4. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull, and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  5. Off- and Along-Axis Slow Spreading Ridge Segment Characters: Insights From 3d Thermal Modeling

    NASA Astrophysics Data System (ADS)

    Gac, S.; Tisseau, C.; Dyment, J.

    2001-12-01

    Many observations along the Mid-Atlantic Ridge segments suggest a correlation between surface characters (length, axial morphology) and the thermal state of the segment. Thibaud et al. (1998) classify segments according to their thermal state: "colder" segments shorter than 30 km show weak magmatic activity, and "hotter" segments as long as 90 km show robust magmatic activity. The existence of such a correlation suggests that the thermal structure of a slow spreading ridge segment explains most of the surface observations. Here we test the physical coherence of such an integrated thermal model and evaluate it quantitatively. The different kinds of segment would constitute different phases in a segment's evolution, the segment evolving progressively from a "colder" to a "hotter" and back to a "colder" state; we test the consistency of this evolution scheme. To test these hypotheses we have developed a 3D numerical model of the thermal structure and evolution of a slow spreading ridge segment. The thermal structure is controlled by the geometry and dimensions of a permanently hot zone imposed beneath the segment center, in which the adiabatic ascent of magmatic material is simulated. To compare the model with the observations, several geophysical quantities that depend on the thermal state are simulated: crustal thickness variations along axis, gravity anomalies (reflecting density variations), and maximum earthquake depth (corresponding to the depth of the 750 °C isotherm). The thermal structure of a particular segment is constrained by comparing the simulated quantities to the observed ones. Considering realistic magnetization parameters, the magnetic anomalies generated from the same thermal structure and evolution reproduce the observed magnetic anomaly amplitude variations along the segment. The thermal structures accounting for the observations are determined for each kind of segment (from "colder" to "hotter"). The evolution of the thermal structure from the "colder" to the "hotter" segments gives credence to a temporal relationship between the different kinds of segment. The resulting thermal evolution model of slow spreading ridge segments may explain the rhombohedral shapes observed off-axis.

  6. CT-based manual segmentation and evaluation of paranasal sinuses.

    PubMed

    Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G

    2009-04-01

    Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm³, left side 17.9 cm³, right frontal sinus 4.2 cm³, left side 4.0 cm³, total frontal sinuses 7.9 cm³, sphenoid sinus right side 5.3 cm³, left side 5.5 cm³, total sphenoid sinus volume 11.2 cm³. Our manually segmented 3D-models present the patient's individual anatomy with a special focus on structures in danger according to the diverse colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods so far described provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.

  7. TARPARE: a method for selecting target audiences for public health interventions.

    PubMed

    Donovan, R J; Egger, G; Francas, M

    1999-06-01

    This paper presents a model to help the health promotion practitioner systematically compare and select appropriate target groups when a number of segments compete for attention and resources. TARPARE assesses previously identified segments on the following criteria: T: the Total number of persons in the segment; AR: the proportion of At-Risk persons in the segment; P: the Persuasibility of the target audience; A: the Accessibility of the target audience; R: the Resources required to meet the needs of the target audience; and E: Equity and social justice considerations. The assessment can be applied qualitatively, or scores can be assigned to each segment. Two examples are presented. TARPARE is a useful and flexible model for understanding the various segments in a population of interest and for assessing the potential viability of interventions directed at each segment. The model is particularly useful when segments must be prioritised within available budgets. It provides a disciplined approach to target selection and forces consideration of the weights that should be applied to the different criteria, and of how these might vary for different issues or objectives. TARPARE also assesses segments in terms of the overall likelihood of optimal impact for each segment. Targeting high-scoring segments is likely to lead to greater program success than targeting low-scoring segments.
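
    The quantitative variant of TARPARE reduces to a weighted sum of the six criterion scores per segment. A minimal sketch, with invented segment names, invented 1-10 scores and equal weights (the paper leaves both the scoring scale and the weights to the practitioner):

```python
CRITERIA = ("T", "AR", "P", "A", "R", "E")

def tarpare_score(scores, weights=None):
    """Weighted sum of the six TARPARE criterion scores for one segment."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(weights[c] * scores[c] for c in CRITERIA)

# Hypothetical segments, each scored 1-10 on every criterion.
segments = {
    "teen smokers":   {"T": 6, "AR": 9, "P": 4, "A": 7, "R": 5, "E": 8},
    "adult quitters": {"T": 8, "AR": 6, "P": 7, "A": 6, "R": 6, "E": 5},
}
ranked = sorted(segments, key=lambda s: tarpare_score(segments[s]), reverse=True)
```

    Changing the weights dictionary is how the "different weights for different issues or objectives" consideration enters the ranking.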

  8. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    SUMMARY As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and the variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman, a hierarchical model to link them together, and regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with the different forms of missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women's menstrual cycle trajectories throughout their late reproductive life and identifies change points for the mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  9. The X-Factor: an evaluation of common methods used to analyse major inter-segment kinematics during the golf swing.

    PubMed

    Brown, Susan J; Selbie, W Scott; Wallace, Eric S

    2013-01-01

    A common biomechanical feature of a golf swing, described in various ways in the literature, is the interaction between the thorax and pelvis, often termed the X-Factor. However, no consistent method is used within the golf biomechanics literature to calculate these segment interactions. The purpose of this study was to examine X-Factor data calculated using three reported methods in order to determine how similar the data produced by each method are. A twelve-camera three-dimensional motion capture system was used to capture the driver swings of 19 participants, and a subject-specific three-dimensional biomechanical model was created, with the position and orientation of each model estimated using a global optimisation algorithm. Comparison of the X-Factor methods showed significant differences for events during the swing (P < 0.05). Data for each kinematic measure were derived as a time series for all three methods, and regression analysis of these data showed that whilst one method could be successfully mapped to another, the mappings between methods are subject dependent (P < 0.05). Findings suggest that a consistent methodology considering the X-Factor from a joint angle approach is most insightful in describing a golf swing.

  10. Adiposity has no direct effect on carotid intima-media thickness in adolescents and young adults: Use of structural equation modeling to elucidate indirect & direct pathways.

    PubMed

    Gao, Zhiqian; Khoury, Philip R; McCoy, Connie E; Shah, Amy S; Kimball, Thomas R; Dolan, Lawrence M; Urbina, Elaine M

    2016-03-01

    Carotid intima-media thickness (cIMT) is associated with CV events in adults. Thicker cIMT is found in youth with CV risk factors including obesity. Which risk factors have the most effect upon cIMT in youth, and whether obesity has direct or indirect effects, is not known. We used structural equation modeling to elucidate direct and indirect pathways through which obesity and other risk factors were associated with cIMT. We collected demographics, anthropometrics and laboratory data on 784 subjects aged 10-24 years (mean 18.0 ± 3.3 years). Common, bulb and internal carotid cIMT were measured by ultrasound. Multivariable regression analysis was performed to assess independent determinants of cIMT. Analyses were repeated with structural equation modeling to determine direct and indirect effects. Multivariable regression models explained 11%-22% of the variation in cIMT. Age, sex and systolic blood pressure (BP) z-score were significant determinants of all cIMT segments. Body mass index (BMI) z-score, race, presence of type 2 diabetes mellitus (T2DM), hemoglobin A1c (HbA1c) and non-HDL were significant for some segments (all p = 0.05). The largest direct effect on cIMT was age (0.312), followed by BP (0.228), blood glucose control (0.108) and non-HDL (0.134). BMI only had a significant indirect effect, through blood glucose control, BP and non-HDL. High-sensitivity C-reactive protein (CRP) had a small indirect effect through blood glucose control (all p = 0.05). Age and BP are the major factors with a direct effect on cIMT. Glucose and non-HDL were also important in this cohort with a high prevalence of T2DM. BMI only has indirect effects, through other risk factors. Traditional CV risk factors have important direct effects on cIMT in the young, but adiposity exerts its influence only through other CV risk factors. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Stress distribution pattern of screw-retained restorations with segmented vs. non-segmented abutments: A finite element analysis

    PubMed Central

    Aalaei, Shima; Rajabi Naraki, Zahra; Nematollahi, Fatemeh; Beyabanaki, Elaheh; Shahrokhi Rad, Afsaneh

    2017-01-01

    Background. Screw-retained restorations are favored in some clinical situations such as limited inter-occlusal spaces. This study was designed to compare stresses developed in the peri-implant bone in two different types of screw-retained restorations (segmented vs. non-segmented abutment) using a finite element model. Methods. An implant, 4.1 mm in diameter and 10 mm in length, was placed in the first molar site of a mandibular model with 1 mm of cortical bone on the buccal and lingual sides. Segmented and non-segmented screw abutments with their crowns were placed on the simulated implant in each model. After loading (100 N, axial and 45° non-axial), von Mises stress was recorded using ANSYS software, version 12.0.1. Results. The maximum stresses in the non-segmented abutment screw were less than those of segmented abutment (87 vs. 100, and 375 vs. 430 MPa under axial and non-axial loading, respectively). The maximum stresses in the peri-implant bone for the model with segmented abutment were less than those of non-segmented ones (21 vs. 24 MPa, and 31 vs. 126 MPa under vertical and angular loading, respectively). In addition, the micro-strain of peri-implant bone for the segmented abutment restoration was less than that of non-segmented abutment. Conclusion. Under axial and non-axial loadings, non-segmented abutment showed less stress concentration in the screw, while there was less stress and strain in the peri-implant bone in the segmented abutment. PMID:29184629

  12. Deformable M-Reps for 3D Medical Image Segmentation.

    PubMed

    Pizer, Stephen M; Fletcher, P Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z; Fridman, Yonatan; Fritsch, Daniel S; Gash, Graham; Glotzer, John M; Jiroutek, Michael R; Lu, Conglin; Muller, Keith E; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L

    2003-11-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures - each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.

  14. Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.

    PubMed

    Pehkonen, Petri; Wong, Garry; Törönen, Petri

    2010-01-01

    Segmentation aims to separate homogeneous regions from sequential data and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation: locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requiring user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most heuristic segmentation algorithms require some decision on the number of segments, usually made using asymptotic model selection methods such as the Bayesian information criterion. Such methods rely on simplifications that can limit their usage. In this paper, we propose a Bayesian model selection procedure for choosing the most appropriate result from heuristic segmentation. Our Bayesian model uses a simple prior for segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. Applying our method to yeast cell-cycle gene expression data reveals potentially active and passive regions of the genome.
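
    As a toy illustration of choosing the number of segments, the sketch below pairs greedy top-down splitting (a common segmentation heuristic, not the authors' algorithm) with the asymptotic Bayesian information criterion the paper uses as its point of comparison, rather than the authors' own Bayesian criterion:

```python
import math

def sse(seg):
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def greedy_segments(y, k):
    """Top-down greedy splitting into k piecewise-constant segments."""
    segs = [list(y)]
    while len(segs) < k:
        best = None                     # (SSE gain, segment index, split point)
        for i, s in enumerate(segs):
            for j in range(1, len(s)):
                gain = sse(s) - sse(s[:j]) - sse(s[j:])
                if best is None or gain > best[0]:
                    best = (gain, i, j)
        _, i, j = best
        segs[i:i + 1] = [segs[i][:j], segs[i][j:]]
    return segs

def bic(y, segs):
    """n*log(RSS/n) + p*log(n): one mean per segment plus k-1 boundaries."""
    n = len(y)
    rss = sum(sse(s) for s in segs)
    p = 2 * len(segs) - 1
    return n * math.log(rss / n) + p * math.log(n)

data = [1.0, 1.1, 0.9, 1.0, 5.0, 5.2, 4.9, 5.1]
k_best = min(range(1, 4), key=lambda k: bic(data, greedy_segments(data, k)))
```

    BIC's penalty term stops the greedy splitter from over-segmenting noise; the paper's contribution is a fully Bayesian replacement for exactly this selection step.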

  15. Segmenting the thoracic, abdominal and pelvic musculature on CT scans combining atlas-based model and active contour model

    NASA Astrophysics Data System (ADS)

    Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.

    2013-03-01

    Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. This problem has not been solved completely, especially across the thoracic, abdominal and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model, and prior segmentation of fat and bones. First, the body contour, fat and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained by the pre-segmented bone and fat. Before the ACM refinement, the atlas model initializing the next slice is updated using the previous slice's atlas. The muscle is segmented by thresholding and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method; five key-position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1% ± 3.5%, and that of false positives is 5.5% ± 4.2%.

  16. [RS estimation of inventory parameters and carbon storage of moso bamboo forest based on synergistic use of object-based image analysis and decision tree].

    PubMed

    Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie

    2017-10-01

    By synergistically using object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution, the inventory indexes (diameter at breast height, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating the multi-scale image segmentation of the OBIA technique with CART, which connected the image objects at various scales, with a good producer's accuracy of 89.1%. The indexes estimated by the regression tree model constructed from features extracted from the image objects reached moderate or better accuracy, with the crown closure model achieving the best estimation accuracy of 67.9%. The estimation accuracy for diameter at breast height and tree height was relatively low, consistent with the conclusion that estimating these parameters from optical remote sensing rarely achieves satisfactory results. Estimation of AGC reached relatively high accuracy, and the accuracy in high-value regions exceeded 80%.
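
    A CART regression tree is grown by repeatedly picking the feature/threshold split that most reduces target variance. A minimal sketch of that split search, with invented feature names and crown-closure values (the study's actual features come from OBIA image objects):

```python
def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def best_split(X, y):
    """Exhaustively find the (feature, threshold) split that most reduces
    target variance -- the step a CART regression tree applies recursively."""
    base, best = variance(y), None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            mixed = (len(left) * variance(left)
                     + len(right) * variance(right)) / len(y)
            if best is None or base - mixed > best[0]:
                best = (base - mixed, f, t)
    return best  # (variance reduction, feature index, threshold)

# Invented samples: features [NDVI, texture]; target: crown closure (%).
X = [[0.2, 5], [0.3, 6], [0.7, 5], [0.8, 7]]
y = [20, 25, 70, 80]
gain, feat, thresh = best_split(X, y)
```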

  17. Liver volume measurement: reasons for the difference between in vivo CT volumetry and intraoperative ex vivo determination, and how to cope with it.

    PubMed

    Niehues, Stefan M; Unger, J K; Malinowski, M; Neymeyer, J; Hamm, B; Stockmann, M

    2010-08-20

    Volumetric assessment of the liver regularly yields discrepant results between pre- and intraoperatively determined volumes. Nevertheless, the main factor responsible for this discrepancy remains unclear. The aim of this study was to systematically determine the difference between in vivo CT volumetry and ex vivo volumetry in a pig model. Eleven pigs were studied. Liver density assessment, CT volumetry and water displacement volumetry were performed after surgical removal of the complete liver. Known possible sources of volume determination error, such as resection or segmentation borders, were eliminated in this model. Regression analysis was performed and the differences between CT volumetry and water displacement determined. Median liver density was 1.07 g/ml. Regression analysis showed a high correlation of r² = 0.985 between CT volumetry and water displacement. CT volumetry was found to be 13% higher than water displacement volumetry (p < 0.0001). In this study, the only relevant factor leading to the difference between in vivo CT volumetry and ex vivo water displacement volumetry appears to be blood perfusion of the liver. This systematic difference of 13 percent has to be taken into account when dealing with such measurements.
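
    The practical consequence of the reported 13% systematic excess is a simple correction: if CT volumetry reads 13% higher than water displacement, the ex vivo volume can be approximated by dividing the CT volume by 1.13. A one-line sketch (the ratio is the study's pooled figure, not a per-animal calibration):

```python
def ex_vivo_estimate(ct_volume_ml, ratio=1.13):
    """Estimate water-displacement (ex vivo) liver volume from in vivo
    CT volumetry, given that CT reads `ratio` times higher on average."""
    return ct_volume_ml / ratio
```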

  18. PSNet: prostate segmentation on MRI based on a convolutional neural network.

    PubMed

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2018-04-01

    Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
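
    The Dice similarity coefficient used to score PSNet against the manual ground truth is 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch on flattened masks (the pixel values are invented):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two
    binary masks given as flat 0/1 sequences."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # predicted prostate pixels
truth = [0, 1, 1, 0, 0, 0]   # manual ground truth
```

    Here the overlap is 2 pixels against mask sizes 3 and 2, giving a Dice score of 0.8.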

  19. Active appearance model and deep learning for more accurate prostate segmentation on MRI

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.

    2016-03-01

    Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.

  20. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  1. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059

  2. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266

  3. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become increasingly popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between the different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to enhance the visual appearance of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia images. The different colour components of the RGB and HSI colour models were analyzed to identify the component that gives the best segmentation performance. The segmented images are then processed with a median filter and a region-growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, compared with the other colour components of the RGB and HSI colour models.
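
    The pipeline above can be miniaturized: convert RGB pixels to HSI saturation, then cluster the saturation values with k-means. The sketch uses plain Lloyd 2-means rather than the paper's moving k-means variant, and invented pixel values:

```python
def saturation(rgb):
    """HSI saturation: S = 1 - 3*min(R, G, B) / (R + G + B)."""
    r, g, b = rgb
    total = r + g + b
    return 1.0 - 3.0 * min(r, g, b) / total if total else 0.0

def two_means(values, iters=20):
    """Plain Lloyd 2-means on scalar values, initialized at the extremes."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return c0, c1

# Invented pixels: pale background/red cells vs strongly stained nuclei.
pixels = [(200, 200, 210), (190, 195, 200), (90, 30, 140), (100, 20, 150)]
lo, hi = two_means([saturation(p) for p in pixels])
```

    The two cluster centres end up far apart on the saturation axis, which is why the saturation component separates stained nuclei so cleanly.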

  4. Multi-object model-based multi-atlas segmentation for rodent brains using dense discrete correspondences

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Styner, Martin

    2016-03-01

    The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that are closely interfacing with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieved greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.

  5. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    PubMed

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median < 3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2 mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Segmenting words from natural speech: subsegmental variation in segmental cues.

    PubMed

    Rytting, C Anton; Brew, Chris; Fosler-Lussier, Eric

    2010-06-01

    Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We use this new representation to re-evaluate a key computational model of word segmentation. One finding is that high levels of phonetic variability degrade the model's performance. While robustness to phonetic variability may be intrinsically valuable, this finding needs to be complemented by parallel studies of the actual abilities of children to segment phonetically variable speech.
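
    The symbolic-transcript models this paper re-evaluates often segment at dips in transitional probability between adjacent units. A minimal sketch of that idea on an invented syllable corpus (this is the symbolic baseline being questioned, not the acoustic representation or the specific model studied in the paper):

```python
from collections import Counter

def transitional_probs(corpus):
    """Estimate P(b | a) from bigram counts over symbol sequences."""
    uni, bi = Counter(), Counter()
    for seq in corpus:
        uni.update(seq[:-1])
        bi.update(zip(seq, seq[1:]))
    return lambda a, b: bi[(a, b)] / uni[a] if uni[a] else 0.0

def segment(seq, tp):
    """Posit a word boundary at each local minimum of transitional probability."""
    probs = [tp(a, b) for a, b in zip(seq, seq[1:])]
    words, start = [], 0
    for i in range(1, len(probs) - 1):
        if probs[i] < probs[i - 1] and probs[i] < probs[i + 1]:
            words.append(seq[start:i + 1])
            start = i + 1
    words.append(seq[start:])
    return words

# Invented syllable corpus: "pre ty" and "bay bee" always recur intact,
# so transitional probability dips only at word boundaries.
corpus = [["pre", "ty", "bay", "bee"],
          ["bay", "bee", "pre", "ty"],
          ["pre", "ty", "pre", "ty"],
          ["bay", "bee", "bay", "bee"]]
tp = transitional_probs(corpus)
```

    The paper's point is precisely that such crisp symbol sequences are an idealization: once phonetic variability enters the representation, these probability dips become much noisier.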

  7. Deep convolutional neural network for prostate MR segmentation

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei

    2017-03-01

    Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
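The Dice similarity coefficient reported above is a standard overlap measure between an automatic and a manual segmentation: twice the intersection of the two masks divided by the sum of their sizes. A minimal sketch of the metric (not the authors' code), assuming two binary masks of equal shape:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0
```

A score of 1.0 means perfect overlap; the 85.3% above means the CNN and manual masks overlap strongly.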

  8. Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.

    PubMed

    Hao, J T; Li, M L; Tang, F L

    2008-01-01

Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to obtain a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterative Conditional Model (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm obtains more satisfactory segmentation results while requiring less processing time than conventional approaches.

  9. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches rely on good initialization and can easily become trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  10. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    NASA Astrophysics Data System (ADS)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.

  11. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time-consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity-based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Coordination of Fictive Motor Activity in the Larval Zebrafish Is Generated by Non-Segmental Mechanisms

    PubMed Central

    Wiggin, Timothy D.; Peck, Jack H.; Masino, Mark A.

    2014-01-01

    The cellular and network basis for most vertebrate locomotor central pattern generators (CPGs) is incompletely characterized, but organizational models based on known CPG architectures have been proposed. Segmental models propose that each spinal segment contains a circuit that controls local coordination and sends longer projections to coordinate activity between segments. Unsegmented/continuous models propose that patterned motor output is driven by gradients of neurons and synapses that do not have segmental boundaries. We tested these ideas in the larval zebrafish, an animal that swims in discrete episodes, each of which is composed of coordinated motor bursts that progress rostrocaudally and alternate from side to side. We perturbed the spinal cord using spinal transections or strychnine application and measured the effect on fictive motor output. Spinal transections eliminated episode structure, and reduced both rostrocaudal and side-to-side coordination. Preparations with fewer intact segments were more severely affected, and preparations consisting of midbody and caudal segments were more severely affected than those consisting of rostral segments. In reduced preparations with the same number of intact spinal segments, side-to-side coordination was more severely disrupted than rostrocaudal coordination. Reducing glycine receptor signaling with strychnine reversibly disrupted both rostrocaudal and side-to-side coordination in spinalized larvae without disrupting episodic structure. Both spinal transection and strychnine decreased the stability of the motor rhythm, but this effect was not causal in reducing coordination. These results are inconsistent with a segmented model of the spinal cord and are better explained by a continuous model in which motor neuron coordination is controlled by segment-spanning microcircuits. PMID:25275377

  13. A new medical image segmentation model based on fractional order differentiation and level set

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Huang, Shan; Xie, Feifei; Li, Lihong; Chen, Wensheng; Liang, Zhengrong

    2018-03-01

Segmenting medical images is still a challenging task for both traditional local and global methods because of image intensity inhomogeneity. In this paper, two contributions are made: (i) a new hybrid model is proposed for medical image segmentation, built on fractional order differentiation, a level set description and curve evolution; and (ii) three popular definitions of fractional order differentiation, Fourier-domain, Grünwald-Letnikov (G-L) and Riemann-Liouville (R-L), are investigated and compared through experimental results. Because fractional order differentiation enhances the high-frequency features of images while preserving their low-frequency features in a nonlinear manner, one fractional order differentiation definition is used in our hybrid model to segment inhomogeneous images. The proposed hybrid model also integrates fractional order differentiation, fractional order gradient magnitude and difference image information. The widely-used Dice similarity coefficient metric is employed to evaluate the segmentation results quantitatively. Firstly, the experiments demonstrated that only a slight difference exists among the three definitions of Fourier-domain, G-L and R-L fractional order differentiation. This outcome supports our selection of one of the three definitions in our hybrid model. Secondly, further experiments were performed to compare our hybrid segmentation model with other existing segmentation models. A noticeable gain was seen with our hybrid model in segmenting intensity-inhomogeneous images.
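The Grünwald-Letnikov definition compared above has a simple discrete form on a uniform grid: D^α f(x) ≈ h^(−α) Σ_k (−1)^k C(α, k) f(x − kh), where C(α, k) is the generalized binomial coefficient. A sketch of that textbook formula (the function name and the truncated-sum handling at the left edge are our own; this is not the authors' implementation):

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha for samples f
    on a uniform grid with spacing h. The first entries use only the samples
    available so far (truncated backward sum)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # G-L weights w_k = (-1)^k * C(alpha, k), built by the recurrence
    # w_k = w_{k-1} * (k - 1 - alpha) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.empty(n)
    for i in range(n):
        # Backward sum: sum_k w_k * f[i - k], scaled by h^(-alpha).
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
    return out
```

For alpha = 1 this reduces to the backward difference, and for alpha = 0 to the identity, which gives a quick sanity check.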

  14. Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks

    DTIC Science & Technology

    2016-08-11

Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks. Junsung Kim, Department of Electrical and Computer Engineering, Carnegie... Contents include: Application of a Multi-Segment Self-Suspending Real-Time Task Model; Fixed Priority Scheduling...; Figure 2: A multi-segment self-suspending real-time task model.

  15. Simultaneous Nonrigid Registration, Segmentation, and Tumor Detection in MRI Guided Cervical Cancer Radiation Therapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR data from six patients, each with six T2-weighted MR cervical images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. PMID:22328178

  16. Automatic localization of bifurcations and vessel crossings in digital fundus photographs using location regression

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Dumitrescu, Alina V.; van Ginneken, Bram; Abrámoff, Michael D.

    2011-03-01

Parameters extracted from the vasculature on the retina are correlated with various conditions such as diabetic retinopathy and cardiovascular diseases such as stroke. Segmentation of the vasculature on the retina has been a topic that has received much attention in the literature over the past decade. Analysis of the segmentation result, however, has only received limited attention, with most works describing methods to accurately measure the width of the vessels. Analyzing the connectedness of the vascular network is an important step towards the characterization of the complete vascular tree. The retinal vascular tree, from an image interpretation point of view, originates at the optic disc and spreads out over the retina. The tree bifurcates and the vessels also cross each other. The points where this happens form the key to determining the connectedness of the complete tree. We present a supervised method to detect the bifurcations and crossing points of the vasculature of the retina. The method uses features extracted from the vasculature as well as the image in a location regression approach to find those locations of the segmented vascular tree where a bifurcation or crossing occurs (hereafter POI, points of interest). We evaluate the method on the publicly available DRIVE database, in which an ophthalmologist has marked the POI.

  17. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.

Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contour, that can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. 
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method only needs two pretreatment imaging data sets as prior knowledge, is independent of patient gender and patient treatment position and has the possibility to manually adapt the segmentation locally.

  18. Examining Differential Resilience Mechanisms by Comparing 'Tipping Points' of the Effects of Neighborhood Conditions on Anxiety by Race/Ethnicity.

    PubMed

    Coman, Emil Nicolae; Wu, Helen Zhao

    2018-02-20

Exposure to adverse environmental and social conditions affects physical and mental health through complex mechanisms. Different racial/ethnic (R/E) groups may be more or less vulnerable to the same conditions, and the resilience mechanisms that can protect them likely operate differently in each population. We investigate how adverse neighborhood conditions (neighborhood disorder, NDis) differentially impact mental health (anxiety, Anx) in a sample of white and Black (African American) young women from Southeast Texas, USA. We illustrate a simple yet underutilized segmented regression model where linearity is relaxed to allow for a shift in the strength of the effect with the levels of the predictor. We compare how these effects change within R/E groups with the level of the predictor, but also how the "tipping points," where the effects change in strength, may differ by R/E. We find with classic linear regression that neighborhood disorder adversely affects Black women's anxiety, while in white women the effect seems negligible. Segmented regressions show that the NDis → Anx effects in both groups of women appear to shift at similar levels, about one-fifth of a standard deviation below the mean of NDis, but the effect for Black women appears to start out as negative, then shifts in sign, i.e., to increase anxiety, while for white women, the opposite pattern emerges. Our findings can aid in devising better strategies for reducing health disparities that take into account different coping or resilience mechanisms operating differentially at distinct levels of adversity. We recommend that researchers investigate when adversity becomes exceedingly harmful and whether this happens differentially in distinct populations, so that intervention policies can be planned to reverse conditions that are more amenable to change, in effect pushing back the overall social risk factors below such tipping points.
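The segmented regression idea described above, a linear fit whose slope is allowed to shift at a "tipping point," can be written as y = b0 + b1·x + b2·max(0, x − c), where c is the breakpoint and b1 and b1 + b2 are the slopes on either side of it. A minimal sketch of one common estimation strategy, a grid search over candidate breakpoints with ordinary least squares at each (not the authors' procedure, which the abstract does not specify):

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Fit y = b0 + b1*x + b2*max(0, x - c) by scanning candidate
    breakpoints c and keeping the one with the lowest residual SSE.
    Returns (c, coeffs); the slope left of c is b1, right of c is b1 + b2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for c in candidates:
        # Design matrix: intercept, linear term, and hinge term at c.
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ coeffs) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, coeffs)
    return best[1], best[2]
```

With data whose slope truly changes at some level of the predictor, the recovered c plays the role of the "tipping point" and b2 measures how much the effect strengthens or weakens past it.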

  19. Interactive 3D segmentation using connected orthogonal contours.

    PubMed

    de Bruin, P W; Dercksen, V J; Post, F H; Vossepoel, A M; Streekstra, G J; Vos, F M

    2005-05-01

This paper describes a new method for interactive segmentation that is based on cross-sectional design and 3D modelling. The method represents a 3D model by a set of connected contours that are planar and orthogonal. Planar contours overlaid on image data are easily manipulated, and linked contours reduce the amount of user interaction. This method solves the contour-to-contour correspondence problem and can capture extrema of objects in a more flexible way than manual segmentation of a stack of 2D images. The resulting 3D model is guaranteed to be free of geometric and topological errors. We show that manual segmentation using connected orthogonal contours has great advantages over conventional manual segmentation. Furthermore, the method provides effective feedback and control for creating an initial model for, and control and steering of, (semi-)automatic segmentation methods.

  20. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    PubMed

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

The quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because of its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging of them is prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater care is required to ensure segmentation accuracy. The authors therefore propose a novel registration-based method, built on an articulated hand model, to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first design a model-based registration, driven by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry, to align the reference model with the target bone regions of a given postural image. They then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones in all other postural images. Compared to the ground truth from two experts, the resulting surfaces had an average margin of error within 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by Dice similarity coefficient and demonstrated better segmentation results than conventional methods. 
The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and obtain more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the bone segmentation results. The proposed method is found helpful for understanding hand bone geometries in dynamic postures that can be used in simulating 3D hand motion through multipostural MR images.

  1. Child Schooling in Ethiopia: The Role of Maternal Autonomy.

    PubMed

    Gebremedhin, Tesfaye Alemayehu; Mohanty, Itismita

    2016-01-01

    This paper examines the effects of maternal autonomy on child schooling outcomes in Ethiopia using a nationally representative Ethiopian Demographic and Health survey for 2011. The empirical strategy uses a Hurdle Negative Binomial Regression model to estimate years of schooling. An ordered probit model is also estimated to examine age grade distortion using a trichotomous dependent variable that captures three states of child schooling. The large sample size and the range of questions available in this dataset allow us to explore the influence of individual and household level social, economic and cultural factors on child schooling. The analysis finds statistically significant effects of maternal autonomy variables on child schooling in Ethiopia. The roles of maternal autonomy and other household-level factors on child schooling are important issues in Ethiopia, where health and education outcomes are poor for large segments of the population.

  2. A Novel Gradient Vector Flow Snake Model Based on Convex Function for Infrared Image Segmentation

    PubMed Central

    Zhang, Rui; Zhu, Shiping; Zhou, Qin

    2016-01-01

Infrared image segmentation is a challenging topic because infrared images are characterized by high noise, low contrast, and weak edges. Active contour models, especially gradient vector flow (GVF), have several advantages for infrared image segmentation. However, the GVF model also has some drawbacks, including a dilemma between noise smoothing and weak edge protection, which significantly degrades infrared image segmentation. To solve this problem, we propose a novel generalized gradient vector flow snake model combining the GGVF (Generic Gradient Vector Flow) and NBGVF (Normally Biased Gradient Vector Flow) models. We also adopt a new coefficient setting in the form of a convex function to improve the ability to protect weak edges while smoothing noise. Experimental results and comparisons against other methods indicate that our proposed snake model performs better in infrared image segmentation than other snake models. PMID:27775660

  3. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    NASA Astrophysics Data System (ADS)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. To obtain a complete electromagnetic model, the coil is divided into two segments: the straight coil segment and the corner coil segment. When only the first harmonic of the flux density distribution of a Halbach magnet array is taken into account, the integration over the two segments can be carried out according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is used in this paper to solve the corner segment. Simulation and experiment validate that the proposed model has high accuracy and can easily be applied in practice.
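The compound (composite) Simpson rule applied to the corner-coil segment above is a standard quadrature formula: the interval is split into an even number of subintervals and the integrand is weighted 1, 4, 2, 4, ..., 4, 1. A generic sketch of the rule itself, unrelated to the motor model:

```python
def composite_simpson(f, a, b, n):
    """Composite (compound) Simpson's rule for the integral of f over [a, b]
    using n subintervals; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate weights 4 (odd index) and 2 (even index).
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0
```

The rule is exact for polynomials up to degree three, which is why it is a common choice when a closed-form antiderivative (as for the corner segment here) is unavailable.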

  4. Do Three-dimensional Visualization and Three-dimensional Printing Improve Hepatic Segment Anatomy Teaching? A Randomized Controlled Study.

    PubMed

    Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Li, Jianyi; Huang, Wenhua

    2016-01-01

Hepatic segment anatomy is difficult for medical students to learn. Three-dimensional visualization (3DV) is a useful tool in anatomy teaching, but current models do not capture haptic qualities. However, three-dimensional printing (3DP) can produce highly accurate complex physical models. Therefore, in this study we aimed to develop a novel 3DP hepatic segment model and compare the teaching effectiveness of a 3DV model, a 3DP model, and a traditional anatomical atlas. A healthy candidate (female, 50 years old) was recruited and scanned with computed tomography. After three-dimensional (3D) reconstruction, the computed 3D images of the hepatic structures were obtained. The parenchyma model was divided into 8 hepatic segments to produce the 3DV hepatic segment model. The computed 3DP model was designed by removing the surrounding parenchyma and leaving the segmental partitions. Then, 6 experts evaluated the 3DV and 3DP models using a 5-point Likert scale. A randomized controlled trial was conducted to evaluate the educational effectiveness of these models compared with that of the traditional anatomical atlas. The 3DP model successfully displayed the hepatic segment structures with partitions. All experts agreed or strongly agreed that the 3D models provided good realism for anatomical instruction, with no significant differences between the 3DV and 3DP models in each index (p > 0.05). Additionally, the teaching results show that the 3DV and 3DP models were significantly better than the traditional anatomical atlas in the first and second examinations (p < 0.05). Between the first and second examinations, only the traditional method group had significant declines (p < 0.05). A novel 3DP hepatic segment model was successfully developed. Both the 3DV and 3DP models could improve anatomy teaching significantly. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  5. Left atrial appendage segmentation and quantitative assisted diagnosis of atrial fibrillation based on fusion of temporal-spatial information.

    PubMed

    Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie

    2018-05-01

In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models and then generate a "volume-phase" curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than the conventional approaches that calculate metrics by area, and allows for the quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into the individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
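The volume-based ejection fraction described above follows directly from the extremes of the volume-phase curve: (Vmax − Vmin) / Vmax. A minimal sketch of that calculation, assuming volumes sampled over one cardiac cycle (the function name and returned fields are illustrative, not from the paper; the paper's filling/emptying fluxes would additionally need the phase timing):

```python
def laa_volume_metrics(volumes):
    """Simple volume-based metrics from an LAA 'volume-phase' curve:
    per-cycle ejection fraction and total emptied volume."""
    vmax, vmin = max(volumes), min(volumes)
    return {
        "ejection_fraction": (vmax - vmin) / vmax,
        "emptying_volume": vmax - vmin,
    }
```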

  6. Single spot albumin to creatinine ratio: A simple marker of long-term prognosis in non-ST segment elevation acute coronary syndromes.

    PubMed

    Higa, Claudio Cesar; Novo, Fedor Anton; Nogues, Ignacio; Ciambrone, Maria Graciana; Donato, Maria Sol; Gambarte, Maria Jimena; Rizzo, Natalia; Catalano, Maria Paula; Korolov, Eugenio; Comignani, Pablo Dino

    2016-01-01

    Microalbuminuria is a known risk factor for cardiovascular morbidity and mortality suggesting that it should be a marker of endothelial dysfunction. Albumin to creatinine ratio (ACR) is an available and rapid test for microalbuminuria determination, with a high correlation with the 24-h urine collection method. There is no prospective study that evaluates the prognostic value of ACR in patients with non ST-segment elevation acute coronary syndromes (NSTE-ACS). The purpose of our study was to detect the long-term prognostic value of ACR in patients with NSTE-ACS. Albumin to creatinine ratio was estimated in 700 patients with NSTE-ACS at admission. Median follow-up time was 18 months. The best cutoff point of ACR for death or acute myocardial infarction was 20 mg/g. Twenty-two percent of patients had elevated ACR. By multivariable Cox regression analysis, ACR was an independent predictor of the clinical endpoint: odds ratio 5.8 (95% confidence interval [CI] 2-16), log-rank 2 p < 0.0001 in a model including age > 65 years, female gender, diabetes mellitus, creatinine clearance, glucose levels at admission, elevated cardiac markers (troponin T/CK-MB) and ST segment depression. The addition of ACR significantly improved GRACE score C-statistics from 0.69 (95% CI 0.59-0.83) to 0.77 (95% CI 0.65-0.88), SE 0.04, 2 p = 0.03, with a good calibration with both models. Albumin to creatinine ratio is an independent and accessible predictor of long-term adverse outcomes in NSTE-ACS, providing additional value for risk stratification.

  7. Three-dimensional lung tumor segmentation from x-ray computed tomography using sparse field active models.

    PubMed

    Awad, Joseph; Owrangi, Amir; Villemaire, Lauren; O'Riordan, Elaine; Parraga, Grace; Fenster, Aaron

    2012-02-01

    Manual segmentation of lung tumors is observer-dependent and time-consuming but an important component of radiology and radiation oncology workflow. The objective of this study was to generate an automated lung tumor measurement tool for segmentation of pulmonary metastatic tumors from x-ray computed tomography (CT) images to improve reproducibility and decrease the time required to segment tumor boundaries. The authors developed an automated lung tumor segmentation algorithm for volumetric image analysis of chest CT images using shape-constrained Otsu multithresholding (SCOMT) and sparse field active surface (SFAS) algorithms. The observer was required to select the tumor center and the SCOMT algorithm subsequently created an initial surface that was deformed using level set SFAS to minimize the total energy consisting of mean separation, edge, partial volume, rolling, distribution, background, shape, volume, smoothness, and curvature energies. The proposed segmentation algorithm was compared to manual segmentation whereby 21 tumors were evaluated using one-dimensional (1D) Response Evaluation Criteria In Solid Tumors (RECIST), two-dimensional (2D) World Health Organization (WHO), and 3D volume measurements. Linear regression goodness-of-fit measures (r² = 0.63, p < 0.0001; r² = 0.87, p < 0.0001; and r² = 0.96, p < 0.0001), and Pearson correlation coefficients (r = 0.79, p < 0.0001; r = 0.93, p < 0.0001; and r = 0.98, p < 0.0001) for 1D, 2D, and 3D measurements, respectively, showed significant correlations between manual and algorithm results. Intra-observer intraclass correlation coefficients (ICC) demonstrated high reproducibility for algorithm (0.989-0.995, 0.996-0.997, and 0.999-0.999) and manual measurements (0.975-0.993, 0.985-0.993, and 0.980-0.992) for 1D, 2D, and 3D measurements, respectively. 
The intra-observer coefficient of variation (CV%) was low for algorithm (3.09%-4.67%, 4.85%-5.84%, and 5.65%-5.88%) and manual observers (4.20%-6.61%, 8.14%-9.57%, and 14.57%-21.61%) for 1D, 2D, and 3D measurements, respectively. The authors developed an automated segmentation algorithm requiring only that the operator select the tumor to measure pulmonary metastatic tumors in 1D, 2D, and 3D. Algorithm and manual measurements were significantly correlated. Since the algorithm segmentation involves selection of a single seed point, it resulted in reduced intra-observer variability and decreased time for making the measurements.
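    The SCOMT initialization builds on Otsu-style thresholding. A minimal single-threshold Otsu sketch (histogram-based, pure Python; the authors' shape-constrained multithreshold variant is more elaborate) is:

```python
def otsu_threshold(pixels, nbins=256):
    """Single Otsu threshold: maximize between-class variance over bin splits."""
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / nbins or 1.0
    hist = [0] * nbins
    for p in pixels:
        hist[min(int((p - lo) / width), nbins - 1)] += 1
    total = len(pixels)
    # weighted sum of all bin centers, used to get the upper-class mean cheaply
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    w0 = sum0 = 0.0
    best_var, best_t = -1.0, lo
    for i, h in enumerate(hist):
        w0 += h
        if w0 == 0 or w0 == total:
            continue
        sum0 += (lo + (i + 0.5) * width) * h
        m0 = sum0 / w0                       # mean of lower class
        m1 = (sum_all - sum0) / (total - w0)  # mean of upper class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t
```

    On a bimodal intensity sample the returned threshold separates the two modes, which is the property the seed-surface construction relies on.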

  8. Three-dimensional reconstruction of coronary arteries and its application in localization of coronary artery segments corresponding to myocardial segments identified by transthoracic echocardiography.

    PubMed

    Zhong, Chunyan; Guo, Yanli; Huang, Haiyun; Tan, Liwen; Wu, Yi; Wang, Wenting

    2013-01-01

    To establish 3D models of coronary arteries (CA) and study their application in localization of CA segments identified by Transthoracic Echocardiography (TTE). Sectional images of the heart collected from the first CVH dataset and contrast CT data were used to establish 3D models of the CA. Virtual dissection was performed on the 3D models to simulate the conventional sections of TTE. Then, we used 2D ultrasound, speckle tracking imaging (STI), and 2D ultrasound plus 3D CA models to diagnose 170 patients and compare the results to coronary angiography (CAG). 3D models of CA distinctly displayed both 3D structure and 2D sections of CA. This simulated TTE imaging in any plane and showed the CA segments that corresponded to 17 myocardial segments identified by TTE. The localization accuracy showed a significant difference between 2D ultrasound and 2D ultrasound plus 3D CA model in the severe stenosis group (P < 0.05) and in the mild-to-moderate stenosis group (P < 0.05). These innovative modeling techniques help clinicians identify the CA segments that correspond to myocardial segments typically shown in TTE sectional images, thereby increasing the accuracy of the TTE-based diagnosis of CHD.

  9. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  10. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  11. The Potential for Elimination of Racial-Ethnic Disparities in HIV Treatment Initiation in the Medicaid Population among 14 Southern States

    PubMed Central

    Zhang, Shun; McGoy, Shanell L.; Dawes, Daniel; Fransua, Mesfin; Rust, George; Satcher, David

    2014-01-01

    Objectives. The purpose of this study was to explore the racial and ethnic disparities in initiation of antiretroviral treatment (ARV treatment or ART) among HIV-infected Medicaid enrollees 18–64 years of age in 14 southern states, which have high prevalence of HIV/AIDS and high racial disparities in HIV treatment access and mortality. Methods. We used Medicaid claims data from 2005 to 2007 for a retrospective cohort study. We compared frequency variances of HIV treatment uptake among persons of different racial-ethnic groups using univariate and multivariate methods. The unadjusted odds ratio was estimated through multinomial logistic regression. The multinomial logistic regression model was repeated with adjustment for multiple covariates. Results. Of the 23,801 Medicaid enrollees who met criteria for initiation of ARV treatment, only one-third (34.6%) received ART consistent with national guideline treatment protocols, and 21.5% received some ARV medication, but with sub-optimal treatment profiles. There was no significant difference in the proportion of people who received ARV treatment between black (35.8%) and non-Hispanic white (35.7%) persons, but Hispanic/Latino persons (26%) were significantly less likely to receive ARV treatment. Conclusions. Overall ARV treatment levels for all segments of the population are less than optimal. Among the Medicaid population there are no racial HIV treatment disparities between Black and White persons living with HIV, which suggests the potential relevance of Medicaid to currently uninsured populations, and the potential to achieve similar levels of equality within Medicaid for Hispanic/Latino enrollees and other segments of the Medicaid population. PMID:24769625

  12. Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans.

    PubMed

    Reda, Fitsum A; Noble, Jack H; Rivas, Alejandro; McRackan, Theodore R; Labadie, Robert F; Dawant, Benoit M

    2011-10-01

    Cochlear implant surgery is used to implant an electrode array in the cochlea to treat hearing loss. The authors recently introduced a minimally invasive image-guided technique termed percutaneous cochlear implantation. This approach achieves access to the cochlea by drilling a single linear channel from the outer skull into the cochlea via the facial recess, a region bounded by the facial nerve and chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The goal of this work is to automatically segment the facial nerve and chorda tympani in pediatric CT scans. The authors have proposed an automatic technique to achieve the segmentation task in adult patients that relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work, the authors attempted to use the same method to segment the structures in pediatric scans. However, the authors learned that substantial differences exist between the anatomy of children and that of adults, which led to poor segmentation results when an adult model is used to segment a pediatric volume. Therefore, the authors built a new model for pediatric cases and used it to segment pediatric scans. Once this new model was built, the authors employed the same segmentation method used for adults with algorithm parameters that were optimized for pediatric anatomy. A validation experiment was conducted on 10 CT scans in which manually segmented structures were compared to automatically segmented structures. The mean, standard deviation, median, and maximum segmentation errors were 0.23, 0.17, 0.18, and 1.27 mm, respectively. The results indicate that accurate segmentation of the facial nerve and chorda tympani in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.

  13. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    PubMed

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was performed. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. 
The active surface segmentation results were shown to closely approximate manual segmentations.
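    The initialization step above (a voxel-wise temporal maximum of blood-flow speed, then a threshold to obtain an approximate lumen mask) can be sketched as follows; the flat per-voxel representation and the threshold parameter are illustrative simplifications:

```python
import math

def temporal_max_speed(velocity_series):
    """velocity_series[t][v] = (vx, vy, vz) for time frame t, voxel v.

    Returns, per voxel, the maximum speed magnitude over all time frames.
    """
    max_speed = [0.0] * len(velocity_series[0])
    for frame in velocity_series:
        for v, (vx, vy, vz) in enumerate(frame):
            s = math.sqrt(vx * vx + vy * vy + vz * vz)
            if s > max_speed[v]:
                max_speed[v] = s
    return max_speed

def initial_lumen_mask(max_speed, threshold):
    # voxels that were ever fast-flowing approximate the vessel lumen
    return [s >= threshold for s in max_speed]
```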

  14. The Relationship Between Surface Curvature and Abdominal Aortic Aneurysm Wall Stress.

    PubMed

    de Galarreta, Sergio Ruiz; Cazón, Aitor; Antón, Raúl; Finol, Ender A

    2017-08-01

    The maximum diameter (MD) criterion is the most important factor when predicting risk of rupture of abdominal aortic aneurysms (AAAs). An elevated wall stress has also been linked to a high risk of aneurysm rupture, yet it remains uncommon in clinical practice to compute AAA wall stress. The purpose of this study is to assess whether other characteristics of the AAA geometry are statistically correlated with wall stress. Using in-house segmentation and meshing algorithms, 30 patient-specific AAA models were generated for finite element analysis (FEA). These models were subsequently used to estimate wall stress and maximum diameter and to evaluate the spatial distributions of wall thickness, cross-sectional diameter, mean curvature, and Gaussian curvature. Data analysis consisted of statistical correlations of the aforementioned geometry metrics with wall stress for the 30 AAA inner and outer wall surfaces. In addition, a linear regression analysis was performed with all the AAA wall surfaces to quantify the relationship of the geometric indices with wall stress. These analyses indicated that while all the geometry metrics have statistically significant correlations with wall stress, the local mean curvature (LMC) exhibits the highest average Pearson's correlation coefficient for both inner and outer wall surfaces. The linear regression analysis revealed coefficients of determination for the outer and inner wall surfaces of 0.712 and 0.516, respectively, with LMC having the largest effect on the linear regression equation with wall stress. This work underscores the importance of evaluating AAA mean wall curvature as a potential surrogate for wall stress.
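    The statistical machinery above (Pearson correlation of a geometry metric with wall stress, plus an ordinary least-squares fit) reduces to standard formulas; a self-contained sketch with generic variable names is:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx
```

    For a single predictor the coefficient of determination is r²; the study's reported values (0.712 outer, 0.516 inner) come from regressions over the full set of geometric indices, which this sketch does not reproduce.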

  15. Sex Differences in Timeliness of Reperfusion in Young Patients With ST-Segment-Elevation Myocardial Infarction by Initial Electrocardiographic Characteristics.

    PubMed

    Gupta, Aakriti; Barrabes, Jose A; Strait, Kelly; Bueno, Hector; Porta-Sánchez, Andreu; Acosta-Vélez, J Gabriel; Lidón, Rosa-Maria; Spatz, Erica; Geda, Mary; Dreyer, Rachel P; Lorenze, Nancy; Lichtman, Judith; D'Onofrio, Gail; Krumholz, Harlan M

    2018-03-07

    Young women with ST-segment-elevation myocardial infarction experience reperfusion delays more frequently than men. Our aim was to determine the electrocardiographic correlates of delay in reperfusion in young patients with ST-segment-elevation myocardial infarction. We examined sex differences in initial electrocardiographic characteristics among 1359 patients with ST-segment-elevation myocardial infarction in a prospective, observational, cohort study (2008-2012) of 3501 patients with acute myocardial infarction, 18 to 55 years of age, as part of the VIRGO (Variation in Recovery: Role of Gender on Outcomes of Young AMI Patients) study at 103 US and 24 Spanish hospitals enrolling in a 2:1 ratio for women/men. We created a multivariable logistic regression model to assess the relationship between reperfusion delay (door-to-balloon time >90 or >120 minutes for transfer or door-to-needle time >30 minutes) and electrocardiographic characteristics, adjusting for sex, sociodemographic characteristics, and clinical characteristics at presentation. In our study (834 women and 525 men), women were more likely to exceed reperfusion time guidelines than men (42.4% versus 31.5%; P <0.01). In multivariable analyses, female sex persisted as an important factor in exceeding reperfusion guidelines after adjusting for electrocardiographic characteristics (odds ratio, 1.57; 95% CI, 1.15-2.15). Positive voltage criteria for left ventricular hypertrophy and absence of a prehospital ECG were positive predictors of reperfusion delay; and ST elevation in lateral leads was an inverse predictor of reperfusion delay. Sex disparities in timeliness to reperfusion in young patients with ST-segment-elevation myocardial infarction persisted, despite adjusting for initial electrocardiographic characteristics. 
Left ventricular hypertrophy by voltage criteria and absence of a prehospital ECG are strongly positively correlated with reperfusion delay, whereas ST elevation in lateral leads is negatively correlated. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
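    The delay definition the study uses (door-to-balloon >90 minutes, or >120 minutes for transfers, or door-to-needle >30 minutes) can be encoded directly; the function and argument names here are illustrative assumptions:

```python
def reperfusion_delayed(strategy, minutes, transferred=False):
    """Delay definition from the abstract: door-to-balloon > 90 min
    (> 120 min if transferred in) for primary PCI, or door-to-needle
    > 30 min for fibrinolysis."""
    if strategy == "pci":
        return minutes > (120 if transferred else 90)
    if strategy == "fibrinolysis":
        return minutes > 30
    raise ValueError("strategy must be 'pci' or 'fibrinolysis'")
```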

  16. Prognostic value of myocardial ischemia and necrosis in depressed left ventricular function: a multicenter stress cardiac magnetic resonance registry.

    PubMed

    Husser, Oliver; Monmeneu, Jose V; Bonanad, Clara; Lopez-Lereu, Maria P; Nuñez, Julio; Bosch, Maria J; Garcia, Carlos; Sanchis, Juan; Chorro, Francisco J; Bodi, Vicente

    2014-09-01

    The incremental prognostic value of inducible myocardial ischemia over necrosis derived by stress cardiac magnetic resonance in depressed left ventricular function is unknown. We determined the prognostic value of necrosis and ischemia in patients with depressed left ventricular function referred for dipyridamole stress perfusion magnetic resonance. In a multicenter registry using stress magnetic resonance, the presence (≥ 2 segments) of late enhancement and perfusion defects and their association with major events (cardiac death and nonfatal infarction) was determined. In 391 patients, a perfusion defect or late enhancement was present in 224 (57%) and 237 (61%) patients, respectively. During follow-up (median, 96 weeks), 47 major events (12%) occurred: 25 cardiac deaths and 22 myocardial infarctions. Patients with major events displayed a larger extent of perfusion defects (6 segments vs 3 segments; P < .001) but not late enhancement (5 segments vs 3 segments; P = .1). Major event rate was significantly higher in the presence of perfusion defects (17% vs 5%; P = .0005) but not of late enhancement (14% vs 9%; P = .1). Patients were categorized into 4 groups: absence of perfusion defect and absence of late enhancement (n = 124), presence of late enhancement and absence of perfusion defect (n = 43), presence of perfusion defect and presence of late enhancement (n = 195), absence of late enhancement and presence of perfusion defect (n = 29). Event rate was 5%, 7%, 16%, and 24%, respectively (P for trend = .003). In a multivariate regression model, only perfusion defect (hazard ratio = 2.86; 95% confidence interval, 1.37-5.95; P = .002) but not late enhancement (hazard ratio = 1.70; 95% confidence interval, 0.90-3.22; P = .105) predicted events. In depressed left ventricular function, the presence of inducible ischemia is the strongest predictor of major events. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España. All rights reserved.

  17. 123I-IPPA SPECT for the prediction of enhanced left ventricular function after coronary bypass graft surgery. Multicenter IPPA Viability Trial Investigators. 123I-iodophenylpentadecanoic acid.

    PubMed

    Verani, M S; Taillefer, R; Iskandrian, A E; Mahmarian, J J; He, Z X; Orlandi, C

    2000-08-01

    Fatty acids are the prime metabolic substrate for myocardial energy production. Hence, fatty acid imaging may be useful in the assessment of myocardial hibernation. The goal of this prospective, multicenter trial was to assess the use of a fatty acid, 123I-iodophenylpentadecanoic acid (IPPA), to identify viable, hibernating myocardium. Patients (n = 119) with abnormal left ventricular wall motion and a left ventricular ejection fraction (LVEF) < 40% who were already scheduled to undergo coronary artery bypass grafting (CABG) underwent IPPA tomography (rest and 30-min redistribution) and blood-pool radionuclide angiography within 3 d of the scheduled operation. Radionuclide angiography was repeated 6-8 wk after CABG. The study endpoint was a ≥10% increase in LVEF after CABG. The number of IPPA-viable abnormally contracting segments necessary to predict a positive LVEF outcome was determined by receiver operating characteristic (ROC) curves and was included in a logistic regression analysis, together with selected clinical variables. Before CABG, abnormal IPPA tomography findings were seen in 113 of 119 patients (95%), of whom 71 (60%) had redistribution in the 30-min images. The LVEF increased modestly after CABG (from 32% ± 12% to 36% ± 8%, P < 0.001). A ≥10% increase in LVEF after CABG occurred in 27 of 119 patients (23%). By ROC curves, the best predictor of a ≥10% increase in LVEF was the presence of ≥7 IPPA-viable segments (accuracy, 72%; confidence interval, 64%-80%). Among clinical and scintigraphic variables, the single most important predictor also was the number of IPPA-viable segments (P = 0.008). The number of IPPA-viable segments added significant incremental value to the best clinical predictor model. A substantial increase in LVEF occurs after CABG in only a minority of patients (23%) with depressed preoperative function. The number of IPPA-viable segments is useful in predicting a clinically meaningful increase in LVEF.

  18. Validation of automatic segmentation of ribs for NTCP modeling.

    PubMed

    Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob

    2016-03-01

    Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
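    The TOST equivalence test used above can be expressed through its confidence-interval formulation: the dosimetric parameters are equivalent at α ≈ 0.05 if the 90% CI of the mean paired difference lies entirely inside the equivalence margin. The sketch below uses a normal approximation (the real test uses t quantiles) and treats the margin as an assumed input:

```python
import math

def tost_equivalent(diffs, margin, z=1.645):
    """Equivalence via the CI formulation of TOST (normal approximation).

    diffs: paired differences (e.g., manual minus automatic Dx values).
    Equivalent at alpha ~= 0.05 if the 90% CI of the mean difference
    lies entirely within [-margin, +margin].
    """
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    se = sd / math.sqrt(n)
    return (mean - z * se) > -margin and (mean + z * se) < margin
```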

  19. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
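    The Chan-Vese-style comparison described above fits each slice as two regions against the mean intensities inside and outside the (projected) contour. A minimal sketch of that data (fitting) term, using a flat voxel list as an illustrative simplification of an image slice, is:

```python
def chan_vese_data_term(intensities, inside):
    """Chan-Vese fitting energy for one image slice.

    intensities: flat list of voxel intensities in the slice.
    inside: parallel list of booleans, True if the voxel lies inside
    the projected contour. Lower energy means a better two-region fit.
    """
    inner = [v for v, m in zip(intensities, inside) if m]
    outer = [v for v, m in zip(intensities, inside) if not m]
    c_in = sum(inner) / len(inner)    # mean inside the contour
    c_out = sum(outer) / len(outer)   # mean outside the contour
    return (sum((v - c_in) ** 2 for v in inner)
            + sum((v - c_out) ** 2 for v in outer))
```

    A contour that matches the cell gives zero energy on a two-valued slice; a mismatched contour mixes the intensities and the energy rises, which is what drives the level set toward the cell boundary.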

  20. Outcomes of an intervention to improve hospital antibiotic prescribing: interrupted time series with segmented regression analysis.

    PubMed

    Ansari, Faranak; Gray, Kirsteen; Nathwani, Dilip; Phillips, Gabby; Ogston, Simon; Ramsay, Craig; Davey, Peter

    2003-11-01

    To evaluate an intervention to reduce inappropriate use of key antibiotics with interrupted time series analysis. The intervention is a policy for appropriate use of Alert Antibiotics (carbapenems, glycopeptides, amphotericin, ciprofloxacin, linezolid, piperacillin-tazobactam and third-generation cephalosporins) implemented through concurrent, patient-specific feedback by clinical pharmacists. Statistical significance and effect size were calculated by segmented regression analysis of interrupted time series of drug use and cost for 2 years before and after the intervention started. Use of Alert Antibiotics increased before the intervention started but decreased steadily for 2 years thereafter. The changes in slope of the time series were 0.27 defined daily doses/100 bed-days per month (95% CI 0.19-0.34) and £1908 per month (95% CI £1238-£2578). The cost of development, dissemination and implementation of the intervention (£20,133) was well below the most conservative estimate of the reduction in cost (£133,296), which is the lower 95% CI of effect size assuming that cost would not have continued to increase without the intervention. However, if use had continued to increase, the difference between predicted and actual cost of Alert Antibiotics was £572,448 (95% CI £435,696-£709,176) over the 24 months after the intervention started. Segmented regression analysis of pharmacy stock data is a simple, practical and robust method for measuring the impact of interventions to change prescribing. The Alert Antibiotic Monitoring intervention was associated with significant decreases in total use and cost in the 2 years after the programme was implemented. In our hospital, the value of the data far exceeded the cost of processing and analysis.
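    A minimal, self-contained sketch of the segmented (interrupted time-series) regression idea: fit a baseline level and slope plus a post-intervention level change and slope change by ordinary least squares. This is a generic illustration of the technique, not the authors' exact model (which would also handle autocorrelation and seasonality):

```python
def segmented_regression(y, t0):
    """OLS fit of y_t = b0 + b1*t + b2*step_t + b3*(t - t0)*step_t,
    where step_t = 1 for t >= t0 (the intervention month).
    Returns [level, pre-slope, level change, slope change]; the slope
    change b3 is the quantity reported in the abstract."""
    n = len(y)
    X = []
    for t in range(n):
        step = 1.0 if t >= t0 else 0.0
        X.append([1.0, float(t), step, (t - t0) * step])
    k = 4
    # normal equations XtX b = Xty, solved by Gaussian elimination
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for c in range(k):  # forward elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(XtX[r][c]))
        XtX[c], XtX[p] = XtX[p], XtX[c]
        Xty[c], Xty[p] = Xty[p], Xty[c]
        for r in range(c + 1, k):
            f = XtX[r][c] / XtX[c][c]
            for cc in range(c, k):
                XtX[r][cc] -= f * XtX[c][cc]
            Xty[r] -= f * Xty[c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        b[r] = (Xty[r] - sum(XtX[r][c] * b[c] for c in range(r + 1, k))) / XtX[r][r]
    return b
```

    On a series that rises before the intervention and falls afterward, the fitted slope change is negative, mirroring the reversal reported for Alert Antibiotic use.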

  1. NiftyNet: a deep-learning platform for medical imaging.

    PubMed

    Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom

    2018-05-01

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. 
The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Linking Lung Airway Structure to Pulmonary Function via Composite Bridge Regression

    PubMed Central

    Chen, Kun; Hoffman, Eric A.; Seetharaman, Indu; Jiao, Feiran; Lin, Ching-Long; Chan, Kung-Sik

    2017-01-01

    The human lung airway is a complex inverted tree-like structure. Detailed airway measurements can be extracted from MDCT-scanned lung images, such as segmental wall thickness, airway diameter, parent-child branch angles, etc. The wealth of lung airway data provides a unique opportunity for advancing our understanding of the fundamental structure-function relationships within the lung. An important problem is to construct and identify important lung airway features in normal subjects and connect these to standardized pulmonary function test results such as FEV1%. Among other things, the problem is complicated by the fact that a particular airway feature may be an important (relevant) predictor only when it pertains to segments of certain generations. Thus, the key is an efficient, consistent method for simultaneously conducting group selection (lung airway feature types) and within-group variable selection (airway generations), i.e., bi-level selection. Here we streamline a comprehensive procedure to process the lung airway data via imputation, normalization, transformation and groupwise principal component analysis, and then adopt a new composite penalized regression approach for conducting bi-level feature selection. As a prototype of composite penalization, the proposed composite bridge regression method is shown to admit an efficient algorithm, enjoy bi-level oracle properties, and outperform several existing methods. We analyze MDCT lung image data from a cohort of 132 subjects with normal lung function. Our results show that lung function in terms of FEV1% is promoted by having a less dense and more homogeneous lung comprising airway segments with more heterogeneous wall thickness and larger mean diameters, lumen areas and branch angles. These data hold the potential to define more accurately the “normal” subject population with borderline atypical lung function that is clearly influenced by many genetic and environmental factors. PMID:28280520
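    The procedure above reduces each group of airway measurements with groupwise principal component analysis before the composite penalized regression step. A minimal numpy sketch of that preprocessing idea follows; the data, group layout and component count are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def groupwise_pca(X, groups, n_components=1):
    """Reduce each feature group to its leading principal components.

    X       : (n_samples, n_features) data matrix
    groups  : list of column-index arrays, one per feature group
    returns : (n_samples, n_groups * n_components) reduced matrix
    """
    blocks = []
    for idx in groups:
        Xg = X[:, idx] - X[:, idx].mean(axis=0)   # center each group
        # SVD gives the principal directions; keep the leading ones
        U, S, Vt = np.linalg.svd(Xg, full_matrices=False)
        blocks.append(Xg @ Vt[:n_components].T)
    return np.hstack(blocks)

# Toy data: two groups of strongly correlated "airway" features.
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(2, 100))
X = np.column_stack([z1, z1 + 0.01 * rng.normal(size=100),
                     z2, z2 + 0.01 * rng.normal(size=100)])
Xr = groupwise_pca(X, [np.array([0, 1]), np.array([2, 3])])
print(Xr.shape)  # (100, 2)
```

The reduced matrix would then feed the bi-level (group plus within-group) penalized fit.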

  3. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    PubMed

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance with segmentation accuracy superior to most state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Automated red blood cells extraction from holographic images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-10-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
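    The key idea of the second model, using predicted foreground as internal markers to split touching cells, can be illustrated without the FCN. The sketch below is a simplification: it derives markers from distance-transform peaks of a synthetic mask and assigns each foreground pixel to the nearest marker, a stand-in for the full marker-controlled watershed flooding:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic "foreground mask": two touching disks standing in for RBCs.
yy, xx = np.mgrid[0:60, 0:90]
mask = ((yy - 30) ** 2 + (xx - 30) ** 2 < 15 ** 2) | \
       ((yy - 30) ** 2 + (xx - 58) ** 2 < 15 ** 2)

# Internal markers: strong peaks of the distance transform (cell centres).
dist = ndi.distance_transform_edt(mask)
markers, n_markers = ndi.label(dist > 0.7 * dist.max())

# Simplified marker-controlled split: assign every foreground pixel to the
# nearest marker region (replacing the watershed flooding step).
_, (iy, ix) = ndi.distance_transform_edt(markers == 0, return_indices=True)
labels = np.where(mask, markers[iy, ix], 0)

print(n_markers, len(np.unique(labels)) - 1)  # 2 2
```

In the paper's pipeline the markers would come from the FCN-predicted foreground rather than a synthetic mask.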

  5. Automated red blood cells extraction from holographic images using fully convolutional neural networks

    PubMed Central

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-01-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. PMID:29082078

  6. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows to reliably segment the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine adjusted by automatic iterative adaptation to image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat suppressed datasets and demonstrated the sensitivity of 83+/-6% compared to manual segmentation on a per voxel basis as primary endpoint. Gross cartilage volume measurement yielded an average error of 9+/-7% as secondary endpoint. For cartilage being a thin structure, already small deviations in distance result in large errors on a per voxel basis, rendering the primary endpoint a hard criterion.

  7. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
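    The packaging-density aspect of such an optimization can be sketched with a classic first-fit-decreasing heuristic. The weights and capacity below are invented, and a real D&D planner would also optimize cut count, orientation and dose, which this sketch ignores:

```python
def pack_segments(weights, capacity):
    """First-fit-decreasing sketch: place segmented items into as few
    containers as the heuristic finds, respecting a per-container limit."""
    bins = []                      # each bin is a list of item weights
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)        # fits in an already-open container
                break
        else:
            bins.append([w])       # open a new container
    return bins

segments = [7.0, 5.5, 4.0, 3.5, 3.0, 2.0]
print(pack_segments(segments, capacity=10.0))
# [[7.0, 3.0], [5.5, 4.0], [3.5, 2.0]]
```

First-fit-decreasing is a greedy approximation; the patented process uses simulation-driven optimization over cuts and orientations as well.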

  8. A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks

    PubMed Central

    Wang, Changjian; Liu, Xiaohui; Jin, Shiyao

    2018-01-01

    Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use artificial image features to complete the task without large amounts of labeled data. Meanwhile, the methods based on deep neural networks can extract image features effectively without artificial design, but require large amounts of training data. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm designed in this paper to highlight image features. The preprocessed images are then segmented by deep neural networks, and finally semantic corrections are applied to the segmentation results. The model shows good performance in our experiments. PMID:29955227

  9. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    NASA Astrophysics Data System (ADS)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on Nyström (SCN) with GACBS, the algorithm first extracts key image slices, then uses these slices with an interpolation algorithm to generate initial contours of the pulmonary parenchyma in the un-segmented slices, and finally segments the lung parenchyma in those slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
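    The interpolation step, generating an initial contour for un-segmented slices from segmented key slices, can be sketched by blending signed distance maps of two key-slice masks. This is a common contour-interpolation trick, not necessarily the exact algorithm of the paper, and the disk masks are synthetic:

```python
import numpy as np
from scipy import ndimage as ndi

def sdf(mask):
    """Signed distance function: negative inside the mask, positive outside."""
    inside = ndi.distance_transform_edt(mask)
    outside = ndi.distance_transform_edt(~mask)
    return outside - inside

def interpolate_contour(mask_a, mask_b, alpha=0.5):
    """Blend the signed distance maps of two key slices to get an
    initial contour for an intermediate, un-segmented slice."""
    blended = (1 - alpha) * sdf(mask_a) + alpha * sdf(mask_b)
    return blended < 0

# Two concentric-disk "key slices" standing in for lung cross-sections.
yy, xx = np.mgrid[0:64, 0:64]
small = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
large = (yy - 32) ** 2 + (xx - 32) ** 2 < 16 ** 2
mid = interpolate_contour(small, large)
print(small.sum() < mid.sum() < large.sum())  # True
```

The interpolated mask would then initialize the GACBS evolution on the intermediate slice.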

  10. Impact of touring, performance schedule, and definitions on 1-year injury rates in a modern dance company.

    PubMed

    Bronner, Shaw; Wood, Lily

    2017-11-01

    There is ongoing debate about how to define injury in dance: with the most encompassing definition or a time-loss definition. We examined the effects of touring, performance schedule and injury definition on injury rates in a professional modern dance company over one year. In-house healthcare management tracked 35 dancers for work-related musculoskeletal injuries (WMSI), time-loss injuries (TLinj), complaints, and exposure. The year was divided into 6 segments to allow comparison of the effects of performance, rehearsal, and touring. Injuries/segment were converted into injuries/1000-h dance exposure. We conducted negative binomial regression analysis to determine differences between segments, P ≤ 0.05. Twenty WMSI, 0.44 injuries/1000-h, were sustained over one year. WMSI were 6 times more likely to occur in Segment-6 compared with other segments (incident rate ratio = 6.055, P = 0.031). The highest rates of TLinj and traumatic injuries also occurred in Segment-6, reflecting concentrated rehearsal, the New York season and performances abroad. More overuse injuries occurred in Segment-2, an international tour, attributed to raked stages. Lack of methods to quantify performance other than injury may mask the effects of touring on dancers' well-being. Tracking complaints permits understanding of stressors to specific body regions and healthcare utilisation; however, TLinj remain the most important injuries to track because they impact other dancers and organisational costs.
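    The exposure-normalized rates behind such comparisons are simple to reproduce. The counts below are hypothetical, and the paper's significance testing used negative binomial regression rather than this raw incident rate ratio:

```python
def rate_per_1000h(injuries, exposure_hours):
    """Convert an injury count to injuries per 1000 h of dance exposure."""
    return 1000.0 * injuries / exposure_hours

# Hypothetical counts and exposures for two schedule blocks.
seg6_rate = rate_per_1000h(8, 6500.0)      # concentrated rehearsal + tours
other_rate = rate_per_1000h(12, 39000.0)   # remaining five segments pooled

irr = seg6_rate / other_rate               # incident rate ratio, Segment-6 vs rest
print(round(seg6_rate, 2), round(other_rate, 2), round(irr, 2))  # 1.23 0.31 4.0
```

Normalizing by exposure is what lets segments of very different length and workload be compared at all.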

  11. Reliability of a Seven-Segment Foot Model with Medial and Lateral Midfoot and Forefoot Segments During Walking Gait.

    PubMed

    Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L

    2016-12-01

    In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, stance ROM and time-series patterns were generally consistent between the current study and previous invasive in-vivo studies that assessed actual bone motion.
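    A common within-day form of the coefficient of multiple correlation used in such reliability studies (a Kadaba-style formulation; the exact variant in the paper may differ) can be sketched as follows, on synthetic waveforms:

```python
import numpy as np

def cmc(trials):
    """Coefficient of multiple correlation across repeated gait trials.

    trials : (T, F) array of T trials sampled at F points of the gait cycle.
    Uses CMC = sqrt(1 - MS_within / MS_total), where MS_within is variance
    about the frame-wise mean curve and MS_total is variance about the
    grand mean.
    """
    trials = np.asarray(trials, dtype=float)
    T, F = trials.shape
    frame_mean = trials.mean(axis=0)        # mean curve across trials
    grand_mean = trials.mean()
    ms_within = ((trials - frame_mean) ** 2).sum() / (F * (T - 1))
    ms_total = ((trials - grand_mean) ** 2).sum() / (T * F - 1)
    # guard: uncorrelated trials can push the ratio slightly above 1
    return np.sqrt(max(0.0, 1.0 - ms_within / ms_total))

# Repeatable waveform (high CMC) vs. unrelated noise (low CMC).
t = np.linspace(0, 1, 101)
rng = np.random.default_rng(42)
consistent = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=(5, 101))
inconsistent = rng.normal(size=(5, 101))
print(cmc(consistent) > 0.95, cmc(inconsistent) < 0.5)  # True True
```

Identical trials give a CMC of exactly 1, which is why high values indicate a repeatable articulation waveform.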

  12. Clinical Significance of Reciprocal ST-Segment Changes in Patients With STEMI: A Cardiac Magnetic Resonance Imaging Study.

    PubMed

    Hwang, Ji-Won; Yang, Jeong Hoon; Song, Young Bin; Park, Taek Kyu; Lee, Joo Myung; Kim, Ji-Hwan; Jang, Woo Jin; Choi, Seung-Hyuk; Hahn, Joo-Yong; Choi, Jin-Ho; Ahn, Joonghyun; Carriere, Keumhee; Lee, Sang Hoon; Gwon, Hyeon-Cheol

    2018-02-22

    We sought to determine the association of reciprocal change in the ST-segment with myocardial injury assessed by cardiac magnetic resonance (CMR) in patients with ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PCI). We performed CMR imaging in 244 patients who underwent primary PCI for their first STEMI; CMR was performed a median of 3 days after primary PCI. The first electrocardiogram was analyzed, and patients were stratified according to the presence of reciprocal change. The primary outcome was infarct size measured by CMR. Secondary outcomes were area at risk and myocardial salvage index. Patients with reciprocal change (n=133, 54.5%) had a lower incidence of anterior infarction (27.8% vs 71.2%, P < .001) and shorter symptom onset-to-balloon time (221.5±169.8 vs 289.7±337.3 min, P=.042). Using a multiple linear regression model, we found that patients with reciprocal change had a larger area at risk (P=.002) and a greater myocardial salvage index (P=.04) than patients without reciprocal change. Consequently, myocardial infarct size was not significantly different between the 2 groups (P=.14). The rate of major adverse cardiovascular events, including all-cause death, myocardial infarction, and repeat coronary revascularization, was similar between the 2 groups after 2 years of follow-up (P=.92). Reciprocal ST-segment change was associated with a larger extent of ischemic myocardium at risk and more myocardial salvage, but not with final infarct size or adverse clinical outcomes, in STEMI patients undergoing primary PCI. Copyright © 2018 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  13. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model was employed for image modelling, and its parameters were learned by iterating the Expectation Maximization algorithm. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform with the sparse representation algorithm, showing that segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818

  14. Automated segmentation of the prostate in 3D MR images using a probabilistic atlas and a spatially constrained deformable model.

    PubMed

    Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent

    2010-04-01

    The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking in an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model comparatively to a deformable surface that is only driven by an image appearance model.
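    The Dice similarity coefficient reported above is a standard overlap measure between a segmentation and a manual reference; a minimal sketch on toy 1-D masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 1-D "masks": ground truth vs. a segmentation shifted by two voxels.
truth = np.zeros(20, bool); truth[5:15] = True
seg   = np.zeros(20, bool); seg[7:17]  = True
print(dice(truth, seg))  # 0.8
```

The same function applies unchanged to 3D volumes, since the sums run over all voxels.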

  15. Model-based segmentation of the facial nerve and chorda tympani in pediatric CT scans

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Noble, Jack H.; Rivas, Alejandro; Labadie, Robert F.; Dawant, Benoit M.

    2011-03-01

    In image-guided cochlear implant surgery an electrode array is implanted in the cochlea to treat hearing loss. Access to the cochlea is achieved by drilling from the outer skull to the cochlea through the facial recess, a region bounded by the facial nerve and the chorda tympani. To exploit existing methods for automatically computing safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The effectiveness of traditional segmentation approaches for this task is severely limited because the facial nerve and chorda are small structures (~1 mm and ~0.3 mm in diameter, respectively) and exhibit poor image contrast. We have recently proposed a technique to achieve this task in adult patients, which relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work we use the same method to segment pediatric scans. We show that substantial differences exist between the anatomy of children and the anatomy of adults, which lead to poor segmentation results when an adult model is used to segment a pediatric volume. We have built a new model for pediatric cases and applied it to ten scans. A leave-one-out validation experiment was conducted in which manually segmented structures were compared to automatically segmented structures. The maximum segmentation error was 1 mm. This result indicates that accurate segmentation of the facial nerve and chorda in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.

  16. A robust and fast active contour model for image segmentation with intensity inhomogeneity

    NASA Astrophysics Data System (ADS)

    Ding, Keyan; Weng, Guirong

    2018-04-01

    In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the evolution of the curve, the proposed model can effectively segment images with intensity inhomogeneity. The computational cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency than some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.

  17. Fast and robust segmentation of the striatum using deep convolutional neural networks.

    PubMed

    Choi, Hongyoon; Jin, Kyong Hwan

    2016-12-01

    Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNNs). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global and a Local CNN: the Global CNN determined approximate locations of the striatum by performing a regression of input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted. The cropped MR volumes and the output volumes of the Global CNN were used as inputs to the Local CNN, which predicted the accurate label of all voxels. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested on another independent dataset, which showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. In comparison with the existing method, segmentation performance of our proposed approach was comparable with that of FreeSurfer, and its running time was approximately three seconds. We suggest this fast and accurate deep CNN-based segmentation for small brain structures, which can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
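    The Global-stage idea, locating the structure coarsely and cropping a window for the Local stage, can be sketched with a center-of-mass crop. This is a simplification of the paper's regression-based localization; the blob, smoothing and window size are invented, and boundary handling is kept minimal:

```python
import numpy as np
from scipy import ndimage as ndi

def crop_around(volume, prob_map, size):
    """Global-stage sketch: locate a structure from a coarse probability
    map and crop a fixed-size window around it for a local stage."""
    cy, cx = (int(round(c)) for c in ndi.center_of_mass(prob_map))
    half = size // 2
    ys = slice(max(0, cy - half), max(0, cy - half) + size)
    xs = slice(max(0, cx - half), max(0, cx - half) + size)
    return volume[ys, xs]

# Toy 2-D "MR slice" with a bright blob standing in for the striatum.
img = np.zeros((64, 64))
img[40:48, 20:28] = 1.0
prob = ndi.gaussian_filter(img, 3)       # smoothed segmentation map
patch = crop_around(img, prob, size=16)
print(patch.shape, patch.sum() == img.sum())
```

The cropped patch, much smaller than the full image, is what makes the fine-grained second stage cheap.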

  18. Semantic Image Segmentation with Contextual Hierarchical Models.

    PubMed

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford Background and Weizmann Horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU Depth dataset and achieves state-of-the-art results on the Berkeley Segmentation Dataset (BSDS 500).

  19. Trends in Global Vegetation Activity and Climatic Drivers Indicate a Decoupled Response to Climate Change

    PubMed Central

    Schut, Antonius G. T.; Ivits, Eva; Conijn, Jacob G.; ten Brink, Ben; Fensholt, Rasmus

    2015-01-01

    Detailed understanding of a possible decoupling between the climatic drivers of plant productivity and the response of ecosystem vegetation is required. We compared trends in six NDVI metrics (1982–2010) derived from the GIMMS3g dataset with modelled biomass productivity and assessed uncertainty in trend estimates. Annual total biomass weight (TBW) was calculated with the LINPAC model. Trends were determined using a simple linear regression, a Theil-Sen median slope and a piecewise regression (PWR) with two segments. Values of NDVI metrics were related to Net Primary Production (MODIS-NPP) and TBW per biome and land-use type. The simple linear and Theil-Sen trends did not differ much, whereas PWR increased the fraction of explained variation, depending on the NDVI metric considered. A positive trend in TBW, indicating more favorable climatic conditions, was found for 24% of pixels on land, and a negative trend for 5%. A decoupled trend, indicating a positive TBW trend with a monotonic negative or segmented and negative NDVI trend, was observed for 17–36% of all productive areas depending on the NDVI metric used. For only 1–2% of all pixels in productive areas, a diverging and greening trend was found despite a strong negative trend in TBW. The choice of NDVI metric strongly affected outcomes on regional scales, and differences in the fraction of explained variation in MODIS-NPP between biomes were large; a combination of NDVI metrics is therefore recommended for global studies. We found an increasing difference between trends in climatic drivers and observed NDVI for large parts of the globe. Our findings suggest that future scenarios must consider the impacts of constraints on plant growth, such as extremes in weather and nutrient availability, to predict changes in NPP and CO2 sequestration capacity. PMID:26466347
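    A two-segment piecewise regression of the kind used for the NDVI trends can be fitted by a grid search over the breakpoint with a continuous hinge basis; the series below is synthetic, not GIMMS3g data:

```python
import numpy as np

def two_segment_fit(x, y):
    """Fit a continuous two-segment linear model by grid search over the
    breakpoint; returns (breakpoint, coefficients, SSE)."""
    best = (None, None, np.inf)
    for bp in x[2:-2]:                       # interior candidates only
        # hinge basis: y = b0 + b1*x + b2*max(0, x - bp)
        H = np.column_stack([np.ones_like(x), x, np.maximum(0, x - bp)])
        coef, *_ = np.linalg.lstsq(H, y, rcond=None)
        sse = ((H @ coef - y) ** 2).sum()
        if sse < best[2]:
            best = (bp, coef, sse)
    return best

# Synthetic NDVI-like series: greening until 1998, browning afterwards.
years = np.arange(1982, 2011, dtype=float)
ndvi = np.where(years <= 1998, 0.02 * (years - 1982),
                0.32 - 0.01 * (years - 1998))
bp, coef, sse = two_segment_fit(years, ndvi)
print(bp)  # 1998.0
```

The hinge coefficient `coef[2]` is the slope change at the breakpoint, which is what distinguishes a segmented trend from a monotonic one.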

  20. Stereophotogrammetric Mass Distribution Parameter Determination of the Lower Body Segments for Use in Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because regression equations developed on one population predict inertial properties with limited accuracy when applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape between their contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  21. Eye Movements Reveal the Influence of Event Structure on Reading Behavior.

    PubMed

    Swets, Benjamin; Kurby, Christopher A

    2016-03-01

    When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer for event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading. Copyright © 2015 Cognitive Science Society, Inc.

  2. An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation

    NASA Astrophysics Data System (ADS)

    He, Fuliang; Guo, Yongcai; Gao, Chao

    2017-12-01

    Pulse coupled neural network (PCNN) has become a significant tool for infrared pedestrian segmentation, and a variety of relevant methods have been developed. However, existing models commonly suffer from poor robustness to infrared noise, inaccurate segmentation results, and fairly complex parameter determination. This paper presents an improved PCNN model that integrates a simplified framework and spectral residual to alleviate these problems. In this model, firstly, the weight matrix of the feeding input field is designed using anisotropic Gaussian kernels (ANGKs) to suppress infrared noise effectively. Secondly, the normalized spectral residual saliency is introduced as the linking coefficient to markedly enhance the edges and structural characteristics of segmented pedestrians. Finally, an improved dynamic threshold based on the average gray value of the iterative segmentation is employed to simplify the original PCNN model. Experiments on the IEEE OTCBVS benchmark and an infrared pedestrian image database built by our laboratory demonstrate the superiority of our model over other classic segmentation methods in both subjective visual effects and objective quantitative evaluations of information differences and segmentation errors.
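
The core PCNN mechanism (feeding input, linking from neighbour firings, internal activity, and a decaying dynamic threshold) can be illustrated on a toy image. This is a generic simplified-PCNN sketch; the parameter values, the 3x3 linking neighbourhood, and the exponential-style threshold decay are illustrative assumptions, not the paper's ANGK/spectral-residual formulation:

```python
# Simplified PCNN segmentation sketch (pure Python, no dependencies).
def spcnn_segment(img, beta=0.4, decay=0.7, v=20.0, iters=4):
    """img: 2D list of gray values in [0, 255]. Returns binary fire map."""
    h, w = len(img), len(img[0])
    fired = [[0] * w for _ in range(h)]      # cumulative output map
    theta = [[255.0] * w for _ in range(h)]  # dynamic threshold E
    y = [[0] * w for _ in range(h)]          # last-step firings Y
    for _ in range(iters):
        new_y = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                # linking input: last-step firings in the 3x3 neighbourhood
                link = sum(y[a][b]
                           for a in range(max(0, i - 1), min(h, i + 2))
                           for b in range(max(0, j - 1), min(w, j + 2)))
                u = img[i][j] * (1.0 + beta * link)  # internal activity
                if u > theta[i][j]:
                    new_y[i][j] = 1
                    fired[i][j] = 1
        # threshold decays everywhere, then jumps where neurons fired
        for i in range(h):
            for j in range(w):
                theta[i][j] = decay * theta[i][j] + v * new_y[i][j]
        y = new_y
    return fired

# bright pedestrian-like blob (200) on a dark background (30)
img = [[200 if 1 <= i <= 2 and 1 <= j <= 2 else 30 for j in range(5)]
       for i in range(5)]
seg = spcnn_segment(img)
```

As the threshold decays, the bright blob fires first and its firing is reinforced through the linking term, while the dark background stays below threshold for the few iterations run here.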

  3. Modeling and clustering water demand patterns from real-world smart meter data

    NASA Astrophysics Data System (ADS)

    Cheifetz, Nicolas; Noumir, Zineb; Samé, Allou; Sandraz, Anne-Claire; Féliers, Cédric; Heim, Véronique

    2017-08-01

    Nowadays, drinking water utilities need an acute comprehension of the water demand on their distribution network, in order to efficiently operate the optimization of resources, manage billing and propose new customer services. With the emergence of smart grids based on automated meter reading (AMR), a finer-grained understanding of consumption patterns is now accessible for smart cities. In this context, this paper evaluates a novel methodology for identifying relevant usage profiles from the water consumption data produced by smart meters. The methodology is fully data-driven, using consumption time series that are seen as functions or curves observed with an hourly time step. First, a Fourier-based additive time series decomposition model is introduced to extract seasonal patterns from the time series. These patterns are intended to represent the customer habits in terms of water consumption. Two functional clustering approaches are then used to classify the extracted seasonal patterns: the functional version of K-means, and the Fourier REgression Mixture (FReMix) model. The K-means approach produces a hard segmentation and K representative prototypes. On the other hand, the FReMix is a generative model and also produces K profiles, as well as a soft segmentation based on the posterior probabilities. The proposed approach is applied to a smart grid deployed on the largest water distribution network (WDN) in France. The two clustering strategies are evaluated and compared. Finally, a realistic interpretation of the consumption habits is given for each cluster. The extensive experiments and the qualitative interpretation of the resulting clusters highlight the effectiveness of the proposed methodology.
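
The pipeline above (extract a seasonal pattern per meter, then cluster the patterns) can be sketched as follows. This is a minimal illustration assuming hourly series and a daily period: the average daily profile is smoothed by keeping a few Fourier harmonics, and a plain hard K-means stands in for both the functional K-means and the FReMix model:

```python
import math

def seasonal_pattern(series, period=24, harmonics=3):
    """Average the series over its period, then keep the first few
    Fourier harmonics of that daily profile (a crude additive model)."""
    cycles = len(series) // period
    prof = [sum(series[c * period + t] for c in range(cycles)) / cycles
            for t in range(period)]
    out = []
    for t in range(period):
        v = sum(prof) / period  # mean (DC) term
        for k in range(1, harmonics + 1):
            a = sum(prof[s] * math.cos(2 * math.pi * k * s / period)
                    for s in range(period)) * 2 / period
            b = sum(prof[s] * math.sin(2 * math.pi * k * s / period)
                    for s in range(period)) * 2 / period
            v += (a * math.cos(2 * math.pi * k * t / period)
                  + b * math.sin(2 * math.pi * k * t / period))
        out.append(v)
    return out

def kmeans(patterns, k, iters=20):
    """Plain (hard) K-means over the extracted seasonal patterns."""
    cents = [list(p) for p in patterns[:k]]  # naive deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in patterns:
            j = min(range(k), key=lambda i: sum((x - y) ** 2
                                                for x, y in zip(p, cents[i])))
            groups[j].append(p)
        cents = [[sum(col) / len(col) for col in zip(*g)] if g else c
                 for g, c in zip(groups, cents)]
    return cents

# a morning-peaking household, 3 days of hourly readings
daily = [10 + 5 * math.cos(2 * math.pi * (t % 24 - 8) / 24) for t in range(72)]
pattern = seasonal_pattern(daily)  # smoothed 24-hour habit profile
```

A soft (FReMix-style) segmentation would replace the hard assignment with posterior probabilities under a mixture of Fourier regression models; the hard version is kept here for brevity.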

  4. Extending a prototype knowledge- and object-based image analysis model to coarser spatial resolution imagery: an example from the Missouri River

    USGS Publications Warehouse

    Strong, Laurence L.

    2012-01-01

    A prototype knowledge- and object-based image analysis model was developed to inventory and map least tern and piping plover habitat on the Missouri River, USA. The model has been used to inventory the state of sandbars annually for 4 segments of the Missouri River since 2006 using QuickBird imagery. Interpretation of the state of sandbars is difficult when images for the segment are acquired at different river stages and different states of vegetation phenology and canopy cover. Concurrent QuickBird and RapidEye images were classified using the model and the spatial correspondence of classes in the land cover and sandbar maps were analysed for the spatial extent of the images and at nest locations for both bird species. Omission and commission errors were low for unvegetated land cover classes used for nesting by both bird species and for land cover types with continuous vegetation cover and water. Errors were larger for land cover classes characterized by a mixture of sand and vegetation. Sandbar classification decisions are made using information on land cover class proportions and disagreement between sandbar classes was resolved using fuzzy membership possibilities. Regression analysis of area for a paired sample of 47 sandbars indicated an average positive bias, 1.15 ha, for RapidEye that did not vary with sandbar size. RapidEye has potential to reduce temporal uncertainty about least tern and piping plover habitat but would not be suitable for mapping sandbar erosion, and characterization of sandbar shapes or vegetation patches at fine spatial resolution.

  6. Prostate malignancy grading using gland-related shape descriptors

    NASA Astrophysics Data System (ADS)

    Braumann, Ulf-Dietrich; Scheibe, Patrick; Loeffler, Markus; Kristiansen, Glen; Wernert, Nicolas

    2014-03-01

    A proof-of-principle study was conducted to assess the descriptive potential of two simple geometric measures (shape descriptors) applied to sets of segmented glands within images of 125 prostate cancer tissue sections. The measures addressing glandular shape were (i) inverse solidity and (ii) inverse compactness. Using a classifier based on logistic regression, Gleason grades 3 and 4/5 could be differentiated with an accuracy of approximately 95%. The results suggest not only good discriminatory properties but also robustness against gland segmentation variations. False classifications were partly caused by inadvertent Gleason grade assignments, as a posteriori re-inspections revealed.
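
Solidity and compactness have standard definitions for a segmented gland outline (area over convex-hull area; 4*pi*area over squared perimeter), and the abstract's "inverse" variants simply flip those ratios so that larger values mean more irregular glands. A sketch under those common definitions (the paper's exact formulas are not given in the abstract):

```python
import math

def _area(poly):
    """Shoelace area of a polygon given as [(x, y), ...]."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))
    return abs(s) / 2.0

def _perimeter(poly):
    return sum(math.dist(p, q) for p, q in zip(poly, poly[1:] + poly[:1]))

def _convex_hull(points):
    """Andrew's monotone chain convex hull (counter-clockwise)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and (
                    (out[-1][0] - out[-2][0]) * (p[1] - out[-2][1])
                    - (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    return half(pts) + half(pts[::-1])

def inverse_solidity(poly):
    """Hull area / gland area: 1 for convex glands, > 1 with concavities."""
    return _area(_convex_hull(poly)) / _area(poly)

def inverse_compactness(poly):
    """Perimeter^2 / (4*pi*area): 1 for a circle, larger for ragged shapes."""
    return _perimeter(poly) ** 2 / (4.0 * math.pi * _area(poly))
```

Both descriptors are scale-invariant, which is one reason such measures are robust to moderate segmentation variations.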

  7. The ASAC Flight Segment and Network Cost Models

    NASA Technical Reports Server (NTRS)

    Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.

    1997-01-01

    To assist NASA in identifying research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or be used to independently estimate the variable operating costs of flying a fleet of equipment given the number of departures and average flight stage lengths.

  8. Identification and experimental validation of damping ratios of different human body segments through anthropometric vibratory model in standing posture.

    PubMed

    Gupta, T C

    2007-08-01

    A 15 degrees of freedom lumped parameter vibratory model of the human body is developed, for vertical mode vibrations, using anthropometric data of the 50th percentile US male. The mass and stiffness of the various segments are determined from the elastic moduli of bones and tissues and from the available anthropometric data, assuming that the shape of all segments is ellipsoidal. The damping ratio of each segment is estimated on the basis of the physical structure of the body in a particular posture, and the damping constants of the segments are calculated from these damping ratios. The human body is modeled as a linear spring-mass-damper system. The optimal values of the damping ratios of the body segments are estimated, for the 15 degrees of freedom model of the 50th percentile US male, by comparing the response of the model with the experimental response. Formulating a similar vibratory model of the 50th percentile Indian male and comparing the frequency response of the model with the experimental response of the same group of subjects validates the modeling procedure. A range of damping ratios has been considered to develop a vibratory model that can predict the vertical harmonic response of the human body.
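
For a single element of such a lumped spring-mass-damper model, the damping ratio and the vertical harmonic response follow from the standard second-order relations zeta = c / (2*sqrt(k*m)) and |H(w)| = 1 / sqrt((k - m*w^2)^2 + (c*w)^2). A generic single-element sketch (the paper's 15-DOF model couples many such elements; the numeric values here are illustrative):

```python
import math

def damping_ratio(c, m, k):
    """zeta = c / (2*sqrt(k*m)) for a mass-spring-damper element."""
    return c / (2.0 * math.sqrt(k * m))

def damping_constant(zeta, m, k):
    """Invert the relation: c = 2*zeta*sqrt(k*m)."""
    return 2.0 * zeta * math.sqrt(k * m)

def harmonic_response(m, c, k, omega):
    """Steady-state displacement amplitude per unit harmonic force,
    |H(w)| = 1 / sqrt((k - m*w^2)^2 + (c*w)^2)."""
    return 1.0 / math.sqrt((k - m * omega ** 2) ** 2 + (c * omega) ** 2)

# Illustrative torso-like element: 40 kg on an 80 kN/m spring
m, k = 40.0, 80e3
c = damping_constant(0.3, m, k)   # 30% of critical damping
wn = math.sqrt(k / m)             # undamped natural frequency, rad/s
peak = harmonic_response(m, c, k, wn)
```

Fitting the model response to measured transmissibility, as the study does, amounts to choosing the zeta values so that curves like |H(w)| match the experimental ones.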

  9. Three-dimensional quadratic modeling and quantitative evaluation of the diaphragm on a volumetric CT scan in patients with chronic obstructive pulmonary disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yongjun

    Purpose: In patients with chronic obstructive pulmonary disease (COPD), diaphragm function may deteriorate due to reduced muscle fiber length. Quantitative analysis of the morphology of the diaphragm is therefore important. In the current study, the authors propose a diaphragm segmentation method for COPD patients that uses volumetric chest computed tomography (CT) data, and they provide a quantitative analysis of the diaphragmatic dimensions. Methods: Volumetric CT data were obtained from 30 COPD patients and 10 normal controls using a 16-row multidetector CT scanner (Siemens Sensation 16) with 0.75-mm collimation. Diaphragm segmentation using 3D ray projections on the lower surface of the lungs was performed to identify the draft diaphragmatic lung surface, which was modeled using quadratic 3D surface fitting and robust regression in order to minimize the effects of segmentation error and parameterize diaphragm morphology. This result was visually evaluated by an expert thoracic radiologist. To take the shape features of the diaphragm into consideration, several quantification parameters were measured using in-house software and compared with the pulmonary function test (PFT) results: the shape index on the apex (SIA, computed with the gradient set to 0), the principal curvatures on the apex of the fitted diaphragm surface (CA), the height between the apex and the base plane (H), the diaphragm lengths along the x-, y-, and z-axes (XL, YL, ZL), the quadratic-fitted diaphragm length on the z-axis (FZL), the average curvature (C), and the surface area (SA). Results: The overall accuracy of the combined segmentation method was 97.22% ± 4.44%, while the visual accuracy of the models for the segmented diaphragms was 95.28% ± 2.52% (mean ± SD). The quantitative parameters SIA, CA, H, XL, YL, ZL, FZL, C, and SA were 0.85 ± 0.05 mm⁻¹, 0.01 ± 0.00 mm⁻¹, 17.93 ± 10.78 mm, 129.80 ± 11.66 mm, 163.19 ± 13.45 mm, 71.27 ± 17.52 mm, 61.59 ± 16.98 mm, 0.01 ± 0.00 mm⁻¹, and 34 380.75 ± 6680.06 mm², respectively. Several parameters were correlated with the PFT parameters. Conclusions: The authors propose an automatic method for quantitatively evaluating the morphological parameters of the diaphragm on volumetric chest CT in COPD patients. By measuring not only the conventional length and surface area but also the shape features of the diaphragm using quadratic 3D surface modeling, the proposed method is especially useful for quantifying diaphragm characteristics. Their method may be useful for assessing morphological diaphragmatic changes in COPD patients.
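
The quadratic 3D surface fitting step can be sketched with ordinary least squares: fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to the diaphragm surface points via the normal equations. This is an illustrative stand-in for the paper's method (the robust-regression reweighting that suppresses segmentation outliers is omitted):

```python
# Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.
def fit_quadratic_surface(pts):
    """pts: list of (x, y, z) samples; returns [a, b, c, d, e, f]."""
    rows = [[x * x, y * y, x * y, x, y, 1.0] for x, y, _ in pts]
    zs = [z for _, _, z in pts]
    n = 6
    # normal equations: (A^T A) beta = A^T z
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)]
           for i in range(n)]
    atz = [sum(r[i] * z for r, z in zip(rows, zs)) for i in range(n)]
    # solve by Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atz[col], atz[piv] = atz[piv], atz[col]
        for r in range(col + 1, n):
            fac = ata[r][col] / ata[col][col]
            for cc in range(col, n):
                ata[r][cc] -= fac * ata[col][cc]
            atz[r] -= fac * atz[col]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = atz[r] - sum(ata[r][cc] * beta[cc] for cc in range(r + 1, n))
        beta[r] = s / ata[r][r]
    return beta
```

Morphological parameters such as the apex curvatures then follow analytically from the fitted coefficients, which is what makes the parameterization robust to point-level segmentation noise.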

  10. Moving vehicles segmentation based on Gaussian motion model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.

    2005-07-01

    Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyse the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to the change of illumination conditions and can thus adapt to illumination changes sensitively. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian distribution and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model detects moving vehicles correctly and is robust to spurious motion caused by waving trees and camera vibration.
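
The Gaussian motion model can be illustrated in a toy form: model vehicle motion vectors as a single 2D Gaussian (diagonal covariance here), update it online with a learning rate (an EM-style running estimate), and gate new vectors by Mahalanobis distance. Parameter names, the learning rate, and the gating threshold are illustrative assumptions, not the paper's exact on-line EM:

```python
import math

class GaussianMotionModel:
    def __init__(self, mean, var, lr=0.05, gate=3.0):
        self.mean = list(mean)  # mean motion vector (dx, dy)
        self.var = list(var)    # per-axis variance
        self.lr = lr            # online update rate
        self.gate = gate        # Mahalanobis gate in std units

    def mahalanobis(self, v):
        return math.sqrt(sum((x - m) ** 2 / s
                             for x, m, s in zip(v, self.mean, self.var)))

    def is_vehicle(self, v):
        """True if the motion vector fits the vehicle-motion Gaussian."""
        return self.mahalanobis(v) < self.gate

    def update(self, v):
        """Online mean/variance update with an accepted motion vector."""
        for i in range(2):
            d = v[i] - self.mean[i]
            self.mean[i] += self.lr * d
            self.var[i] += self.lr * (d * d - self.var[i])

model = GaussianMotionModel(mean=(8.0, 0.0), var=(1.0, 1.0))
vehicle_flow = (8.5, 0.2)    # consistent rightward motion
tree_flutter = (0.3, -0.2)   # small oscillation from waving trees
```

Coherent vehicle motion stays inside the gate and keeps adapting the model, while incoherent flutter from trees or camera vibration falls outside it and is rejected.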

  11. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields

    PubMed Central

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy. PMID:26630674
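
The MRF idea can be illustrated with iterated conditional modes (ICM) on a toy 2D image: each pixel's label is chosen to fit its intensity while a Potts smoothness prior penalizes disagreement with its 4-neighbours. This is a generic MRF segmentation sketch, not the authors' released code (their method works on 3D stacks with additional steps):

```python
# Iterated conditional modes for a 2-class Potts MRF: minimize
# (pixel - class_mean)^2 + beta * (number of disagreeing 4-neighbours).
def icm_segment(img, means, beta=9.0, sweeps=5):
    h, w = len(img), len(img[0])
    # initial labels from the nearest class mean
    lab = [[min(range(len(means)), key=lambda k: (img[i][j] - means[k]) ** 2)
            for j in range(w)] for i in range(h)]
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = [lab[a][b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < h and 0 <= b < w]
                def energy(k):
                    data = (img[i][j] - means[k]) ** 2
                    smooth = beta * sum(1 for n in nbrs if n != k)
                    return data + smooth
                lab[i][j] = min(range(len(means)), key=energy)
    return lab

# noisy 6x6 image: left half ~1.0, right half ~5.0, one flipped pixel
img = [[1.0] * 3 + [5.0] * 3 for _ in range(6)]
img[2][1] = 5.0  # salt noise inside the dark region
seg = icm_segment(img, means=[1.0, 5.0])
```

The smoothness term removes the isolated noisy pixel while leaving the genuine region boundary intact, which is the behaviour that makes MRFs attractive for noisy microscopy stacks.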

  12. Three-Dimensional Reconstruction of Coronary Arteries and Its Application in Localization of Coronary Artery Segments Corresponding to Myocardial Segments Identified by Transthoracic Echocardiography

    PubMed Central

    Zhong, Chunyan; Guo, Yanli; Huang, Haiyun; Tan, Liwen; Wu, Yi; Wang, Wenting

    2013-01-01

    Objectives. To establish 3D models of coronary arteries (CA) and study their application in localization of CA segments identified by Transthoracic Echocardiography (TTE). Methods. Sectional images of the heart collected from the first CVH dataset and contrast CT data were used to establish 3D models of the CA. Virtual dissection was performed on the 3D models to simulate the conventional sections of TTE. Then, we used 2D ultrasound, speckle tracking imaging (STI), and 2D ultrasound plus 3D CA models to diagnose 170 patients and compare the results to coronary angiography (CAG). Results. 3D models of CA distinctly displayed both 3D structure and 2D sections of CA. This simulated TTE imaging in any plane and showed the CA segments that corresponded to 17 myocardial segments identified by TTE. The localization accuracy showed a significant difference between 2D ultrasound and 2D ultrasound plus 3D CA model in the severe stenosis group (P < 0.05) and in the mild-to-moderate stenosis group (P < 0.05). Conclusions. These innovative modeling techniques help clinicians identify the CA segments that correspond to myocardial segments typically shown in TTE sectional images, thereby increasing the accuracy of the TTE-based diagnosis of CHD. PMID:24348745

  13. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields.

    PubMed

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi; Åkerfelt, Malin; Nees, Matthias

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.

  14. Segmentation in low-penetration and low-involvement categories: an application to lottery games.

    PubMed

    Guesalaga, Rodrigo; Marshall, Pablo

    2013-09-01

    Market segmentation is accepted as a fundamental concept in marketing, and several authors have recently proposed a segmentation model where personal and environmental variables intersect to form motivating conditions that drive behavior and preferences. This model of segmentation has been applied to packaged goods. This paper extends the literature by proposing a segmentation model for low-penetration and low-involvement (LP-LI) products. An application to lottery games in Chile supports the proposed model. The results of the study show that for this type of product (LP-LI), the attitude towards the product category is the most important factor distinguishing consumers from non-consumers and heavy users from light users, and is consequently a critical segmentation variable. In addition, a cluster analysis shows the existence of three segments: (1) the impulsive dreamers, who believe in chance and that lottery games can change their lives; (2) the skeptical, who believe neither in chance nor that lottery games can change their lives; and (3) the willing, who value the benefits of playing.

  15. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a symptom involving loss of voluntary muscle movement in one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, yet producing a reliable and robust method is challenging and still underway. We introduce a novel approach for quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks, or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments show that the method is efficient. 
Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and the key-point-based method has several merits that are essential for this real application. Aside from the facial key points, iris segmentation provides a significant contribution as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.

  16. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
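
The background estimation and difference-image step can be sketched with a simple linear (mean) filter: the low-frequency filtered image serves as the background estimate, and subtracting it flattens the inhomogeneous background so the global fitting term can act on the residual. The window size is an illustrative choice, and the paper's coupling of the estimate to the evolving curve location is omitted:

```python
# Background estimation by linear (mean) filtering, plus the
# resulting difference image used by the global fitting term.
def mean_filter(img, radius=1):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - radius), min(h, i + radius + 1))
                    for b in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def difference_image(img, radius=3):
    bg = mean_filter(img, radius)
    return [[img[i][j] - bg[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

# inhomogeneous background: an intensity ramp with a small bright object
img = [[i + j + (50 if 3 <= i <= 4 and 3 <= j <= 4 else 0)
        for j in range(8)] for i in range(8)]
diff = difference_image(img)
```

In the difference image the ramp largely cancels and the object stands out, which is why fitting global region statistics on it is insensitive to where the initial contour is placed.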

  17. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the demand for higher accuracy, faster segmentation, and robustness remains a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function as the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label, giving the background the same status as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  18. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue component makes its segmentation difficult: for example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the important role that chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images, and shows the importance of the hue component in the segmentation of color images.
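
One common way to cope with the modulo-2π wrap-around when clustering hue is to embed each hue angle on the unit circle and run K-means on the (cos h, sin h) coordinates, so that hues just below 2π land next to hues just above 0. A minimal sketch (this circular embedding is a standard trick, not necessarily the paper's exact procedure):

```python
import math

def hue_kmeans(hues, k, iters=30):
    """Cluster hue angles (radians) with K-means on the unit circle."""
    pts = [(math.cos(h), math.sin(h)) for h in hues]
    cents = pts[:k]  # naive deterministic initialisation
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            j = min(range(k), key=lambda i: (p[0] - cents[i][0]) ** 2
                                            + (p[1] - cents[i][1]) ** 2)
            groups[j].append(p)
        cents = [(sum(x for x, _ in g) / len(g),
                  sum(y for _, y in g) / len(g)) if g else c
                 for g, c in zip(groups, cents)]
    labels = [min(range(k), key=lambda i: (p[0] - cents[i][0]) ** 2
                                          + (p[1] - cents[i][1]) ** 2)
              for p in pts]
    # recover a wrap-safe mean hue per cluster via atan2
    return labels, [math.atan2(cy, cx) % (2 * math.pi) for cx, cy in cents]

# reds straddling the 0/2*pi seam, plus greens near 2 rad
reds = [0.05, 6.25, 0.1, 6.2]
greens = [2.0, 2.1, 2.15, 2.05]
labels, cent_hues = hue_kmeans(reds + greens, k=2)
```

Naively averaging the red hues as scalars would give a meaningless mid-range "green-ish" mean; on the circle they correctly form one cluster near hue 0.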

  19. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.

    PubMed

    Zhao, Xiaomei; Wu, Yihong; Song, Guidong; Li, Zhenye; Zhang, Yazhuo; Fan, Yong

    2018-01-01

    Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Building upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. In particular, we train 3 segmentation models using 2D image patches and slices obtained in the axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting based fusion strategy. Our method can segment brain images slice-by-slice, much faster than methods based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans. Copyright © 2017 Elsevier B.V. All rights reserved.
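
The voting based fusion of the three view-wise models reduces to a pixel-wise majority vote once each model's prediction is resampled onto the same slice. A minimal sketch (binary labels for brevity; the actual models output multiple tumor classes):

```python
# Majority-vote fusion of per-view segmentations, as a sketch of the
# strategy combining the axial, coronal and sagittal models.
from collections import Counter

def vote_fusion(axial, coronal, sagittal):
    """Each argument: 2D list of labels for the same slice."""
    h, w = len(axial), len(axial[0])
    fused = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            votes = Counter([axial[i][j], coronal[i][j], sagittal[i][j]])
            fused[i][j] = votes.most_common(1)[0][0]
    return fused

a = [[1, 0], [1, 1]]
c = [[1, 0], [0, 1]]
s = [[0, 0], [1, 1]]
fused = vote_fusion(a, c, s)  # pixel-wise 2-of-3 majority
```

With three voters and binary labels there are no ties; for multi-class labels a tie-breaking rule (e.g. preferring the axial model) would be needed.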

  20. Reconstructing Buildings with Discontinuities and Roof Overhangs from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.

    2017-05-01

    This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures the input data is transformed into a dense point cloud, segmented and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this should fail due to the presence of discontinuities the regression will be repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube a planar piece of the current surface is approximated and expanded. The resulting segments get mutually intersected yielding both topological and geometrical nodes and edges. These entities will be eliminated if their distance-based affiliation to the defining point sets is violated leaving a consistent building hull including its structural breaks. To add the roof overhangs the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap and translated back into world space to become a component of the building. As soon as the reconstructed objects are finished the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspectivecorrect multi-source texture mapping without prior rectification involving a partially parallel placement algorithm. 
Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas which get reintegrated into the building models. To evaluate the performance of the proposed method a proof-of-concept test on sample structures obtained from real-world data of Heligoland/Germany has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization and visually attractive render results.
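
The RANSAC-based plane regression step described above can be sketched as follows. This is a minimal illustration under our own assumptions (function name `ransac_plane`, tolerance defaults, and the SVD refinement are ours), not the authors' implementation:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, seed=0):
    """Fit a plane to 3D points with RANSAC; returns (normal, d) for n.x + d = 0."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from the cross product of two edge vectors
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine on the consensus set with a least-squares (SVD) plane fit
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n @ centroid
```

The consensus-then-refine structure is the standard RANSAC pattern; the paper additionally runs topology analysis on the fitted planes, which is omitted here.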

  1. Setting a good example: supervisors as work-life-friendly role models within the context of boundary management.

    PubMed

    Koch, Anna R; Binnewies, Carmen

    2015-01-01

This multisource, multilevel study examined the importance of supervisors as work-life-friendly role models for employees' boundary management. Specifically, we tested whether supervisors' work-home segmentation behavior represents work-life-friendly role modeling for their employees. Furthermore, we tested whether work-life-friendly role modeling is positively related to employees' work-home segmentation behavior. We also examined whether work-life-friendly role modeling is positively related to employees' well-being in terms of feeling less exhausted and disengaged. In total, 237 employees and their 75 supervisors participated in our study. Results from hierarchical linear models revealed that supervisors who showed more segmentation behavior to separate work and home were more likely to be perceived as work-life-friendly role models. Employees with work-life-friendly role models were more likely to segment between work and home, and they felt less exhausted and disengaged. One may conclude that supervisors as work-life-friendly role models are highly important for employees' work-home segmentation behavior and act as gatekeepers for implementing a work-life-friendly organizational culture. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  2. Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman M.; Zaremba, Marek B.

    2003-03-01

Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e., segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a fairly rough piecewise-linear representation of the object skeletons. The positions of the skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting, which changes the positions of existing vertices according to the minimum of the mean orthogonal distances and, if necessary, adds new vertices in between when a given accuracy is not yet satisfied. Vertices of the initial piecewise-linear skeletons are extracted using a multi-scale image relevance function, a local image operator that has local maxima at the centers of the objects of interest.
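
The orthogonal regression fitting in this record minimizes mean orthogonal (not vertical) distances, which for a straight segment is the classic total-least-squares fit. A minimal SVD-based sketch with hypothetical function names:

```python
import numpy as np

def orthogonal_fit(points):
    """Total-least-squares line fit: minimizes mean squared orthogonal distance.

    Returns (centroid, unit direction) of the best-fit line through the points.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # The principal right singular vector of the centered data gives the
    # direction that minimizes orthogonal residuals
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def mean_orthogonal_distance(points, centroid, direction):
    """Mean distance of points to the line (the refinement criterion)."""
    diff = np.asarray(points, dtype=float) - centroid
    proj = np.outer(diff @ direction, direction)
    return np.linalg.norm(diff - proj, axis=1).mean()
```

In the paper's refinement loop, a skeleton edge would be re-fitted this way and a new vertex inserted whenever `mean_orthogonal_distance` exceeds the accuracy threshold.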

  3. Preoperative Biometric Parameters Predict the Vault after ICL Implantation: A Retrospective Clinical Study.

    PubMed

    Zheng, Qian-Yin; Xu, Wen; Liang, Guan-Lu; Wu, Jing; Shi, Jun-Ting

    2016-01-01

To investigate the correlation between the preoperative biometric parameters of the anterior segment and the vault after implantable Collamer lens (ICL) implantation via this retrospective study. Retrospective clinical study. A total of 78 eyes from 41 patients who underwent ICL implantation surgery were included in this study. Preoperative biometric parameters, including white-to-white (WTW) diameter, central corneal thickness, keratometry, pupil diameter, anterior chamber depth, sulcus-to-sulcus diameter, anterior chamber area (ACA) and central curvature radius of the anterior surface of the lens (Lenscur), were measured. Lenscur and ACA were measured with Rhinoceros 5.0 software on the image scanned with ultrasound biomicroscopy (UBM). The vault was assessed by UBM 3 months after surgery. Multiple stepwise regression analysis was employed to identify the variables that were correlated with the vault. The results showed that the vault was correlated with 3 variables: ACA (22.4 ± 4.25 mm2), WTW (11.36 ± 0.29 mm) and Lenscur (9.15 ± 1.21 mm). The regression equation was: vault (mm) = 1.785 + 0.017 × ACA + 0.051 × Lenscur - 0.203 × WTW. Biometric parameters of the anterior segment (ACA, WTW and Lenscur) can predict the vault after ICL implantation using a new regression equation. © 2016 The Author(s) Published by S. Karger AG, Basel.
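
The published regression equation can be written directly as a function. The coefficients are taken verbatim from the abstract; the function name `predict_vault` is ours:

```python
def predict_vault(aca_mm2, lenscur_mm, wtw_mm):
    """Predicted post-ICL vault (mm) from the abstract's regression equation:
    vault = 1.785 + 0.017 * ACA + 0.051 * Lenscur - 0.203 * WTW."""
    return 1.785 + 0.017 * aca_mm2 + 0.051 * lenscur_mm - 0.203 * wtw_mm
```

With the reported mean values (ACA = 22.4 mm2, Lenscur = 9.15 mm, WTW = 11.36 mm), this predicts a vault of roughly 0.33 mm.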

  4. Prostate segmentation in MRI using a convolutional neural network architecture and training strategy based on statistical shape models.

    PubMed

    Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E

    2018-05-15

Most existing convolutional neural network (CNN)-based medical image segmentation methods build on approaches originally developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data, (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
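
The shape-model parameterization underlying this approach (surface keypoints expressed as a mean shape plus principal variation modes whose coefficients the CNN predicts) is conventionally built with PCA. A minimal sketch under that assumption; all function names are illustrative, not the paper's code:

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """PCA statistical shape model; shapes has shape (n_samples, n_points*dim)."""
    mean = shapes.mean(axis=0)
    # Right singular vectors of the centered data are the variation modes
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]

def reconstruct(mean, modes, b):
    """Shape instance from mode coefficients b: x = mean + b @ modes."""
    return mean + b @ modes

def project(mean, modes, shape):
    """Coefficients that best explain a shape under the model."""
    return (shape - mean) @ modes.T
```

Sampling `b` around zero and calling `reconstruct` is also the natural way to generate the shape-driven deformations used for data augmentation.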

  5. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    PubMed

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time-consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage, which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) were approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  6. Post-embryonic development in the mite suborder Opilioacarida, with notes on segmental homology in Parasitiformes (Arachnida).

    PubMed

    Klompen, Hans; Vázquez, Ma Magdalena; Bernardi, Leopoldo Ferreira de Oliveira

    2015-10-01

In order to study homology among the major lineages of the mite (super)order Parasitiformes, developmental patterns in Opilioacarida are documented, emphasizing morphology of the earliest, post-embryonic instars. Developmental patterns are summarized for all external body structures, based on examination of material in four different genera. Development includes an egg, a 6-legged prelarva and larva, three 8-legged nymphal instars, and the adults, for the most complete ontogenetic sequence in Parasitiformes. The prelarva and larva appear to be non-feeding. Examination of cuticular structures over ontogeny allows development of an updated model for body segmentation and sensillar distribution patterns in Opilioacarida. This model includes a body made up of a well-developed ocular segment plus at most 17 additional segments. In the larvae and protonymphs each segment may carry up to six pairs of sensilla (setae or lyrifissures) arranged in distinct series (J, Z, S, Sv, Zv, Jv). The post-protonymphal instars add two more series (R and Rv) but no extra segments. This basic model is compatible with sensillar patterns in other Parasitiformes, leading to the hypothesis that all taxa in that (super)order may have the same segmental ground plan. The substantial segmental distortion implied in the model can be explained using a single process involving differential growth in the coxal regions of all appendage-bearing segments.

  7. Fully automatic cervical vertebrae segmentation framework for X-ray images.

    PubMed

    Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg

    2018-04-01

    The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times

    NASA Astrophysics Data System (ADS)

    Edmund, Jens M.; Kjer, Hans M.; Van Leemput, Koen; Hansen, Rasmus H.; Andersen, Jon AL; Andreasen, Daniel

    2014-12-01

    Radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, so-called MRI-only RT, would remove the systematic registration error between MR and computed tomography (CT), and provide co-registered MRI for assessment of treatment response and adaptive RT. Electron densities, however, need to be assigned to the MRI images for dose calculation and patient setup based on digitally reconstructed radiographs (DRRs). Here, we investigate the geometric and dosimetric performance for a number of popular voxel-based methods to generate a so-called pseudo CT (pCT). Five patients receiving cranial irradiation, each containing a co-registered MRI and CT scan, were included. An ultra short echo time MRI sequence for bone visualization was used. Six methods were investigated for three popular types of voxel-based approaches; (1) threshold-based segmentation, (2) Bayesian segmentation and (3) statistical regression. Each approach contained two methods. Approach 1 used bulk density assignment of MRI voxels into air, soft tissue and bone based on logical masks and the transverse relaxation time T2 of the bone. Approach 2 used similar bulk density assignments with Bayesian statistics including or excluding additional spatial information. Approach 3 used a statistical regression correlating MRI voxels with their corresponding CT voxels. A similar photon and proton treatment plan was generated for a target positioned between the nasal cavity and the brainstem for all patients. The CT agreement with the pCT of each method was quantified and compared with the other methods geometrically and dosimetrically using both a number of reported metrics and introducing some novel metrics. The best geometrical agreement with CT was obtained with the statistical regression methods which performed significantly better than the threshold and Bayesian segmentation methods (excluding spatial information). 
All methods agreed significantly better with CT than a reference water MRI comparison. The mean dosimetric deviation for photons and protons compared to the CT was about 2% and highest in the gradient dose region of the brainstem. Both the threshold-based method and the statistical regression methods showed the highest dosimetric agreement. Generation of pCTs using statistical regression seems to be the most promising candidate for MRI-only RT of the brain. Further, the total amount of different tissues needs to be taken into account for dosimetric considerations regardless of their correct geometrical position.
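
The statistical-regression approach (approach 3) correlates MRI voxels with their corresponding CT voxels. As a heavily simplified sketch, assume a linear model with an intercept mapping per-voxel MRI features to CT numbers; the actual method and its feature set may differ, and the function names are ours:

```python
import numpy as np

def fit_pct_regression(mri_feats, ct_vals):
    """Least-squares mapping from MRI voxel features to CT numbers (HU).

    mri_feats: (n_voxels, n_features); a constant column is appended so the
    model includes an intercept. Returns the coefficient vector.
    """
    X = np.column_stack([mri_feats, np.ones(len(mri_feats))])
    coef, *_ = np.linalg.lstsq(X, ct_vals, rcond=None)
    return coef

def predict_pct(coef, mri_feats):
    """Apply the fitted mapping to produce pseudo-CT values."""
    X = np.column_stack([mri_feats, np.ones(len(mri_feats))])
    return X @ coef
```

Applied voxel-wise to a co-registered MRI/CT training pair, `predict_pct` yields the pseudo-CT used for dose calculation.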

  9. Influence of "J"-Curve Spring Stiffness on Running Speeds of Segmented Legs during High-Speed Locomotion.

    PubMed

    Wang, Runxiao; Zhao, Wentao; Li, Shujun; Zhang, Shunqi

    2016-01-01

Both the linear leg spring model and the two-segment leg model with constant spring stiffness have been broadly used as template models to investigate bouncing gaits for legged robots with compliant legs. In addition to these two models, other leg spring stiffness models developed using inspiration from biological characteristics have the potential to improve the high-speed running capacity of spring-legged robots. In this paper, we investigate the effects of "J"-curve spring stiffness inspired by biological materials on running speeds of segmented legs during high-speed locomotion. A mathematical formulation of the relationship between the virtual leg force and the virtual leg compression is established. When the SLIP model and the two-segment leg models with constant spring stiffness and with "J"-curve spring stiffness have the same dimensionless reference stiffness, the two-segment leg model with "J"-curve spring stiffness (1) achieves both the largest tolerated range of running speeds and the highest tolerated maximum running speed, and (2) at fast running speeds from 25 to 40/92 m s-1 exhibits the largest tolerated range of landing angles and the largest stability region. It is suggested that the two-segment leg model with "J"-curve spring stiffness is more advantageous for high-speed running than the SLIP model and the two-segment leg model with constant spring stiffness.
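
The abstract does not give the exact "J"-curve force law, so as an illustration only: a power-law spring with exponent p > 1 reproduces the defining property of a "J"-curve, namely that the tangent stiffness dF/dx grows with compression. The constants below are invented, not values from the paper:

```python
def j_curve_force(compression, c=2.0e4, p=1.5):
    """Illustrative 'J'-curve leg-spring law F = c * x**p (p > 1), so the
    tangent stiffness dF/dx = c*p*x**(p-1) increases with compression x.
    c and p are placeholder constants, not from the paper."""
    return c * compression ** p

def tangent_stiffness(compression, c=2.0e4, p=1.5, h=1e-6):
    """Numerical tangent stiffness dF/dx via central differences."""
    return (j_curve_force(compression + h, c, p) -
            j_curve_force(compression - h, c, p)) / (2.0 * h)
```

A constant-stiffness leg would instead have p = 1, making `tangent_stiffness` independent of compression; the comparison in the paper hinges on exactly this difference.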

  11. A Robust and Fast Method for Sidescan Sonar Image Segmentation Using Nonlocal Despeckling and Active Contour Model.

    PubMed

    Huo, Guanying; Yang, Simon X; Li, Qingwu; Zhou, Yan

    2017-04-01

Sidescan sonar image segmentation is a very important issue in underwater object detection and recognition. In this paper, a robust and fast method for sidescan sonar image segmentation is proposed, which deals with both speckle noise and intensity inhomogeneity that may cause considerable difficulties in image segmentation. The proposed method integrates nonlocal means-based speckle filtering (NLMSF), coarse segmentation using k-means clustering, and fine segmentation using an improved region-scalable fitting (RSF) model. The NLMSF is used before the segmentation to effectively remove speckle noise while preserving meaningful details such as edges and fine features, which can make the segmentation easier and more accurate. After despeckling, a coarse segmentation is obtained by using k-means clustering, which can reduce the number of iterations. In the fine segmentation, to better deal with possible intensity inhomogeneity, an edge-driven constraint is combined with the RSF model, which can not only accelerate the convergence speed but also avoid being trapped in local minima. The proposed method has been successfully applied to both noisy and inhomogeneous sonar images. Experimental and comparative results on real and synthetic sonar images demonstrate that the proposed method is robust against noise and intensity inhomogeneity, and is also fast and accurate.
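
The coarse-segmentation stage clusters despeckled intensities with k-means. A deterministic 1D sketch (quantile seeding is our choice for reproducibility, not necessarily the paper's initialization):

```python
import numpy as np

def kmeans_1d(intensities, k=2, n_iters=50):
    """Coarse intensity segmentation via 1D k-means.

    Centers are seeded at evenly spaced quantiles for determinism; the
    returned labels index the cluster centers in ascending order.
    """
    x = np.asarray(intensities, dtype=float).ravel()
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(n_iters):
        # Assign each pixel to its nearest center, then re-estimate centers
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    order = np.argsort(centers)
    return np.argsort(order)[labels], centers[order]
```

In the full pipeline the resulting label map initializes the RSF level-set contour, which is why the coarse pass reduces the iteration count of the fine stage.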

  12. 3D Segmentation of Maxilla in Cone-beam Computed Tomography Imaging Using Base Invariant Wavelet Active Shape Model on Customized Two-manifold Topology

    PubMed Central

    Chang, Yu-Bing; Xia, James J.; Yuan, Peng; Kuo, Tai-Hong; Xiong, Zixiang; Gateno, Jaime; Zhou, Xiaobo

    2013-01-01

Recent advances in cone-beam computed tomography (CBCT) have enabled widespread applications in dentomaxillofacial imaging and orthodontic practice in the past decades due to its low radiation dose, high spatial resolution, and accessibility. However, low contrast resolution in CBCT images has become a major limitation in building skull models, and intensive hand-segmentation is usually required to reconstruct them. Thin bone regions are among those most affected by this limitation. This paper presents a novel segmentation approach based on a wavelet density model (WDM), with particular interest in the outer surface of the anterior wall of the maxilla. Nineteen CBCT datasets are used to conduct two experiments. This model-based segmentation approach is validated and compared with three different segmentation approaches. The results show that the performance of this model-based segmentation approach is better than those of the other approaches. It can achieve 0.25 ± 0.2 mm of surface error from the ground truth of the bone surface. PMID:23694914

  13. Shortest-path constraints for 3D multiobject semiautomatic segmentation via clustering and Graph Cut.

    PubMed

    Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy

    2013-11-01

We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses and comparison with a Potts prior-based approach and our previous contribution on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, while also allowing the trade-off to be controlled. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process by up to an order of magnitude without compromising the quality of the result.
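
The shortest-path constraints are derived from a graph of structure adjacency relations. A minimal Dijkstra sketch over such a graph; the adjacency-dictionary representation and function name are assumptions for illustration:

```python
import heapq

def shortest_path_length(adj, src, dst):
    """Dijkstra over a structure-adjacency graph given as
    {node: [(neighbor, weight), ...]}; returns the minimal path cost,
    or infinity if dst is unreachable from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        if u == dst:
            return d
        seen.add(u)
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")
```

In the paper's framework, such path lengths between structure labels determine the penalization level a label pair incurs in the Graph Cut energy.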

  14. Evaluation of body weight of sea cucumber Apostichopus japonicus by computer vision

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Xu, Qiang; Liu, Shilin; Zhang, Libin; Yang, Hongsheng

    2015-01-01

Apostichopus japonicus (Holothuroidea, Echinodermata) is an ecologically and economically important species in East Asia. The conventional biometric monitoring method involves diving for samples and weighing them above water, with high variability in weight measurements due to variation in the quantity of water in the respiratory tree and the intestinal content of this species. Recently, video survey methods have been applied widely in biometric detection of underwater benthos. However, because of the high flexibility of the A. japonicus body, video surveys are rarely used to monitor sea cucumbers. In this study, we designed a model to evaluate the wet weight of A. japonicus using machine vision technology combined with a support vector machine (SVM), which can be used in field surveys of the A. japonicus population. Continuous dorsal images of free-moving A. japonicus individuals in seawater were captured, from which core body edge images were developed and thorns segmented. Parameters including body length, body breadth, perimeter and area were extracted from the core body edge images and used in SVM regression to predict the weight of A. japonicus, for comparison with a power model. Results indicate that the use of SVM for predicting the weight of 33 A. japonicus individuals is accurate (R2 = 0.99) and compatible with the power model (R2 = 0.96). The image-based analysis and size-weight regression models in this study may be useful in body weight evaluation of A. japonicus in laboratory and field studies.
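
The baseline power model for size-weight regression, W = a * L**b, is conventionally fitted by linear least squares in log-log space. A minimal sketch (variable and function names are ours; the paper's SVM regression is not reproduced here):

```python
import numpy as np

def fit_power_model(length, weight):
    """Fit W = a * L**b by least squares on log W = b*log L + log a.

    Returns (a, b)."""
    b, log_a = np.polyfit(np.log(length), np.log(weight), 1)
    return np.exp(log_a), b
```

The SVM regressor in the paper uses the same image-derived predictors (length, breadth, perimeter, area) but learns a more flexible, multi-feature mapping than this single-predictor power law.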

  15. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  16. [RSF model optimization and its application to brain tumor segmentation in MRI].

    PubMed

    Cheng, Zhaoning; Song, Zhijian

    2013-04-01

Magnetic resonance imaging (MRI) is often obscure and non-uniform in gray level, and the tumors inside are poorly circumscribed, hence automatic tumor segmentation in MRI is very difficult. The region-scalable fitting (RSF) energy model is a new segmentation approach for images with uneven grayscale. However, the level set formulation (LSF) of the RSF model is not suitable for environments with different gray-level distributions inside and outside the initial contour, and the complex intensity environment of MRI often makes it hard to obtain ideal segmentation results. Therefore, we improved the model with a new LSF and combined it with the mean shift method, which is helpful for tumor segmentation and has better convergence and target direction. The proposed method has been utilized in a series of studies on real MRI images, and the results showed that it can realize fast, accurate and robust segmentation of brain tumors in MRI, which has great clinical significance.

  17. Child Schooling in Ethiopia: The Role of Maternal Autonomy

    PubMed Central

    Mohanty, Itismita

    2016-01-01

    This paper examines the effects of maternal autonomy on child schooling outcomes in Ethiopia using a nationally representative Ethiopian Demographic and Health survey for 2011. The empirical strategy uses a Hurdle Negative Binomial Regression model to estimate years of schooling. An ordered probit model is also estimated to examine age grade distortion using a trichotomous dependent variable that captures three states of child schooling. The large sample size and the range of questions available in this dataset allow us to explore the influence of individual and household level social, economic and cultural factors on child schooling. The analysis finds statistically significant effects of maternal autonomy variables on child schooling in Ethiopia. The roles of maternal autonomy and other household-level factors on child schooling are important issues in Ethiopia, where health and education outcomes are poor for large segments of the population. PMID:27942039

  18. A Parametric Finite-Element Model for Evaluating Segmented Mirrors with Discrete, Edgewise Connectivity

    NASA Technical Reports Server (NTRS)

    Gersh-Range, Jessica A.; Arnold, William R.; Peck, Mason A.; Stahl, H. Philip

    2011-01-01

    Since future astrophysics missions require space telescopes with apertures of at least 10 meters, there is a need for on-orbit assembly methods that decouple the size of the primary mirror from the choice of launch vehicle. One option is to connect the segments edgewise using mechanisms analogous to damped springs. To evaluate the feasibility of this approach, a parametric ANSYS model that calculates the mode shapes, natural frequencies, and disturbance response of such a mirror, as well as of the equivalent monolithic mirror, has been developed. This model constructs a mirror using rings of hexagonal segments that are either connected continuously along the edges (to form a monolith) or at discrete locations corresponding to the mechanism locations (to form a segmented mirror). As an example, this paper presents the case of a mirror whose segments are connected edgewise by mechanisms analogous to a set of four collocated single-degree-of-freedom damped springs. The results of a set of parameter studies suggest that such mechanisms can be used to create a 15-m segmented mirror that behaves similarly to a monolith, although fully predicting the segmented mirror performance would require incorporating measured mechanism properties into the model. Keywords: segmented mirror, edgewise connectivity, space telescope
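
The modal quantities the parametric ANSYS model computes (mode shapes and natural frequencies) can be illustrated on a lumped spring-mass abstraction of edgewise-connected segments. This toy eigenproblem is not the paper's finite-element model; matrices and names are illustrative:

```python
import numpy as np

def natural_frequencies(M, K):
    """Undamped natural frequencies (rad/s) of M x'' + K x = 0.

    Solves the generalized eigenproblem K v = w^2 M v by reducing it to a
    standard eigenproblem on M^-1 K; returns frequencies in ascending order.
    """
    vals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.abs(vals.real)))
```

For two unit masses coupled by unit springs to ground and to each other (K = [[2, -1], [-1, 2]]), the in-phase and out-of-phase modes appear at 1 and sqrt(3) rad/s; the paper's model does the analogous computation with the segment mechanisms as the coupling springs.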

  19. Automated Bone Segmentation and Surface Evaluation of a Small Animal Model of Post-Traumatic Osteoarthritis.

    PubMed

    Ramme, Austin J; Voss, Kevin; Lesporis, Jurinus; Lendhey, Matin S; Coughlin, Thomas R; Strauss, Eric J; Kennedy, Oran D

    2017-05-01

MicroCT imaging allows for noninvasive microstructural evaluation of mineralized bone tissue, and is essential in studies of small animal models of bone and joint diseases. Automatic segmentation and evaluation of articular surfaces is challenging. Here, we present a novel method to create knee joint surface models for the evaluation of PTOA-related joint changes in the rat, using an atlas-based diffeomorphic registration to automatically isolate bone from surrounding tissues. As validation, two independent raters manually segmented datasets and the resulting segmentations were compared to our novel automatic segmentation process. Data were evaluated using label map volumes, overlap metrics, Euclidean distance mapping, and a time trial. Intraclass correlation coefficients were calculated to compare methods, and were greater than 0.90. Total overlap, union overlap, and mean overlap were calculated to compare the automatic and manual methods and ranged from 0.85 to 0.99. A Euclidean distance comparison was also performed and showed no measurable difference between manual and automatic segmentations. Furthermore, our new method was 18 times faster than manual segmentation. Overall, this study describes a reliable, accurate, and automatic segmentation method for mineralized knee structures from microCT images, and will allow for efficient assessment of bony changes in small animal models of PTOA.
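
The overlap metrics reported above follow standard definitions; union overlap is the Jaccard index. A minimal sketch for binary masks (function names are ours):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def union_overlap(a, b):
    """Union overlap (Jaccard index): |A∩B| / |A∪B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```

Applied voxel-wise to a manual and an automatic label map, these yield the 0.85-0.99 range of agreement scores reported in the abstract.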

  20. A comparative study on dynamic stiffness in typical finite element model and multi-body model of C6-C7 cervical spine segment.

    PubMed

    Wang, Yawei; Wang, Lizhen; Du, Chengfei; Mo, Zhongjun; Fan, Yubo

    2016-06-01

In contrast to numerous studies on the static or quasi-static stiffness of cervical spine segments, very few investigations of their dynamic stiffness have been published. Currently, scale factors and estimated coefficients are usually used in multi-body models to include viscoelastic properties and damping effects, while viscoelastic properties of some tissues are unavailable for establishing finite element models. Because the dynamic stiffness of cervical spine segments in these models is difficult to validate owing to a lack of experimental data, we tried to gain some insight into current modeling methods by studying the dynamic stiffness differences between these models. A finite element model and a multi-body model of the C6-C7 segment were developed using available material data and typical modeling technologies. These two models were validated with quasi-static response data of the C6-C7 cervical spine segment. Dynamic stiffness differences were investigated by controlling motions of the C6 vertebra at different rates and then comparing the reaction forces or moments. Validation results showed that both the finite element model and the multi-body model could generate reasonable responses under quasi-static loads, but the finite element segment model exhibited more nonlinear characteristics. Dynamic response investigations indicated that the dynamic stiffness of this finite element model might be underestimated because of the absence of the dynamic stiffening effect and damping effects of the annulus fibrosus, while the representation of these effects also needs to be improved in the current multi-body model. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Leaf Segmentation and Tracking in Arabidopsis thaliana Combined to an Organ-Scale Plant Model for Genotypic Differentiation

    PubMed Central

    Viaud, Gautier; Loudet, Olivier; Cournède, Paul-Henry

    2017-01-01

    A promising method for characterizing the phenotype of a plant as an interaction between its genotype and its environment is to use refined organ-scale plant growth models based on the observation of architectural traits, such as leaf area, which contain a great deal of information on the whole history of the plant's functioning. The Phenoscope, a high-throughput automated platform, allowed the acquisition of zenithal images of Arabidopsis thaliana over twenty-one days for 4 different genotypes. A novel image processing algorithm involving both segmentation and tracking of the plant leaves allows their areas to be extracted. First, all the images in the series are segmented independently using a watershed-based approach. A second step based on ellipsoid-shaped leaves is then applied to the resulting segments to refine the segmentation. Taking into account all the segments at every time point, the whole history of each leaf is reconstructed by recursively choosing through time the most probable segment, i.e., the one achieving the best score, computed from characteristics of the segment such as its orientation, its distance to the plant's center of mass, and its area. These results are compared to manually extracted segments, showing very good agreement in leaf rank, and they therefore provide low-biased data in large quantity for leaf areas. Such data can be exploited to design an organ-scale plant model adapted from the existing GreenLab model for A. thaliana and subsequently to parameterize it. This calibration of the model parameters should pave the way for differentiation between the Arabidopsis genotypes. PMID:28123392
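
    The leaf-history reconstruction step chooses, frame to frame, the segment with the best score. A greedy stand-in for that scoring can be sketched as follows, using only area change and centroid distance (the weights and segment data are hypothetical, and the paper's score also uses orientation):

```python
import math

def best_match(prev, candidates, w_area=1.0, w_dist=0.01):
    """Return the candidate segment most similar to the previously accepted
    one: low relative area change plus low centroid displacement wins."""
    def cost(seg):
        d_area = abs(seg["area"] - prev["area"]) / prev["area"]
        d_cent = math.dist(seg["centroid"], prev["centroid"])
        return w_area * d_area + w_dist * d_cent
    return min(candidates, key=cost)

prev = {"area": 120.0, "centroid": (40.0, 55.0)}   # leaf accepted in frame t
frame = [
    {"area": 130.0, "centroid": (42.0, 54.0)},     # same leaf, slightly grown
    {"area": 80.0,  "centroid": (70.0, 20.0)},     # a different leaf
]
match = best_match(prev, frame)
```

Applied recursively through the image series, this kind of per-frame association rebuilds each leaf's history, as the abstract describes.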

  2. Bike and run pacing on downhill segments predict Ironman triathlon relative success.

    PubMed

    Johnson, Evan C; Pryor, J Luke; Casa, Douglas J; Belval, Luke N; Vance, James S; DeMartini, Julie K; Maresh, Carl M; Armstrong, Lawrence E

    2015-01-01

    Determine whether performance- and physiologically-based pacing characteristics over the varied terrain of a triathlon predicted relative bike, run, and/or overall success. Poor self-regulation of intensity during long-distance (Full Iron) triathlon can manifest in adverse discontinuities in performance. Observational study of a random sample of Ironman World Championship athletes. High-performing (HP) and low-performing (LP) groups were established upon race completion. Participants wore global positioning system- and heart rate-enabled watches during the race. Percentage difference from the pre-race disclosed goal pace (%off) and mean heart rate (HR) were calculated for nine segments of the bike and 11 segments of the run. Normalized graded running pace (NGP, accounting for changes in elevation) was computed via analysis software. Step-wise regression analyses identified segments predictive of relative success, and HP and LP were compared at these segments to confirm importance. %Off of goal velocity during two downhill segments of the bike (HP: -6.8±3.2%, -14.2±2.6% versus LP: -1.2±4.2%, -5.1±11.5%; p<0.020) and %off from NGP during one downhill segment of the run (HP: 4.8±5.2% versus LP: 33.3±38.7%; p=0.033) significantly predicted relative performance. Also, HP displayed more consistency in mean HR (141±12 to 138±11 bpm) compared to LP (139±17 to 131±16 bpm; p=0.019) over the climb and descent from the turn-around point during the bike component. Athletes who maintained faster relative speeds on downhill segments, and who had smaller changes in HR between consecutive uphill and downhill segments, were more successful relative to their goal times. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
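
    The %off metric reduces to simple arithmetic; a minimal sketch, assuming the sign convention implied above (negative values mean faster than the disclosed goal) and using made-up segment speeds:

```python
def pct_off_goal(actual, goal):
    """Percentage difference from the pre-race disclosed goal pace (%off).
    Negative values mean faster than goal, matching the downhill results."""
    return (goal - actual) / goal * 100.0

goal_speed = 40.0                    # hypothetical goal speed, km/h
segment_speeds = [44.0, 38.0, 41.0]  # hypothetical downhill bike segments
off = [pct_off_goal(v, goal_speed) for v in segment_speeds]
```

Computed per segment, values like these are what the step-wise regressions above used as predictors of relative success.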

  3. Characterizing outcome preferences in patients with psychotic disorders: a discrete choice conjoint experiment.

    PubMed

    Zipursky, Robert B; Cunningham, Charles E; Stewart, Bailey; Rimas, Heather; Cole, Emily; Vaz, Stephanie McDermid

    2017-07-01

    The majority of individuals with schizophrenia will achieve a remission of psychotic symptoms, but few will meet criteria for recovery. Little is known about what outcomes are important to patients. We carried out a discrete choice experiment to characterize the outcome preferences of patients with psychotic disorders. Participants (N=300) were recruited from two clinics specializing in psychotic disorders. Twelve outcomes were each defined at three levels and incorporated into a computerized survey with 15 choice tasks. Utility values and importance scores were calculated for each outcome level. Latent class analysis was carried out to determine whether participants were distributed into segments with different preferences. Multinomial logistic regression was used to identify predictors of segment membership. Latent class analysis revealed three segments of respondents. The first segment (48%), which we labeled "Achievement-focused," preferred to have a full-time job, to live independently, to be in a long-term relationship, and to have no psychotic symptoms. The second segment (29%), labeled "Stability-focused," preferred to not have a job, to live independently, and to have some ongoing psychotic symptoms. The third segment (23%), labeled "Health-focused," preferred to not have a job, to live in supervised housing, and to have no psychotic symptoms. Segment membership was predicted by education, socioeconomic status, psychotic symptom severity, and work status. This study has revealed that patients with psychotic disorders are distributed between segments with different outcome preferences. New approaches to improve outcomes for patients with psychotic disorders should be informed by a greater understanding of patient preferences and priorities. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Recommendations for the Use of Automated Gray Matter Segmentation Tools: Evidence from Huntington’s Disease

    PubMed Central

    Johnson, Eileanoir B.; Gregory, Sarah; Johnson, Hans J.; Durr, Alexandra; Leavitt, Blair R.; Roos, Raymund A.; Rees, Geraint; Tabrizi, Sarah J.; Scahill, Rachael I.

    2017-01-01

    The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington’s disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software. PMID:29066997

  5. Recommendations for the Use of Automated Gray Matter Segmentation Tools: Evidence from Huntington's Disease.

    PubMed

    Johnson, Eileanoir B; Gregory, Sarah; Johnson, Hans J; Durr, Alexandra; Leavitt, Blair R; Roos, Raymund A; Rees, Geraint; Tabrizi, Sarah J; Scahill, Rachael I

    2017-01-01

    The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington's disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software.

  6. Two and three-dimensional segmentation of hyperpolarized 3He magnetic resonance imaging of pulmonary gas distribution

    NASA Astrophysics Data System (ADS)

    Heydarian, Mohammadreza; Kirby, Miranda; Wheatley, Andrew; Fenster, Aaron; Parraga, Grace

    2012-03-01

    A semi-automated method for generating hyperpolarized helium-3 (3He) measurements of individual slice (2D) or whole lung (3D) gas distribution was developed. 3He MRI functional images were segmented using two-dimensional (2D) and three-dimensional (3D) hierarchical K-means clustering of the 3He MRI signal; in addition, a seeded region-growing algorithm was employed for segmentation of the 1H MRI thoracic cavity volume. 3He MRI pulmonary function measurements were generated following two-dimensional landmark-based non-rigid registration of the 3He and 1H pulmonary images. We applied this method to MRI of healthy subjects and subjects with chronic obstructive lung disease (COPD). The results of hierarchical K-means 2D and 3D segmentation were compared to an expert observer's manual segmentation results using linear regression, Pearson correlations, and the Dice similarity coefficient. 2D hierarchical K-means segmentation of ventilation volume (VV) and ventilation defect volume (VDV) was strongly and significantly correlated with manual measurements (VV: r=0.98, p<.0001; VDV: r=0.97, p<.0001), and mean Dice coefficients were greater than 92% for all subjects. 3D hierarchical K-means segmentation of VV and VDV was also strongly and significantly correlated with manual measurements (VV: r=0.98, p<.0001; VDV: r=0.64, p<.0001), and the mean Dice coefficients were greater than 91% for all subjects. Both 2D and 3D semi-automated segmentation of 3He MRI gas distribution provides a way to generate novel pulmonary function measurements.
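
    The Dice similarity coefficient used above to compare semi-automated and manual segmentations has a simple closed form; a minimal sketch over binary masks (the example masks are illustrative, not study data):

```python
def dice(a, b):
    """Dice similarity coefficient between two equal-length binary masks:
    2*|A intersect B| / (|A| + |B|); 1.0 by convention when both are empty."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0

manual = [1, 1, 1, 0, 0, 1]  # expert observer's mask (toy)
auto   = [1, 1, 0, 0, 1, 1]  # K-means segmentation mask (toy)
```

Here the overlap is 3 voxels against mask sizes of 4 and 4, giving 2*3/8 = 0.75; the study reports mean coefficients above 91-92%.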

  7. In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation.

    PubMed

    Xia, Chunlei; Wang, Longtan; Chung, Bu-Keun; Lee, Jang-Myung

    2015-08-19

    In this paper, we address the challenging task of 3D segmentation of individual plant leaves under occlusion in complicated natural scenes. Depth data of plant leaves are introduced to improve the robustness of plant leaf segmentation. A low-cost RGB-D camera is utilized to capture depth and color images in the field. Mean shift clustering is applied to segment plant leaves in the depth image. Plant leaves are extracted from the natural background by examining the vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of the depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97%, while the segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions.

  8. In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation

    PubMed Central

    Xia, Chunlei; Wang, Longtan; Chung, Bu-Keun; Lee, Jang-Myung

    2015-01-01

    In this paper, we address the challenging task of 3D segmentation of individual plant leaves under occlusion in complicated natural scenes. Depth data of plant leaves are introduced to improve the robustness of plant leaf segmentation. A low-cost RGB-D camera is utilized to capture depth and color images in the field. Mean shift clustering is applied to segment plant leaves in the depth image. Plant leaves are extracted from the natural background by examining the vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of the depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97%, while the segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions. PMID:26295395

  9. Anatomical modeling of the bronchial tree

    NASA Astrophysics Data System (ADS)

    Hentschel, Gerrit; Klinder, Tobias; Blaffert, Thomas; Bülow, Thomas; Wiemker, Rafael; Lorenz, Cristian

    2010-02-01

    The bronchial tree is of direct clinical importance in the context of respiratory diseases, such as chronic obstructive pulmonary disease (COPD). It furthermore constitutes a reference structure for object localization in the lungs, and it provides access to lung tissue in, e.g., bronchoscope-based procedures for diagnosis and therapy. This paper presents a comprehensive anatomical model of the bronchial tree, including statistics of position, relative and absolute orientation, length, and radius of 34 bronchial segments, going beyond previously published results. The model has been built from 16 manually annotated CT scans, covering several branching variants. The model is represented as a centerline/tree structure but can also be converted into a surface representation. Possible model applications are either to anatomically label extracted bronchial trees or to improve the tree extraction itself by identifying missing segments or sub-trees, e.g., if located beyond a bronchial stenosis. Bronchial tree labeling is achieved using a naïve Bayesian classifier based on the segment properties contained in the model, in combination with tree matching. The tree matching step makes use of the branching variations covered by the model. An evaluation of the model has been performed in a leave-one-out manner. In total, 87% of the branches resulting from the preceding airway tree segmentation could be correctly labeled. The individualized model enables the detection of missing branches, allowing a targeted search, e.g., a local rerun of the tree segmentation.
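
    The naïve Bayesian labeling step can be sketched by scoring a segment's properties against per-label Gaussian statistics; the two labels and their (mean, std) values below are hypothetical stand-ins, not statistics from the 16-scan model:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def label_segment(features, model):
    """Naive Bayes: pick the label whose independent per-feature Gaussians
    give the segment's features the highest total log-likelihood."""
    def score(stats):
        return sum(gaussian_logpdf(x, mu, s) for x, (mu, s) in zip(features, stats))
    return max(model, key=lambda label: score(model[label]))

# Hypothetical (mean, std) of segment length and radius, in mm, per label
model = {
    "trachea":       [(110.0, 10.0), (9.0, 1.0)],
    "main_bronchus": [(45.0, 8.0), (6.0, 0.8)],
}
```

In the paper this per-segment scoring is combined with tree matching over branching variants; the sketch shows only the probabilistic core.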

  10. Phytotoxicity and accumulation of chromium in carrot plants and the derivation of soil thresholds for Chinese soils.

    PubMed

    Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang

    2014-10-01

    Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH>8.0 exceeded the maximum allowable level (0.5 mg kg(-1)) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R(2)=0.70, P<0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before food quality. The major soil factors controlling Cr phytotoxicity were identified, and prediction models were developed, using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
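
    A linear-linear segmented (broken-line) regression of the kind fitted above can be sketched as a grid search over candidate breakpoints; this is a minimal illustration assuming NumPy, with toy data rather than the paper's pH measurements:

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Fit y = b0 + b1*x + b2*max(0, x - c) for each candidate breakpoint c
    and keep the least-squares best fit (a continuous linear-linear model)."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best

# Toy data with a slope change at x = 5 (illustrative only)
x = np.linspace(0.0, 10.0, 50)
y = np.where(x < 5.0, 1.0 + 0.5 * x, 3.5 + 2.0 * (x - 5.0))
sse, bp, beta = fit_segmented(x, y, np.linspace(1.0, 9.0, 81))
```

The recovered breakpoint and slopes match the generating model; on real data the candidate grid would span the observed soil pH range.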

  11. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    PubMed

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.

  12. A computational model for simulating solute transport and oxygen consumption along the nephrons

    PubMed Central

    Vallon, Volker; Edwards, Aurélie

    2016-01-01

    The goal of this study was to investigate water and solute transport, with a focus on sodium transport (TNa), and metabolism along individual nephron segments under differing physiological and pathophysiological conditions. To accomplish this goal, we developed a computational model of solute transport and oxygen consumption (QO2) along different nephron populations of a rat kidney. The model represents detailed epithelial and paracellular transport processes along both the superficial and juxtamedullary nephrons, with the loop of Henle of each model nephron extending to differing depths of the inner medulla. We used the model to assess how changes in TNa may alter QO2 in different nephron segments and how shifting the TNa sites alters overall kidney QO2. Under baseline conditions, the model predicted a whole-kidney TNa/QO2 (the number of moles of Na+ reabsorbed per mole of O2 consumed) of ∼15, with TNa efficiency predicted to be significantly greater in cortical nephron segments than in medullary segments. The TNa/QO2 ratio was generally similar among the superficial and juxtamedullary nephron segments, except for the proximal tubule, where TNa/QO2 was ∼20% higher in superficial nephrons, owing to the larger luminal flow along the juxtamedullary proximal tubules and the resulting higher, flow-induced transcellular transport. Moreover, the model predicted that an increase in single-nephron glomerular filtration rate does not significantly affect TNa/QO2 in the proximal tubules but generally increases TNa/QO2 along downstream segments. The latter result can be attributed to the generally higher luminal [Na+], which raises paracellular TNa. Consequently, vulnerable medullary segments, such as the S3 segment and medullary thick ascending limb, may be relatively protected from flow-induced increases in QO2 under pathophysiological conditions. PMID:27707705

  13. Deaf College Students' Mathematical Skills Relative to Morphological Knowledge, Reading Level, and Language Proficiency

    ERIC Educational Resources Information Center

    Kelly, Ronald R.; Gaustad, Martha G.

    2007-01-01

    This study of deaf college students examined specific relationships between their mathematics performance and their assessed skills in reading, language, and English morphology. Simple regression analyses showed that deaf college students' language proficiency scores, reading grade level, and morphological knowledge regarding word segmentation and…

  14. Capture, Learning, and Classification of Upper Extremity Movement Primitives in Healthy Controls and Stroke Patients

    PubMed Central

    Guerra, Jorge; Uddin, Jasim; Nilsen, Dawn; McInerney, James; Fadoo, Ammarah; Omofuma, Isirame B.; Hughes, Shatif; Agrawal, Sunil; Allen, Peter; Schambra, Heidi M.

    2017-01-01

    There currently exist no practical tools to identify functional movements in the upper extremities (UEs). This absence has limited the precise therapeutic dosing of patients recovering from stroke. In this proof-of-principle study, we aimed to develop an accurate approach for classifying UE functional movement primitives, which comprise functional movements. Data were generated from inertial measurement units (IMUs) placed on upper body segments of older healthy individuals and chronic stroke patients. Subjects performed activities commonly trained during rehabilitation after stroke. Data processing involved the use of a sliding window to obtain statistical descriptors, and resulting features were processed by a Hidden Markov Model (HMM). The likelihoods of the states, resulting from the HMM, were segmented by a second sliding window and their averages were calculated. The final predictions were mapped to human functional movement primitives using a Logistic Regression algorithm. Algorithm performance was assessed with a leave-one-out analysis, which determined its sensitivity, specificity, and positive and negative predictive values for all classified primitives. In healthy control and stroke participants, our approach identified functional movement primitives embedded in training activities with, on average, 80% precision. This approach may support functional movement dosing in stroke rehabilitation. PMID:28813877
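
    The first stage of the pipeline, statistical descriptors computed by a sliding window over IMU samples, can be sketched as follows; the window width, stride, and toy accelerometer trace are illustrative, not the study's settings:

```python
from statistics import mean, pstdev

def window_features(signal, width, stride):
    """Slide a fixed-width window over the signal and return simple
    statistical descriptors (mean, population std) for each position."""
    feats = []
    for start in range(0, len(signal) - width + 1, stride):
        w = signal[start:start + width]
        feats.append((mean(w), pstdev(w)))
    return feats

# Toy acceleration-magnitude trace from one upper-body IMU
acc = [0.0, 0.1, 0.0, 1.2, 1.3, 1.1, 0.1, 0.0]
feats = window_features(acc, width=4, stride=2)
```

Descriptors like these would then feed the HMM, whose state likelihoods are in turn averaged by a second sliding window before the logistic-regression mapping to movement primitives.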

  15. The Mental Health Parity and Addiction Equity Act evaluation study: Impact on specialty behavioral health utilization and expenditures among "carve-out" enrollees.

    PubMed

    Ettner, Susan L; M Harwood, Jessica; Thalmayer, Amber; Ong, Michael K; Xu, Haiyong; Bresolin, Michael J; Wells, Kenneth B; Tseng, Chi-Hong; Azocar, Francisca

    2016-12-01

    Interrupted time series with and without controls was used to evaluate whether the federal Mental Health Parity and Addiction Equity Act (MHPAEA) and its Interim Final Rule increased the probability of specialty behavioral health treatment and levels of utilization and expenditures among patients receiving treatment. Linked insurance claims, eligibility, plan and employer data from 2008 to 2013 were used to estimate segmented regression analyses, allowing for level and slope changes during the transition (2010) and post-MHPAEA (2011-2013) periods. The sample included 1,812,541 individuals ages 27-64 (49,968,367 person-months) in 10,010 Optum "carve-out" plans. Two-part regression models with Generalized Estimating Equations were used to estimate expenditures by payer and outpatient, intermediate and inpatient service use. We found little evidence that MHPAEA increased utilization significantly, but somewhat more robust evidence that costs shifted from patients to plans. Thus the primary impact of MHPAEA among carve-out enrollees may have been a reduction in patient financial burden. Copyright © 2016 Elsevier B.V. All rights reserved.
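
    The segmented regression design described above (level and slope changes at the transition to the post-parity period) can be sketched for a single series as follows; this is a minimal OLS illustration assuming NumPy, with simulated noise-free data and a hypothetical intervention month, not the study's two-part GEE models:

```python
import numpy as np

# Months 0..59 with a (hypothetical) policy change at month 36
t = np.arange(60, dtype=float)
post = (t >= 36).astype(float)

# Simulated outcome: baseline level and trend, then a level drop
# and a slope change after the policy change (no noise)
y = 10.0 + 0.20 * t - 3.0 * post - 0.10 * (t - 36.0) * post

# Segmented regression: y = b0 + b1*t + b2*post + b3*(t - t0)*post
X = np.column_stack([np.ones_like(t), t, post, (t - 36.0) * post])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
```

Here b2 estimates the immediate level change and b3 the change in slope; with real claims data the model would add a transition period and controls, as in the study.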

  16. Image segmentation on adaptive edge-preserving smoothing

    NASA Astrophysics Data System (ADS)

    He, Kun; Wang, Dan; Zheng, Xiuqing

    2016-09-01

    Typical active contour models are widely applied in image segmentation; however, they perform poorly on real images with inhomogeneous subregions. To overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, the paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model has the ability to smooth inhomogeneous subregions while preserving edges. Then, a clustering algorithm that reasonably trades off edge preservation against subregion smoothing according to local information is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of the segmentation subregions, a smoothing convergence condition is constructed to avoid oversmoothing. Experiments indicate that the proposed algorithm has superior performance in precision, recall, and F-measure compared with other segmentation algorithms, and that it is insensitive to noise and inhomogeneous regions.
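
    The core idea of edge-preserving smoothing, flattening homogeneous runs while leaving large jumps intact, can be illustrated in one dimension with a similarity-weighted local average; this is a generic sketch under assumed parameters, not the paper's total-variation-inspired model:

```python
import math

def edge_preserving_smooth(signal, sigma_r=0.2, iters=5):
    """Average each sample with its neighbours, weighting neighbours by
    intensity similarity, so small variations flatten while large jumps
    (edges) receive near-zero weight and are preserved."""
    s = list(signal)
    for _ in range(iters):
        out = []
        for i in range(len(s)):
            acc = wsum = 0.0
            for j in (i - 1, i, i + 1):
                if 0 <= j < len(s):
                    w = math.exp(-((s[j] - s[i]) / sigma_r) ** 2)
                    acc += w * s[j]
                    wsum += w
            out.append(acc / wsum)
        s = out
    return s

# Two noisy homogeneous runs separated by a sharp edge
noisy = [0.0, 0.05, -0.03, 0.02, 1.0, 0.97, 1.04, 1.0]
smooth = edge_preserving_smooth(noisy)
```

After smoothing, the within-run variation shrinks while the step between the two runs survives, which is the behaviour the segmentation model above relies on.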

  17. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    PubMed

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  18. GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.

    PubMed

    Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos

    2016-01-01

    We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.

  19. Automatic Cell Segmentation Using a Shape-Classification Model in Immunohistochemically Stained Cytological Images

    NASA Astrophysics Data System (ADS)

    Shah, Shishir

    This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach is used: in the first phase, unsupervised clustering coupled with cluster merging based on a fitness function yields a first approximation of the cell locations; in the second phase, joint segmentation-classification incorporating an ellipse as the shape model detects the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground-truth measurements.

  20. Feed-Forward Segmentation of Figure-Ground and Assignment of Border-Ownership

    PubMed Central

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-01-01

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment. PMID:20502718

  1. Description and Evaluation of Numerical Groundwater Flow Models for the Edwards Aquifer, South-Central Texas

    USGS Publications Warehouse

    Lindgren, Richard J.; Taylor, Charles J.; Houston, Natalie A.

    2009-01-01

    A substantial number of public water system wells in south-central Texas withdraw groundwater from the karstic, highly productive Edwards aquifer. However, the use of numerical groundwater flow models to aid in the delineation of contributing areas for public water system wells in the Edwards aquifer is problematic because of the complex hydrogeologic framework and the presence of conduit-dominated flow paths in the aquifer. The U.S. Geological Survey, in cooperation with the Texas Commission on Environmental Quality, evaluated six published numerical groundwater flow models (all deterministic) that have been developed for the San Antonio segment, the Barton Springs segment, or both segments of the Edwards aquifer. This report describes the models and evaluates each with respect to accessibility and ease of use, range of conditions simulated, accuracy of simulations, agreement with dye-tracer tests, and limitations. These models are (1) the GWSIM model of the San Antonio segment, a FORTRAN computer-model code that pre-dates the development of MODFLOW; (2) the MODFLOW conduit-flow model of the San Antonio and Barton Springs segments; (3) the MODFLOW diffuse-flow model of the San Antonio and Barton Springs segments; (4) the MODFLOW Groundwater Availability Modeling [GAM] model of the Barton Springs segment; (5) the MODFLOW recalibrated GAM model of the Barton Springs segment; and (6) the MODFLOW-DCM (dual conductivity model) conduit model of the Barton Springs segment. The GWSIM model code is not commercially available, is limited in its application to the San Antonio segment of the Edwards aquifer, and lacks MODFLOW's ability to easily incorporate newly developed processes and packages to better simulate hydrologic processes. MODFLOW is a widely used and tested code for numerical modeling of groundwater flow, is well documented, and is in the public domain. These attributes make MODFLOW a preferred code with regard to accessibility and ease of use. 
The MODFLOW conduit-flow model incorporates improvements over previous models by using (1) a user-friendly interface, (2) updated computer codes (MODFLOW-96 and MODFLOW-2000), (3) a finer grid resolution, (4) less-restrictive boundary conditions, (5) an improved discretization of hydraulic conductivity, (6) more accurate estimates of pumping stresses, (7) a long transient simulation period (54 years, 1947-2000), and (8) a refined representation of high-permeability zones or conduits. All of the models except the MODFLOW-DCM conduit model have limitations resulting from the use of Darcy's law to simulate groundwater flow in a karst aquifer system where non-Darcian, turbulent flow might actually dominate. The MODFLOW-DCM conduit model is an improvement in the ability to simulate karst-like flow conditions in conjunction with porous-media-type matrix flow. However, the MODFLOW-DCM conduit model has had limited application and testing and currently (2008) lacks commercially available pre- and post-processors. The MODFLOW conduit-flow and diffuse-flow Edwards aquifer models are limited by the lack of calibration for the northern part of the Barton Springs segment (Travis County) and their reliance on the use of the calibrated hydraulic conductivity and storativity values from the calibrated Barton Springs segment GAM model. The major limitation of the Barton Springs segment GAM and recalibrated GAM models is that they were calibrated to match measured water levels and springflows for a restrictive range of hydrologic conditions, with each model having different hydraulic conductivity and storativity values appropriate to the hydrologic conditions that were simulated. 
The need for two different sets of hydraulic conductivity and storativity values increases the uncertainty associated with the accuracy of either set of values, illustrates the non-uniqueness of the model solution, and, probably most importantly, demonstrates the limitations of using a one-layer model to represent the heterogeneous hydrostratigraphy.
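    The Darcy's-law assumption that the report flags as a limitation for karst conduits can be stated compactly: specific discharge is hydraulic conductivity times the head gradient, valid for laminar, porous-media flow but not for turbulent conduit flow. The values below are illustrative, not from the report.

```python
def darcy_flux(K, head_drop, length):
    """Specific discharge q = K * (dh / dl) for laminar porous-media flow.

    K: hydraulic conductivity (m/s); head_drop: head loss (m) over length (m).
    """
    return K * head_drop / length

# Illustrative numbers only: K = 1e-4 m/s over a 2 m head drop across 100 m.
q = darcy_flux(K=1e-4, head_drop=2.0, length=100.0)
```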

  2. A Q-Ising model application for linear-time image segmentation

    NASA Astrophysics Data System (ADS)

    Bentrem, Frank W.

    2010-10-01

    A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of conventional Potts-model methods for image segmentation is that they run in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article describes an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and runs in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
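    A drastically simplified, linear-time sketch of Potts-style labeling (an illustration under assumed class centres and coupling strength, not the article's exact technique): assign each pixel to the nearest of Q = 4 class centres, then make one iterated-conditional-modes sweep that trades data cost against a Potts smoothness penalty with the left and right neighbours.

```python
Q_MEANS = [0, 85, 170, 255]   # illustrative grey-level class centres (Q = 4)
BETA = 40.0                    # assumed Potts coupling strength

def potts_segment(pixels):
    # Linear-time initialization: nearest class centre per pixel.
    labels = [min(range(4), key=lambda q: abs(p - Q_MEANS[q])) for p in pixels]
    # One ICM sweep: each pixel picks the label minimizing data cost
    # plus BETA per disagreeing neighbour; still O(n) overall.
    for i, p in enumerate(pixels):
        def energy(q):
            cost = abs(p - Q_MEANS[q])
            for j in (i - 1, i + 1):
                if 0 <= j < len(pixels) and labels[j] != q:
                    cost += BETA
            return cost
        labels[i] = min(range(4), key=energy)
    return labels
```

    Each pixel is visited a constant number of times, which is the sense in which such direct labeling schemes can stay linear in the image size.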

  3. Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model

    NASA Astrophysics Data System (ADS)

    Lee, Myungeun; Kim, Jong Hyo

    2012-02-01

    Recently, breast MR images have been used in a wider range of clinical areas, including diagnosis, treatment planning, and treatment response evaluation, which calls for quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, robustly segmenting breast tissues from surrounding structures across a wide range of anatomical diversity remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmentation of the pectoral muscle, and finally extraction and refinement of the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will facilitate quantitative research on various breast images.
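    The structure tensor underlying the boundary detection can be sketched as follows (a generic formulation with assumed gradient samples, not the paper's exact pipeline). Its coherence is high where the image has a single dominant orientation, as along an edge such as the pectoral boundary, and low in isotropic regions.

```python
import math

def structure_tensor(gradients):
    """Accumulate the 2x2 structure tensor from (Ix, Iy) gradient samples."""
    jxx = sum(ix * ix for ix, _ in gradients)
    jxy = sum(ix * iy for ix, iy in gradients)
    jyy = sum(iy * iy for _, iy in gradients)
    return jxx, jxy, jyy

def coherence(jxx, jxy, jyy):
    """(lam1 - lam2) / (lam1 + lam2): 1.0 for a single orientation, 0.0 if isotropic."""
    tr = jxx + jyy
    diff = math.hypot(jxx - jyy, 2 * jxy)
    return diff / tr if tr else 0.0
```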

  4. Auto-segmentation of normal and target structures in head and neck CT images: a feature-driven model-based approach.

    PubMed

    Qazi, Arish A; Pekar, Vladimir; Kim, John; Xie, Jason; Breen, Stephen L; Jaffray, David A

    2011-11-01

    Intensity modulated radiation therapy (IMRT) allows greater control over dose distribution, which leads to a decrease in radiation-related toxicity. IMRT, however, requires precise and accurate delineation of the organs at risk and target volumes. Manual delineation is tedious and suffers from both interobserver and intraobserver variability. State-of-the-art auto-segmentation methods are atlas-based, model-based, or hybrid; however, robust fully automated segmentation is often difficult due to the insufficient discriminative information provided by standard medical imaging modalities for certain tissue types. In this paper, the authors present a fully automated hybrid approach which combines deformable registration with the model-based approach to accurately segment normal and target tissues from head and neck CT images. The segmentation process starts by using an average atlas to reliably identify salient landmarks in the patient image. The relationship between these landmarks and the reference dataset serves to guide a deformable registration algorithm, which allows for a close initialization of a set of organ-specific deformable models in the patient image, ensuring their robust adaptation to the boundaries of the structures. Finally, the models are automatically fine-tuned by a boundary refinement approach which models the uncertainty in model adaptation using a probabilistic mask. This uncertainty is subsequently resolved by voxel classification based on local low-level organ-specific features. To quantitatively evaluate the method, the authors auto-segment several organs at risk and target tissues from 10 head and neck CT images and compare the segmentations to the manual delineations outlined by an expert. 
The evaluation is carried out by estimating two common quantitative measures on 10 datasets: the volume overlap fraction, or Dice similarity coefficient (DSC), and a geometrical metric, the median symmetric Hausdorff distance (HD), which is evaluated slice-wise. The authors achieve an average overlap of 93% for the mandible, 91% for the brainstem, 83% for the parotids, 83% for the submandibular glands, and 74% for the lymph node levels. The automated segmentation framework is able to segment anatomy in the head and neck region with high accuracy within a clinically acceptable segmentation time.
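    The two reported measures can be sketched for 1-D index sets as follows (a generic implementation of the DSC and the symmetric Hausdorff distance, not the authors' code):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel-index sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 1-D point sets."""
    def nearest(p, s):
        return min(abs(p - q) for q in s)
    return max(max(nearest(p, b) for p in a),
               max(nearest(q, a) for q in b))
```

    DSC rewards bulk overlap, while the Hausdorff distance penalises the single worst boundary deviation, which is why the two are usually reported together.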

  5. [Development of a soft independent modelling of class analogies model to discriminate Vibrio parahaemolyticus by Smartongue].

    PubMed

    Huang, Jianfeng; Zhao, Guangying; Dou, Wenchao

    2011-04-01

    To explore a new rapid method for detecting foodborne pathogens, we used the Smartongue to determine the compositional information of liquid culture samples and combined it with soft independent modelling of class analogies (SIMCA) to analyze the respective species, setting up a Smartongue-SIMCA model to discriminate V. parahaemolyticus. The Smartongue has six working electrodes and three frequency segments, so 18 discrimination models can be built in one detection. After comparing all 18 discrimination models, the optimal working electrodes and frequency segments were selected: the palladium electrode in the 1 Hz frequency segment, the tungsten electrode in the 100 Hz segment, and the silver electrode in the 100 Hz segment. Ten species of pathogenic Vibrio were then discriminated by the three models. The V. damsela, V. metschnikovii, V. alginalyticus, V. cincinnatiensis, and V. cholerae O serogroup samples could be discriminated by the SIMCA model of V. parahaemolyticus with the palladium electrode in the 1 Hz frequency segment; V. mimicus and V. vulnincus samples could be discriminated with the tungsten electrode in the 100 Hz frequency segment; V. carcariae and V. cholerae non-O serogroup samples could be discriminated with the silver electrode in the 100 Hz frequency segment. The discrimination accuracy for the ten Vibrio species was 100%. The Smartongue combined with SIMCA can effectively discriminate V. parahaemolyticus from other pathogenic Vibrio species and has a promising future as a new rapid detection method for V. parahaemolyticus.
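    A greatly simplified SIMCA-style class model can illustrate the accept/reject logic (a per-feature mean and spread stand in for the usual per-class principal-component model; all values are assumptions, not Smartongue data):

```python
from statistics import mean, stdev

def fit_class(samples):
    """Model a class by per-feature (mean, sample stdev) from training rows."""
    cols = list(zip(*samples))
    return [(mean(c), stdev(c)) for c in cols]

def accepts(model, x, limit=3.0):
    """Accept x into the class if its mean squared z-score is within limit^2."""
    score = sum(((v - m) / s) ** 2 for v, (m, s) in zip(x, model))
    return score / len(x) <= limit ** 2
```

    A sample far from the modelled class in any feature is rejected, which is how a one-class SIMCA model separates the target species from the others.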

  6. Kinematic Patterns Associated with the Vertical Force Produced during the Eggbeater Kick.

    PubMed

    Oliveira, Nuno; Chiu, Chuang-Yuan; Sanders, Ross H

    2015-01-01

    The purpose of this study was to determine the kinematic patterns that maximized the vertical force produced during the water polo eggbeater kick. Twelve water polo players were tested executing the eggbeater kick with the trunk aligned vertically and the upper limbs above water while trying to maintain as high a position as possible out of the water for nine eggbeater kick cycles. Lower limb joint angular kinematics, pitch angles, and speed of the feet were calculated. The vertical force produced during the eggbeater kick cycle was calculated using inverse dynamics for the independent lower body segments and combined upper body segments, and a participant-specific second-degree regression equation for the weight and buoyancy contributions. Vertical force normalized to body weight was associated with hip flexion (average, r = 0.691; maximum, r = 0.791; range of motion, r = 0.710), hip abduction (maximum, r = 0.654), knee flexion (average, r = 0.716; minimum, r = 0.653) and knee flexion-extension angular velocity (r = 0.758). Effective orientation of the hips resulted in fast horizontal motion of the feet with positive pitch angles. Vertical motion of the feet was negatively associated with vertical force. A multiple regression model comprising the non-collinear variables of maximum hip abduction, hip flexion range of motion and knee flexion angular velocity accounted for 81% of the variance in normalized vertical force. For high performance in the water polo eggbeater kick, players should execute fast horizontal motion of the feet by having large abduction and flexion of the hips, and fast extension and flexion of the knees.
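    The mechanics of such a regression can be sketched with a single predictor (the study's model used three non-collinear predictors; the data below are toy values, not the study's measurements):

```python
def linreg(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

    The reported 81% of variance explained corresponds to an R^2 of 0.81 for the three-predictor version of this fit.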

  7. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying a single segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem to obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
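    The TOPSIS decision step can be sketched generically as follows (a standard implementation with benefit-type criteria and illustrative scores, not the paper's code): each candidate is ranked by its relative closeness to the ideal solution and distance from the anti-ideal one.

```python
import math

def topsis(rows):
    """Closeness-to-ideal score per candidate; rows[i][j] = score of
    candidate i on criterion j, larger assumed better for every criterion."""
    ncrit = len(rows[0])
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(ncrit)]
    scaled = [[r[j] / norms[j] for j in range(ncrit)] for r in rows]
    best = [max(c[j] for c in scaled) for j in range(ncrit)]    # ideal
    worst = [min(c[j] for c in scaled) for j in range(ncrit)]   # anti-ideal
    def dist(v, ref):
        return math.sqrt(sum((v[j] - ref[j]) ** 2 for j in range(ncrit)))
    return [dist(v, worst) / (dist(v, worst) + dist(v, best)) for v in scaled]
```

    The candidate with the highest closeness score is chosen as the final fused segmentation.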

  8. Dynamic deformable models for 3D MRI heart segmentation

    NASA Astrophysics Data System (ADS)

    Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.

    2002-05-01

    Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.

  9. Segmented media and medium damping in microwave assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Bai, Xiaoyu; Zhu, Jian-Gang

    2018-05-01

    In this paper, we present a methodology of segmented media stack design for microwave assisted magnetic recording. Through micro-magnetic modeling, it is demonstrated that an optimized media segmentation is able to yield high signal-to-noise ratio even with limited ac field power. With proper segmentation, the ac field power could be utilized more efficiently and this can alleviate the requirement for medium damping which has been previously considered a critical limitation. The micro-magnetic modeling also shows that with segmentation optimization, recording signal-to-noise ratio can have very little dependence on damping for different recording linear densities.

  10. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
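    The stick-breaking construction behind such priors can be sketched in its basic (non-logistic) form: a unit stick is broken with proportions v_k, and what remains after each break feeds the next weight. The break proportions below are illustrative.

```python
def stick_break(vs):
    """Stick-breaking weights from break proportions vs in [0, 1]."""
    weights, remaining = [], 1.0
    for v in vs:
        weights.append(v * remaining)   # take proportion v of what remains
        remaining *= 1.0 - v            # shorten the stick
    return weights
```

    In the logistic variant used in the paper, each break proportion is itself a logistic function of spatial covariates, which is what makes the segments spatially contiguous.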

  11. Lung lobe modeling and segmentation with individualized surface meshes

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael

    2008-03-01

    An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT-scanners, their contrast in the CT-image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.

  12. Association between MRI structural features and cognitive measures in pediatric multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Amoroso, N.; Bellotti, R.; Fanizzi, A.; Lombardi, A.; Monaco, A.; Liguori, M.; Margari, L.; Simone, M.; Viterbo, R. G.; Tangaro, S.

    2017-09-01

    Multiple sclerosis (MS) is an inflammatory and demyelinating disease associated with neurodegenerative processes that lead to brain structural changes. The disease affects mostly young adults, but 3-5% of cases have a pediatric onset (POMS). Magnetic Resonance Imaging (MRI) is generally used for diagnosis and follow-up in MS patients; however, the most common MRI measures (e.g. new or enlarging T2-weighted lesions, T1-weighted gadolinium-enhancing lesions) have often failed as surrogate markers of MS disability and progression. MS is clinically heterogeneous, with symptoms that can include both physical changes (such as visual loss or walking difficulties) and cognitive impairment; 30-50% of POMS patients experience prominent cognitive dysfunction. In order to investigate the association between cognitive measures and brain morphometry, in this work we present a fully automated pipeline for processing and analyzing MRI brain scans. Relevant anatomical structures are segmented with FreeSurfer, and statistical features are computed. We describe data from 12 patients with early POMS (mean age at MRI: 15.5 +/- 2.7 years) with a set of 181 structural features. The major cognitive abilities measured are verbal and visuo-spatial learning, expressive language, and complex attention. Data were collected at the Department of Basic Sciences, Neurosciences and Sense Organs, University of Bari. Different regression models and parameter configurations are explored to assess the robustness of the results; in particular, Generalized Linear Models, Bayes Regression, Random Forests, Support Vector Regression, and Artificial Neural Networks are discussed.

  13. Beyond Retinal Layers: A Deep Voting Model for Automated Geographic Atrophy Segmentation in SD-OCT Images

    PubMed Central

    Ji, Zexuan; Chen, Qiang; Niu, Sijie; Leng, Theodore; Rubin, Daniel L.

    2018-01-01

    Purpose To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images by constructing a voting system with deep neural networks without the use of retinal layer segmentation. Methods An automatic GA segmentation method for SD-OCT images based on the deep network was constructed. The structure of the deep network was composed of five layers, including one input layer, three hidden layers, and one output layer. During the training phase, the labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a soft-max classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Results Two image data sets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) of 86.94% ± 8.75%, absolute area difference (AAD) of 11.49% ± 11.50%, and correlation coefficient (CC) of 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm improved segmentation accuracy by more than 5% and 10% on the two data sets, respectively, when compared with several state-of-the-art algorithms. Conclusions Without retinal layer segmentation, the proposed algorithm could produce higher segmentation accuracy and was more stable when compared with state-of-the-art methods that relied on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD. 
Translational Relevance Based on the deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images without using any retinal layer segmentation results, and may contribute to improved understanding of advanced nonexudative AMD. PMID:29302382
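    The per-pixel voting over the 10 trained models can be sketched as a simple majority vote (one reading of the voting strategy; the model outputs below are dummies):

```python
from collections import Counter

def vote(predictions):
    """Majority label among the per-model predictions for one pixel."""
    return Counter(predictions).most_common(1)[0][0]
```

    Aggregating independently trained models this way smooths out disagreements of any single network on ambiguous A-scans.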

  15. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation.

    PubMed

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception.

  17. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first stage, we localize the pericardial area within the entire CT volume, providing a reliable bounding box for the more refined segmentation stage. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume. The resulting HNN per-pixel probability maps are then thresholded to produce a bounding box covering the pericardial area. In the second stage, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation, reducing background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans (1206 images) from patients with pericardial effusion. The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59+/-12.04%, which is significantly better than the segmentation accuracy (62.74+/-15.20%) of using only the coarse-scaled HNN model.
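    Turning a per-pixel probability map into a bounding box for the second stage can be sketched as follows (the threshold and map values are illustrative assumptions; at least one pixel is assumed to clear the threshold):

```python
def prob_to_bbox(prob_map, thresh=0.5):
    """Tight (row_min, col_min, row_max, col_max) box around pixels whose
    probability meets the threshold."""
    hits = [(r, c) for r, row in enumerate(prob_map)
                   for c, p in enumerate(row) if p >= thresh]
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return min(rows), min(cols), max(rows), max(cols)
```

    Cropping to such a box is what lets the fine-scaled model train and predict without the surrounding background.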

  18. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    NASA Astrophysics Data System (ADS)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
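    The piecewise constant Mumford-Shah energy that the learned features are matched to can be written, in 1-D, as data fidelity plus a penalty per jump (a generic sketch with an assumed jump weight gamma):

```python
def ms_energy(signal, u, gamma=1.0):
    """Piecewise constant Mumford-Shah energy of approximation u:
    squared data fidelity plus gamma times the number of jumps in u."""
    fidelity = sum((s - v) ** 2 for s, v in zip(signal, u))
    jumps = sum(1 for a, b in zip(u, u[1:]) if a != b)
    return fidelity + gamma * jumps
```

    Features are learned so that their response is well approximated by such a u with a small jump set, i.e., nearly constant within each texture region.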

  19. How African American English-Speaking First Graders Segment and Rhyme Words and Nonwords With Final Consonant Clusters.

    PubMed

    Shollenbarger, Amy J; Robinson, Gregory C; Taran, Valentina; Choi, Seo-Eun

    2017-10-05

    This study explored how typically developing 1st grade African American English (AAE) speakers differ from mainstream American English (MAE) speakers in the completion of 2 common phonological awareness tasks (rhyming and phoneme segmentation) when the stimulus items were consonant-vowel-consonant-consonant (CVCC) words and nonwords. Forty-nine 1st graders met criteria for 2 dialect groups: AAE and MAE. Three conditions were tested in each rhyme and segmentation task: Real Words No Model, Real Words With a Model, and Nonwords With a Model. The AAE group had significantly more responses that rhymed CVCC words with consonant-vowel-consonant words and segmented CVCC words as consonant-vowel-consonant than the MAE group across all experimental conditions. In the rhyming task, the presence of a model in the real word condition elicited more reduced final cluster responses for both groups. In the segmentation task, the MAE group was at ceiling, so only the AAE group changed across the different stimulus presentations and reduced the final cluster less often when given a model. Rhyming and phoneme segmentation performance can be influenced by a child's dialect when CVCC words are used.

  20. Multifractal texture estimation for detection and segmentation of brain tumors.

    PubMed

    Islam, Atiq; Reza, Syed M S; Iftekharuddin, Khan M

    2013-11-01

    A stochastic model for characterizing tumor texture in brain magnetic resonance (MR) images is proposed. The efficacy of the model is demonstrated in patient-independent brain tumor texture feature extraction and tumor segmentation in magnetic resonance images (MRIs). Because of its complex appearance in MRI, brain tumor texture is formulated using a multiresolution-fractal model known as multifractional Brownian motion (mBm). A detailed mathematical derivation of the mBm model and a corresponding novel algorithm to extract spatially varying multifractal features are proposed. A multifractal feature-based brain tumor segmentation method is developed next. To evaluate efficacy, tumor segmentation performance using the proposed multifractal feature is compared with that using a Gabor-like multiscale texton feature. Furthermore, a novel patient-independent tumor segmentation scheme is proposed by extending the well-known AdaBoost algorithm. The modification of the AdaBoost algorithm involves assigning weights to component classifiers based on their ability to classify difficult samples and their confidence in such classification. Experimental results for 14 patients with over 300 MRIs show the efficacy of the proposed technique in automatic segmentation of tumors in brain MRIs. Finally, comparison with other state-of-the-art brain tumor segmentation works on the publicly available low-grade glioma BRATS2012 dataset shows that our segmentation results are more consistent and on average outperform these methods for the patients where ground truth is made available.
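    The mBm formulation rests on the increment scaling law E[(X(t+s) − X(t))²] ∝ s^(2H), where the Hölder/Hurst exponent H varies with position. A minimal one-dimensional sketch of estimating H by a log-log fit (not the paper's spatially varying multifractal algorithm; the lag choices are illustrative) is:

```python
import numpy as np

def estimate_hurst(signal, lags=(1, 2, 4, 8, 16)):
    """Estimate the Hurst exponent H from the scaling law
    E[(X(t+s) - X(t))^2] ~ s^(2H) via a log-log least-squares fit.
    For mBm, applying this in a sliding window would yield the
    spatially varying exponent used as a texture feature."""
    msq = [np.mean((signal[l:] - signal[:-l]) ** 2) for l in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(msq), 1)
    return slope / 2.0
```

For ordinary Brownian motion the estimate is close to H = 0.5; rougher (tumor-like) textures map to smaller local exponents.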

  1. Multifractal Texture Estimation for Detection and Segmentation of Brain Tumors

    PubMed Central

    Islam, Atiq; Reza, Syed M. S.

    2016-01-01

    A stochastic model for characterizing tumor texture in brain magnetic resonance (MR) images is proposed. The efficacy of the model is demonstrated in patient-independent brain tumor texture feature extraction and tumor segmentation in magnetic resonance images (MRIs). Because of its complex appearance in MRI, brain tumor texture is formulated using a multiresolution-fractal model known as multifractional Brownian motion (mBm). A detailed mathematical derivation of the mBm model and a corresponding novel algorithm to extract spatially varying multifractal features are proposed. A multifractal feature-based brain tumor segmentation method is developed next. To evaluate efficacy, tumor segmentation performance using the proposed multifractal feature is compared with that using a Gabor-like multiscale texton feature. Furthermore, a novel patient-independent tumor segmentation scheme is proposed by extending the well-known AdaBoost algorithm. The modification of the AdaBoost algorithm involves assigning weights to component classifiers based on their ability to classify difficult samples and their confidence in such classification. Experimental results for 14 patients with over 300 MRIs show the efficacy of the proposed technique in automatic segmentation of tumors in brain MRIs. Finally, comparison with other state-of-the-art brain tumor segmentation works on the publicly available low-grade glioma BRATS2012 dataset shows that our segmentation results are more consistent and on average outperform these methods for the patients where ground truth is made available. PMID:23807424

  2. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    PubMed

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon the atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a nonlinear deformation field scheme. The proposed method is based on features among double-source CT (DSCT) slices; the extraction of these features forms a basis for constructing an average model, and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model and dynamic atlas updating strategies, can provide an effective and accurate way to perform whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Employee choice of a high-deductible health plan across multiple employers.

    PubMed

    Lave, Judith R; Men, Aiju; Day, Brian T; Wang, Wei; Zhang, Yuting

    2011-02-01

    To determine factors associated with selecting a high-deductible health plan (HDHP) rather than a preferred provider organization plan (PPO), and to examine switching and market segmentation after initial selection. Claims and benefit information for 2005-2007 from nine employers in western Pennsylvania that first offered an HDHP in 2006. We examined plan growth over time, used logistic regression to determine factors associated with choosing an HDHP, and examined the distribution of healthy and sick members across plan types. We linked employees with their dependents to determine family-level variables. We extracted risk scores, covered charges, employee age, and employee gender from claims data. We determined census-level race, education, and income information. Health status, gender, race, and education influenced the type of individual and family policies chosen. In the second year the HDHP was offered, few employees changed plans. Risk segmentation between HDHPs and PPOs existed, but it did not increase. When given a choice, those who are healthier are more likely to select an HDHP, leading to risk segmentation. Risk segmentation did not increase in the second year that HDHPs were offered. © Health Research and Educational Trust.
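    The logistic-regression step described above can be sketched on synthetic data. Everything below (variable names, effect sizes, sample) is invented for illustration and is not the study's data; the point is only that a negative risk-score coefficient is what "healthier members choose the HDHP" looks like in such a model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: healthier members (lower risk score)
# are assumed more likely to pick the HDHP.
rng = np.random.default_rng(0)
n = 2000
risk = rng.gamma(shape=2.0, scale=1.0, size=n)   # claims-based risk score
age = rng.uniform(25, 64, size=n)
logit = 1.5 - 1.0 * risk - 0.02 * (age - 45)     # assumed true effects
p_hdhp = 1.0 / (1.0 + np.exp(-logit))
chose_hdhp = rng.binomial(1, p_hdhp)

X = np.column_stack([risk, age])
model = LogisticRegression().fit(X, chose_hdhp)
# The fitted risk-score coefficient is negative, mirroring the
# risk-segmentation pattern the study reports.
```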

  4. Drawing the line between constituent structure and coherence relations in visual narratives

    PubMed Central

    Cohn, Neil; Bender, Patrick

    2016-01-01

    Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of Visual Narrative Grammar posits that hierarchic “grammatical” structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a “segmentation task” where participants drew lines between images in order to divide them into sub-episodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants’ divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. PMID:27709982

  5. Drawing the line between constituent structure and coherence relations in visual narratives.

    PubMed

    Cohn, Neil; Bender, Patrick

    2017-02-01

    Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a "segmentation task" where participants drew lines between images in order to divide them into subepisodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants' divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Small should be the New Big: High-resolution Models with Small Segments have Big Advantages when Modeling Eutrophication in the Great Lakes

    EPA Science Inventory

    Historical mathematical models, especially Great Lakes eutrophication models, traditionally used coarse segmentation schemes and relatively simple hydrodynamics to represent system behavior. Although many modelers have claimed success using such models, these representations can ...

  7. Dynamic thermal characteristics of heat pipe via segmented thermal resistance model for electric vehicle battery cooling

    NASA Astrophysics Data System (ADS)

    Liu, Feifei; Lan, Fengchong; Chen, Jiqing

    2016-07-01

    Heat pipe cooling for battery thermal management systems (BTMSs) in electric vehicles (EVs) is growing due to its advantages of high cooling efficiency, compact structure and flexible geometry. Considering the transient conduction, phase change and uncertain thermal conditions in a heat pipe, it is challenging to obtain the dynamic thermal characteristics accurately in such a complex heat and mass transfer process. In this paper, a "segmented" thermal resistance model of a heat pipe is proposed based on the thermal circuit method. The equivalent conductivities of the different segments, viz. the evaporator and condenser of the pipe, are used to determine their own thermal parameters and conditions, which are integrated into the thermal model of the battery for a complete three-dimensional (3D) computational fluid dynamics (CFD) simulation. The proposed "segmented" model proves more precise than the "non-segmented" model in comparisons of simulated and experimental temperature distribution and variation of an ultra-thin micro heat pipe (UMHP) battery pack, and yields smaller calculation errors in capturing dynamic thermal behavior for exact thermal design, management and control of heat pipe BTMSs. Using the "segmented" model, the cooling effect of the UMHP pack with different natural/forced convection and arrangements is predicted, and the results correspond well to the tests.
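    At its core, a segmented thermal resistance model treats the heat pipe as a series thermal circuit: evaporator, vapor core and condenser each contribute a resistance. The sketch below uses hypothetical resistance values, not parameters from the paper:

```python
def heat_pipe_dT(q_watts, r_evap, r_vapor, r_cond):
    """Series thermal circuit of a 'segmented' heat-pipe model:
    the evaporator-to-condenser temperature drop is the heat flow
    times the sum of the segment resistances (in K/W)."""
    return q_watts * (r_evap + r_vapor + r_cond)

# Illustrative values only: 20 W through a total of 0.21 K/W.
dT = heat_pipe_dT(20.0, r_evap=0.08, r_vapor=0.01, r_cond=0.12)
```

Splitting the pipe into segments lets each resistance carry its own (possibly temperature-dependent) equivalent conductivity, which is what makes the segmented model more precise than a single lumped resistance.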

  8. Estimation of stature from radiologic anthropometry of the lumbar vertebral dimensions in Chinese.

    PubMed

    Zhang, Kui; Chang, Yun-feng; Fan, Fei; Deng, Zhen-hua

    2015-11-01

    This study assessed the relationship between radiologic anthropometry of the lumbar vertebral dimensions and stature in Chinese subjects and developed regression formulae to estimate stature from these dimensions. A total of 412 normal, healthy volunteers, comprising 206 males and 206 females, were recruited. Linear regression analyses were performed to assess the correlation between stature and the lengths of various segments of the lumbar vertebral column. Among the regression equations created for a single variable, the predictive value was greatest for the reconstruction of stature from the lumbar segment in both sexes and in subgroup analysis. When an individual vertebral body was used, the heights of the posterior vertebral body of L3 gave the most accurate results for the male group; the heights of the central vertebral body of L1 were most accurate for the female group and for females above 45 years; and the heights of the central vertebral body of L3 were most accurate for the 20-45-year groups of both sexes and for males above 45 years. The heights of the anterior vertebral body of L5 gave the least accurate results, except in the male group above 45 years, where the heights of the anterior vertebral body of L4 were least accurate. As expected, multiple regression equations were more successful than equations derived from a single variable. These observations suggest that lumbar vertebral dimensions are useful for stature estimation in the Chinese population. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Detection and visualization of endoleaks in CT data for monitoring of thoracic and abdominal aortic aneurysm stents

    NASA Astrophysics Data System (ADS)

    Lu, J.; Egger, J.; Wimmer, A.; Großkopf, S.; Freisleben, B.

    2008-03-01

    In this paper we present an efficient algorithm for the segmentation of the inner and outer boundary of thoracic and abdominal aortic aneurysms (TAA & AAA) in computed tomography angiography (CTA) acquisitions. The aneurysm segmentation includes two steps: first, the inner boundary is segmented based on a grey level model with two thresholds; then, an adapted active contour model approach is applied to the more complicated outer boundary segmentation, with its initialization based on the available inner boundary segmentation. An opacity image, which aims at enhancing important features while reducing spurious structures, is calculated from the CTA images and employed to guide the deformation of the model. In addition, the active contour model is extended by a constraint force that prevents intersections of the inner and outer boundary and keeps the outer boundary at a distance, given by the thrombus thickness, to the inner boundary. Based upon the segmentation results, we can measure the aneurysm size at each centerline point on the centerline orthogonal multiplanar reformatting (MPR) plane. Furthermore, a 3D TAA or AAA model is reconstructed from the set of segmented contours, and the presence of endoleaks is detected and highlighted. The implemented method has been evaluated on nine clinical CTA data sets with variations in anatomy and location of the pathology and has shown promising results.
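    The first step, a grey-level model with two thresholds, amounts to a band threshold on intensity: voxels whose value falls between the two thresholds (the contrast-enhanced lumen) form the initial inner-boundary segmentation. The HU bounds in this sketch are illustrative, not values from the paper:

```python
import numpy as np

def inner_lumen_mask(ct_slice, t_low, t_high):
    """Two-threshold grey-level model: keep voxels whose intensity
    lies in [t_low, t_high], i.e. the contrast-enhanced lumen that
    seeds the inner-boundary segmentation."""
    return (ct_slice >= t_low) & (ct_slice <= t_high)
```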

  10. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its applications to vision-augmented interactive games. We are especially interested in real-time, low-cost vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, which is defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games, in which a player can immerse himself/herself inside a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding, such as MPEG-4 video coding.
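    The pixel-level MAP decision can be sketched for a single pixel with one Gaussian per class, which is a deliberate simplification of the paper's statistical models: the label maximizing log-likelihood plus log-prior wins.

```python
import numpy as np

def map_label(pixel, bg_mean, bg_var, fg_mean, fg_var, p_fg=0.5):
    """MAP choice between background (0) and foreground/player (1)
    for one pixel, each class modeled here as a single Gaussian."""
    def log_gauss(x, m, v):
        return -0.5 * np.log(2 * np.pi * v) - (x - m) ** 2 / (2 * v)
    log_bg = log_gauss(pixel, bg_mean, bg_var) + np.log(1 - p_fg)
    log_fg = log_gauss(pixel, fg_mean, fg_var) + np.log(p_fg)
    return 1 if log_fg > log_bg else 0
```

The active-region idea then simply raises `p_fg` where motion history predicts the player, so the same decision rule becomes spatially adaptive.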

  11. Dynamical simulation of E-ELT segmented primary mirror

    NASA Astrophysics Data System (ADS)

    Sedghi, B.; Muller, M.; Bauvir, B.

    2011-09-01

    The dynamical behavior of the primary mirror (M1) has an important impact on the control of the segments and the performance of the telescope. Control of large segmented mirrors with a large number of actuators and sensors and multiple control loops in real life is a challenging problem. In virtual life, modeling, simulation and analysis of the M1 bears similar difficulties and challenges. In order to capture the dynamics of the segment subunits (high frequency modes) and the telescope back structure (low frequency modes), high order dynamical models with a very large number of inputs and outputs need to be simulated. In this paper, different approaches for dynamical modeling and simulation of the M1 segmented mirror subject to various perturbations, e.g. sensor noise, wind load, vibrations, earthquake are presented.

  12. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
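    The overlap measure used throughout this evaluation, the Dice similarity coefficient, is DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B; a direct sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|); 1.0 for identical
    masks, 0.0 for disjoint ones."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```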

  13. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. This study aimed to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175

  14. Importance of fishing as a segmentation variable in the application of a social worlds model

    USGS Publications Warehouse

    Gigliotti, Larry M.; Chase, Loren

    2017-01-01

    Market segmentation is useful for understanding and classifying the diverse range of outdoor recreation experiences sought by different recreationists. Although many different segmentation methodologies exist, many are complex and difficult to measure accurately during in-person intercepts, such as creel surveys. To address that gap in the literature, we propose a single-item measure of the importance of fishing as a surrogate for often overly or needlessly complex segmentation techniques. The importance-of-fishing item is a measure of the value anglers place on the activity, or a coarse quantification of how central the activity is to the respondent's lifestyle (scale: 0 = not important, 1 = slightly, 2 = moderately, 3 = very, and 4 = fishing is my most important recreational activity). We suggest the importance scale may be a proxy measurement for segmenting anglers using the social worlds model as a theoretical framework. Vaske (1980) suggested that commitment to recreational activities may be best understood in relation to social group participation, and the social worlds model provides a rich theoretical framework for understanding social group segments. Unruh (1983) identified four types of actor involvement in social worlds: strangers, tourists, regulars, and insiders, differentiated by four characteristics (orientation, experiences, relationships, and commitment). We evaluated the importance of fishing as a segmentation variable using data collected by a mixed-mode survey of South Dakota anglers fishing in 2010. We contend that this straightforward measurement may be useful for segmenting outdoor recreation activities when more complicated segmentation schemes are not suitable. Further, this index, when coupled with the social worlds model, provides a valuable framework for understanding the segments and making management decisions.

  15. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
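    The GMM-based spatiotemporal modeling step can be sketched with scikit-learn on toy six-dimensional features (synthetic data; the frame-saliency weighting and the modified EM algorithm of the paper are omitted here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for six-dimensional spatiotemporal features
# (e.g. x, y, t plus three color channels); values are synthetic
# and drawn from two well-separated clusters.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 1, (300, 6)),
                   rng.normal(5, 1, (300, 6))])

# Fitting a GMM and reading off the component responsibilities is
# the baseline segmentation: each mixture component plays the role
# of one homogeneous spatiotemporal "building block".
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
labels = gmm.predict(feats)
```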

  16. Breaststroke swimmers moderate internal work increases toward the highest stroke frequencies.

    PubMed

    Lauer, Jessy; Olstad, Bjørn Harald; Minetti, Alberto Enrico; Kjendlie, Per-Ludvik; Rouard, Annie Hélène

    2015-09-18

    A model to predict the mechanical internal work of breaststroke swimming was designed, allowing us to explore the frequency-internal work relationship in aquatic locomotion. Its accuracy was checked against internal work values calculated from kinematic sequences of eight participants swimming at three different self-chosen paces. Model predictions closely matched experimental data (0.58 ± 0.07 vs 0.59 ± 0.05 J kg(-1)m(-1); t(23)=-0.30, P=0.77), which was reflected in a slope of the major axis regression between measured and predicted total internal work whose 95% confidence intervals included the value of 1 (β=0.84, [0.61, 1.07], N=24). The model shed light on swimmers' ability to moderate the increase in internal work at high stroke frequencies. This strategy of energy minimization has never been observed before in humans, but is present in quadrupedal and octopedal animal locomotion. It was achieved through a reduced angular excursion of the heaviest segments (7.2 ± 2.9° and 3.6 ± 1.5° for the thighs and trunk, respectively, P<0.05) in favor of the lightest ones (8.8 ± 2.3° and 7.4 ± 1.0° for the shanks and forearms, respectively, P<0.05). A deeper understanding of the energy flow between the body segments and the environment is required to ascertain the possible dependency between internal and external work. This will prove essential to better understand swimming mechanical cost determinants and power generation in aquatic movements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Determination of glucose in a biological matrix by multivariate analysis of multiple band-pass-filtered Fourier transform near-infrared interferograms.

    PubMed

    Mattu, M J; Small, G W; Arnold, M A

    1997-11-15

    A multivariate calibration method is described in which Fourier transform near-infrared interferogram data are used to determine clinically relevant levels of glucose in an aqueous matrix of bovine serum albumin (BSA) and triacetin. BSA and triacetin are used to model the protein and triglycerides in blood, respectively, and are present in levels spanning the normal human physiological range. A full factorial experimental design is constructed for the data collection, with glucose at 10 levels, BSA at 4 levels, and triacetin at 4 levels. Gaussian-shaped band-pass digital filters are applied to the interferogram data to extract frequencies associated with an absorption band of interest. Separate filters of various widths are positioned on the glucose band at 4400 cm-1, the BSA band at 4606 cm-1, and the triacetin band at 4446 cm-1. Each filter is applied to the raw interferogram, producing one, two, or three filtered interferograms, depending on the number of filters used. Segments of these filtered interferograms are used together in a partial least-squares regression analysis to build glucose calibration models. The optimal calibration model is realized by use of separate segments of interferograms filtered with three filters centered on the glucose, BSA, and triacetin bands. Over the physiological range of 1-20 mM glucose, this 17-term model exhibits values of R2, standard error of calibration, and standard error of prediction of 98.85%, 0.631 mM, and 0.677 mM, respectively. These results are comparable to those obtained in a conventional analysis of spectral data. The interferogram-based method operates without the use of a separate background measurement and employs only a short section of the interferogram.
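    The Gaussian band-pass filtering of interferograms described above can be sketched in the Fourier domain: the interferogram's spectrum is weighted by a Gaussian centered on the analyte band (e.g. 4400 cm-1 for glucose) and transformed back. The frequency axis and filter width below are illustrative, not the paper's settings:

```python
import numpy as np

def gaussian_bandpass(interferogram, nu_axis, center, fwhm):
    """Apply a Gaussian band-pass filter to an interferogram by
    weighting its (real-input) Fourier spectrum around `center`
    on the caller-supplied frequency axis, then transforming back.
    Segments of the filtered interferogram would feed the PLS
    regression step."""
    spec = np.fft.rfft(interferogram)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    weight = np.exp(-0.5 * ((nu_axis - center) / sigma) ** 2)
    return np.fft.irfft(spec * weight, n=len(interferogram))
```

A component at the filter center passes essentially unchanged while components far outside the pass band are suppressed, which is how the glucose, BSA and triacetin bands are isolated without a separate background measurement.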

  18. Optimal Body Size and Limb Length Ratios Associated with 100-m Personal-Best Swim Speeds.

    PubMed

    Nevill, Alan M; Oxford, Samuel W; Duncan, Michael J

    2015-08-01

    This study aims to identify optimal body size and limb segment length ratios associated with 100-m personal-best (PB) swim speeds in children and adolescents. Fifty national-standard youth swimmers (21 males and 29 females aged 11-16 yr; mean ± SD age, 13.5 ± 1.5 yr) participated in the study. Anthropometry comprised stature; body mass; skinfolds; maturity offset; upper arm, lower arm, and hand lengths; and upper leg, lower leg, and foot lengths. Swimming performance was taken as the PB time recorded in competition for the 100-m freestyle swim. To identify the optimal body size and body composition components associated with 100-m PB swim speeds (having controlled for age and maturity offset), we adopted a multiplicative allometric log-linear regression model, which was refined using backward elimination. Lean body mass was the single most important whole-body characteristic. Stature and body mass did not contribute to the model, suggesting that the advantage of longer levers was limb-specific rather than a general whole-body advantage. The allometric model also identified that having greater limb segment length ratios [i.e., arm ratio = (lower arm)/(upper arm); foot-to-leg ratio = (foot)/(lower leg)] was key to PB swim speeds. It is only by adopting multiplicative allometric models that the above-mentioned ratios could have been derived. The advantage of having a greater lower arm is clear; however, having a shorter upper arm (achieved by adopting a closer elbow angle technique or by possessing a naturally endowed shorter upper arm) at the same time is a new insight into swimming performance. A greater foot-to-lower-leg ratio suggests that a combination of larger feet and shorter lower leg length may also benefit PB swim speeds.
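    A multiplicative allometric model such as speed = a · LBM^b · (arm ratio)^c becomes linear after taking logarithms, which is why log-linear regression can estimate the exponents, and why ratio terms like (lower arm)/(upper arm) fall out naturally as differences of log-lengths. The sketch below uses synthetic data with assumed exponents purely to show the mechanics, not the study's data or coefficients:

```python
import numpy as np

# Synthetic allometric data: speed = 0.3 * LBM^0.4 * ratio^0.8 * noise.
rng = np.random.default_rng(2)
n = 100
lbm = rng.uniform(30, 60, n)            # lean body mass, kg (assumed)
arm_ratio = rng.uniform(0.9, 1.2, n)    # lower arm / upper arm (assumed)
speed = 0.3 * lbm ** 0.4 * arm_ratio ** 0.8 * np.exp(rng.normal(0, 0.01, n))

# Log-transform turns the multiplicative model into ordinary
# least squares: log(speed) = log(a) + b*log(LBM) + c*log(ratio).
X = np.column_stack([np.ones(n), np.log(lbm), np.log(arm_ratio)])
coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
# coef[1] and coef[2] recover the assumed exponents (~0.4 and ~0.8).
```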

  19. Changes in river water temperature between 1980 and 2012 in Yongan watershed, eastern China: Magnitude, drivers and models

    NASA Astrophysics Data System (ADS)

    Chen, Dingjiang; Hu, Minpeng; Guo, Yi; Dahlgren, Randy A.

    2016-02-01

    Climate warming is expected to have major impacts on river water quality, water column/hyporheic zone biogeochemistry and aquatic ecosystems. A quantitative understanding of spatio-temporal air (Ta) and water (Tw) temperature dynamics is required to guide river management and to facilitate adaptations to climate change. This study determined the magnitude, drivers and models for increasing Tw in three river segments of the Yongan watershed in eastern China. Over the 1980-2012 period, Tw in the watershed increased by 0.029-0.046 °C yr-1 due to a ∼0.050 °C yr-1 increase in Ta and changes in local human activities (e.g., increasing developed land and population density and decreasing forest area). A standardized multiple regression model was developed for predicting annual Tw (R2 = 0.88-0.91) and for identifying and partitioning the impact of the principal drivers on increasing Tw: Ta (76 ± 1%), local human activities (14 ± 2%), and water discharge (10 ± 1%). After normalizing water discharge, climate warming and local human activities were estimated to contribute 81-95% and 5-19% of the observed rising Tw, respectively. Models forecast a 0.32-1.76 °C increase in Tw by 2050 compared with the 2000-2012 baseline condition, based on four future scenarios. Heterogeneity of warming rates existed across seasons and river segments, with the lower-flow river and the dry season demonstrating a more pronounced response to climate warming and human activities. Rising Tw due to changes in climate, local human activities and hydrology has considerable potential to aggravate river water quality degradation and coastal water eutrophication in summer, and thus should be carefully considered in developing watershed management strategies in response to climate change.

  20. A combined learning algorithm for prostate segmentation on 3D CT images.

    PubMed

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2017-11-01

    Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process. Because of inter-patient variation, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on data from a group of prostate patients, and a patient-specific model based on data from the individual patient, incorporating information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge and compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, thus completing the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
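    The final step above, converting a likelihood image into a binary prostate mask, needs a data-driven threshold. The paper's adaptive threshold is not specified here, so the sketch below substitutes the classic Otsu criterion (maximize between-class variance) on a synthetic likelihood image; all names and data are illustrative.

```python
import numpy as np

def otsu_threshold(likelihood, bins=256):
    """Pick the threshold maximizing between-class variance (Otsu).
    An illustrative stand-in for the paper's adaptive threshold."""
    hist, edges = np.histogram(likelihood.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 probability up to each bin
    w1 = 1.0 - w0
    cum_mu = np.cumsum(p * centers)
    mu0 = cum_mu / np.clip(w0, 1e-12, None)
    mu1 = (cum_mu[-1] - cum_mu) / np.clip(w1, 1e-12, None)
    between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
    return centers[np.argmax(between)]

# Toy likelihood image: bright "prostate" disc on a dark background
yy, xx = np.mgrid[0:64, 0:64]
likelihood = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 0.8, 0.1)
likelihood += np.random.default_rng(1).normal(0, 0.02, likelihood.shape)

t = otsu_threshold(np.clip(likelihood, 0, 1))
mask = likelihood > t                      # binary segmentation
```

Any histogram-based criterion adapts the cut-off to each image's likelihood distribution, which is the property the abstract's "adaptive threshold" relies on.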

  1. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
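    The adaptive, voxel-wise weighting idea can be illustrated with a simple surrogate: weight each modality by how peaked (confident) its class distribution is at each voxel. The entropy-based weight below is an illustrative stand-in for the paper's MDP coefficient, not its actual definition, and the probabilities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical per-voxel class probabilities from each modality
# (shape: voxels x classes, e.g. GM / WM / CSF)
p_pet = rng.dirichlet([1, 1, 1], size=100)
p_ct = rng.dirichlet([1, 1, 1], size=100)

def discriminatory_power(p):
    """Illustrative surrogate for a discriminatory-power coefficient:
    1 minus the normalized entropy of the class distribution at each voxel,
    so peaked (confident) distributions score near 1."""
    h = -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=1)
    return 1.0 - h / np.log(p.shape[1])

# Per-voxel PET weight; the more discriminative modality dominates
w = discriminatory_power(p_pet)
w = w / (w + discriminatory_power(p_ct))

p_combined = w[:, None] * p_pet + (1 - w[:, None]) * p_ct
labels = p_combined.argmax(axis=1)  # final per-voxel class
```

Because the combination is a convex mixture of two probability vectors, the combined output is still a valid class distribution at every voxel.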

  2. Modeling of market segmentation for new IT product development

    NASA Astrophysics Data System (ADS)

    Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda

    2015-02-01

    Businesses from all Information Technology sectors use market segmentation[1] in their product development[2] and strategic planning[3]. Many studies have concluded that market segmentation is the norm of modern marketing. With the rapid development of technology, customer needs are becoming increasingly diverse. These needs can no longer be satisfied by a one-size-fits-all mass marketing approach. IT businesses can cope with this diversity by pooling customers[4] with similar requirements, buying behavior and purchasing power into segments. Better choices can then be made about which segments are the most appropriate to serve, thus making the best use of finite resources. Despite the attention segmentation attracts and the resources invested in it, growing evidence suggests that businesses have problems operationalizing segmentation[5]. These problems take various forms. It is often assumed that the segmentation process necessarily results in homogeneous groups of customers for whom appropriate marketing programs and procedures can be developed; in practice, the segmentation process that a company follows can fail. This raises concerns about what causes segmentation failure and how it might be overcome. To help prevent such failure, we created a dynamic simulation model of market segmentation[6] based on the basic factors leading to this segmentation.

  3. Conceptual model of consumer’s willingness to eat functional foods

    PubMed

    Babicz-Zielinska, Ewa; Jezewska-Zychowicz, Maria

    Functional foods constitute an important segment of the food market. Among the factors that determine intentions to eat functional foods, psychological factors play a very important role; motives, attitudes and personality are key. The relationships between socio-demographic characteristics, attitudes and willingness to purchase functional foods have not been fully confirmed. Consumers' beliefs about the health benefits of the foods they eat appear to be a strong determinant of the choice of functional foods. The objective of this study was to determine the relations between familiarity, attitudes, and beliefs about the benefits and risks of functional foods, and to develop conceptual models of willingness to eat them. The sample of Polish consumers comprised 1002 subjects aged 15+. Foods enriched with vitamins or minerals, and cholesterol-lowering margarines or drinks, were considered. A questionnaire covering familiarity with the foods, attitudes, and beliefs about the benefits and risks of their consumption was constructed. Pearson's correlations and linear regression equations were calculated. The strongest relations appeared between attitudes and perceived high health value and high benefits (r = 0.722 and 0.712 for enriched foods, and 0.664 and 0.693 for cholesterol-lowering foods), and between high health value and high benefits (0.814 for enriched foods and 0.758 for cholesterol-lowering foods). Conceptual models based on linear regressions of attitudes on all other variables, with and without familiarity with the foods, were developed. Positive attitudes and declared consumption are more important for enriched foods. Beliefs in high health value and high benefits play the most important role in purchase decisions. The interrelations between the variables may be described by linear regression models, with beliefs in high benefits, positive attitudes and familiarity being the most significant predictors. Health expectations and trust in functional foods are the key factors in their choice.

  4. Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei

    2013-03-01

    An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1%+/-2.3% and 83.6%+/-7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  5. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    NASA Astrophysics Data System (ADS)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of the lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Second, for ease of implementation, it is desirable to apply a well-established model that is widely used for various image-partitioning tasks. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of lung fields. With this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly selected 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology, and the results are shown. We hope our segmentation technique can help promote CAD tools, especially for emerging chest radiographic imaging techniques such as dual-energy radiography and chest tomosynthesis.
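    The Chan-Vese model partitions an image into two regions whose mean intensities best explain the pixels. Dropping the contour-length penalty leaves just its piecewise-constant data term, which the short sketch below iterates on a synthetic two-lung image. This is a deliberate simplification for illustration, not the full level-set formulation used in the paper.

```python
import numpy as np

def two_phase_means(img, n_iter=20):
    """Minimal piecewise-constant two-phase segmentation: the data term
    of the Chan-Vese model, without the contour-length penalty."""
    phi = img > img.mean()              # initial partition
    for _ in range(n_iter):
        c1 = img[phi].mean()            # mean inside the contour
        c2 = img[~phi].mean()           # mean outside the contour
        new_phi = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_phi, phi):
            break                       # converged
        phi = new_phi
    return phi

# Toy "radiograph": two dark lung-like regions on a bright background
img = np.full((64, 64), 0.8)
img[10:50, 8:28] = 0.2
img[10:50, 36:56] = 0.2
img += np.random.default_rng(2).normal(0, 0.05, img.shape)

mask = ~two_phase_means(img)            # dark regions -> lung-field mask
```

The full Chan-Vese model adds a curvature/length term that smooths the contour, which is what makes it robust on real radiographs with broken edges.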

  6. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  7. False Positive Stress Testing: Does Endothelial Vascular Dysfunction Contribute to ST-Segment Depression in Women? A Pilot Study.

    PubMed

    Sharma, Shilpa; Mehta, Puja K; Arsanjani, Reza; Sedlak, Tara; Hobel, Zachary; Shufelt, Chrisandra; Jones, Erika; Kligfield, Paul; Mortara, David; Laks, Michael; Diniz, Marcio; Bairey Merz, C Noel

    2018-06-19

    The utility of exercise-induced ST-segment depression for diagnosing ischemic heart disease (IHD) in women is unclear. Based on evidence that IHD pathophysiology in women involves coronary vascular dysfunction, we hypothesized that coronary vascular dysfunction contributes to exercise electrocardiography (Ex-ECG) ST-depression in the absence of obstructive CAD, so-called "false positive" results. We tested our hypothesis in a pilot study evaluating the relationship between peripheral vascular endothelial function and Ex-ECG. Twenty-nine asymptomatic women without cardiac risk factors underwent maximal Bruce protocol exercise treadmill testing and peripheral endothelial function assessment using peripheral arterial tonometry (Itamar EndoPAT 2000) to measure the reactive hyperemia index (RHI). The relationship between RHI and Ex-ECG ST-segment depression was evaluated using logistic regression, and differences in subgroups using two-tailed t-tests. Mean age was 54 ± 7 years, body mass index 25 ± 4 kg/m2, and RHI 2.51 ± 0.66. Three women (10%) had RHI less than 1.68, consistent with abnormal peripheral endothelial function, while 18 women (62%) met criteria for a positive Ex-ECG based on ST-segment depression in contiguous leads. Women with and without ST-segment depression had similar baseline and exercise vital signs, metabolic equivalents (METS) achieved, and RHI (all p>0.05). RHI did not predict ST-segment depression. Our pilot study demonstrates a high prevalence of exercise-induced ST-segment depression in asymptomatic, middle-aged, overweight women. Peripheral vascular endothelial dysfunction did not predict Ex-ECG ST-segment depression. Further work is needed to investigate the utility of vascular endothelial testing and Ex-ECG for IHD diagnostic and management purposes in women. This article is protected by copyright. All rights reserved.

  8. The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images.

    PubMed

    Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M

    2017-05-01

    To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R2 of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between the three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
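    The Az figures above are areas under ROC curves. Az can be computed directly from classifier scores via the Mann-Whitney relationship (the probability that a random positive case scores higher than a random negative case), as in this self-contained sketch; the scores and labels are synthetic, not the study's data.

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """Area under the ROC curve (the Az statistic) via the
    Mann-Whitney relationship: P(score_pos > score_neg), ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(3)
labels = np.array([0] * 50 + [1] * 50)
# Hypothetical classifier scores: malignant lesions score higher on average
scores = np.concatenate([rng.normal(0.4, 0.15, 50),
                         rng.normal(0.7, 0.15, 50)])
az = auc_mann_whitney(scores, labels)
```

This rank-based formulation is threshold-free, which is why Az is a convenient single-number comparison between the automated and manual pipelines.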

  9. Dance Class Structure Affects Youth Physical Activity and Sedentary Behavior: A Study of Seven Dance Types.

    PubMed

    Lopez Castillo, Maria A; Carlson, Jordan A; Cain, Kelli L; Bonilla, Edith A; Chuang, Emmeline; Elder, John P; Sallis, James F

    2015-01-01

    The study aims were to determine: (a) how class structure varies by dance type, (b) how moderate-to-vigorous physical activity (MVPA) and sedentary behavior vary by dance class segments, and (c) how class structure relates to total MVPA in dance classes. Participants were 291 boys and girls ages 5 to 18 years enrolled in 58 dance classes at 21 dance studios in Southern California. MVPA and sedentary behavior were assessed with accelerometry, with data aggregated to 15-s epochs. Percent and minutes of MVPA and sedentary behavior during dance class segments, and percent of class time and minutes spent in each segment, were calculated using Freedson age-specific cut points. Differences in MVPA (Freedson age-specific cut points at 3 metabolic equivalents of task) and sedentary behavior (<100 counts/min) were examined using mixed-effects linear regression. The length of each class segment was fairly consistent across dance types, with the exception that in ballet, more time was spent in technique than in private jazz/hip-hop classes and Latin-flamenco, and less time was spent in routine/practice than in Latin-salsa/ballet folklorico. Segment type accounted for 17% of the variance in the proportion of the segment spent in MVPA. The proportion of the segment in MVPA was higher for routine/practice (44.2%) than for technique (34.7%). The proportion of the segment in sedentary behavior was lowest for routine/practice (22.8%). The structure of dance lessons can impact youths' physical activity. Working with instructors to increase time in routine/practice during dance classes could contribute to physical activity promotion in youth.

  10. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

    Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of its key and most difficult sub-problems. Most existing methods for Tibetan abbreviated word recognition are rule-based approaches, which require vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can easily be combined with the segmentation model, significantly improving the performance of Tibetan word segmentation.

  11. Improved assessment of body cell mass by segmental bioimpedance analysis in malnourished subjects and acromegaly.

    PubMed

    Pirlich, M; Schütz, T; Ockenga, J; Biering, H; Gerl, H; Schmidt, B; Ertl, S; Plauth, M; Lochs, H

    2003-04-01

    Estimation of body cell mass (BCM) has been regarded as valuable for the assessment of malnutrition. The aim was to investigate the value of segmental bioelectrical impedance analysis (BIA) for BCM estimation in malnourished subjects and acromegaly. Nineteen controls and 63 patients with either reduced (liver cirrhosis without and with ascites, Cushing's disease) or increased BCM (acromegaly) were included. Whole-body and segmental BIA (separately measuring arm, trunk and leg) at 50 kHz was compared with BCM measured by total-body potassium. Multiple regression analysis was used to develop specific equations for BCM in each subgroup. Compared to whole-body BIA equations, the inclusion of arm resistance improved the specific equation in cirrhotic patients without ascites and in Cushing's disease, resulting in excellent prediction of BCM (R2 = 0.93 and 0.92, respectively; both P<0.001). In acromegaly, inclusion of resistance and reactance of the trunk best described BCM (R2 = 0.94, P<0.001). In controls and in cirrhotic patients with ascites, segmental impedance parameters did not improve BCM prediction (best values obtained by whole-body measurements: R2 = 0.88 and 0.60; P<0.001 and P<0.003, respectively). Segmental BIA improves the assessment of BCM in malnourished patients and acromegaly, but not in patients with severe fluid overload. Copyright 2003 Elsevier Science Ltd.

  12. Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.

    PubMed

    Spinczyk, Dominik; Krasoń, Agata

    2018-05-29

    Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria; one important criterion for method selection is the shape representation of the segmented organ. The aim of this work is automatic liver segmentation using general-purpose shape modeling methods. As part of the research, methods based on shape information at various levels of sophistication were used. Single-atlas segmentation was used as the simplest shape-based method; it deforms a single atlas using free-form deformation of control-point curves. Subsequently, the classic and a modified Active Shape Model (ASM) were used, employing mean body shape models. As the most advanced and main method, generalized statistical shape models (Gaussian Process Morphable Models) were used, which are based on multi-dimensional Gaussian distributions of the shape deformation field. Mutual information and the sum of squared distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, in the 10 analyzed cases the Dice coefficient was above 55% for seven test images, and above 70% for three of them, which placed the method in second place. The best results were obtained for the generalized statistical model of the deformation field, with a Dice coefficient of 88.5%. This value can be explained by the use of general-purpose shape modeling methods on an organ (the liver) with a large shape variance, together with the limited size of our training data set of 10 cases. The results obtained with the presented fully automatic method are comparable with dedicated methods for liver segmentation. In addition, the deformation features of the model can be modeled mathematically using various kernel functions, which allows the liver to be segmented at a comparable level using a smaller training set.

  13. Rapid prediction of single green coffee bean moisture and lipid content by hyperspectral imaging.

    PubMed

    Caporaso, Nicola; Whitworth, Martin B; Grebby, Stephen; Fisk, Ian D

    2018-06-01

    Hyperspectral imaging (1000-2500 nm) was used for rapid prediction of moisture and total lipid content in intact green coffee beans on a single bean basis. Arabica and Robusta samples from several growing locations were scanned using a "push-broom" system. Hypercubes were segmented to select single beans, and average spectra were measured for each bean. Partial Least Squares regression was used to build quantitative prediction models on single beans (n = 320-350). The models exhibited good performance and acceptable prediction errors of ∼0.28% for moisture and ∼0.89% for lipids. This study represents the first time that HSI-based quantitative prediction models have been developed for coffee, and specifically green coffee beans. In addition, this is the first attempt to build such models using single intact coffee beans. The composition variability between beans was studied, and fat and moisture distribution were visualized within individual coffee beans. This rapid, non-destructive approach could have important applications for research laboratories, breeding programmes, and for rapid screening for industry.
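    Partial Least Squares regression finds latent components that maximize the covariance between the spectra and the trait being predicted. A minimal PLS1 (NIPALS) sketch is below, with synthetic "spectra" standing in for the hyperspectral bean data; all names and numbers are illustrative.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via the NIPALS algorithm (single response).
    Returns regression coefficients for mean-centered X and y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                  # covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                     # scores
        tt = t @ t
        p = Xc.T @ t / tt              # X loadings
        q = (yc @ t) / tt              # y loading
        Xc = Xc - np.outer(t, p)       # deflate X
        yc = yc - q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q

rng = np.random.default_rng(4)
# Toy "spectra": 200 beans x 50 wavelengths; moisture drives two bands
X = rng.normal(0, 1, (200, 50))
moisture = X[:, 5] * 0.7 + X[:, 20] * 0.3 + rng.normal(0, 0.05, 200)

B = pls1_fit(X, moisture, n_components=3)
pred = (X - X.mean(axis=0)) @ B + moisture.mean()
r2 = 1 - ((moisture - pred) ** 2).sum() / ((moisture - moisture.mean()) ** 2).sum()
```

Unlike ordinary least squares, PLS stays stable when the predictors (adjacent wavelengths) are highly collinear, which is why it is the standard workhorse for spectral calibration.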

  14. Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging.

    PubMed

    Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C

    2010-06-01

    We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms only the BCFCM; however, it is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation. Copyright 2009 Elsevier Ltd. All rights reserved.
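    The EM half of the hybrid can be illustrated on 1-D intensities: fit a three-class Gaussian mixture with EM and label each voxel by its most responsible component. This sketch covers only the EM model, not the PCNN coupling or the adaptive parameter tuning; all intensities are synthetic.

```python
import numpy as np

def em_gmm_1d(x, k=3, n_iter=50):
    """EM for a 1-D Gaussian mixture: labels samples (voxel intensities)
    by their most responsible component."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out init
    sigma = np.full(k, x.std() / k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities (unnormalized Gaussian densities suffice)
        d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi, r.argmax(axis=1)

rng = np.random.default_rng(5)
# Toy intensity clusters standing in for CSF, gray matter, white matter
x = np.concatenate([rng.normal(0.2, 0.03, 300),
                    rng.normal(0.5, 0.03, 300),
                    rng.normal(0.8, 0.03, 300)])
mu, sigma, pi, labels = em_gmm_1d(x, k=3)
```

In the hybrid method, such an EM fit scores candidate PCNN segmentations and steers the PCNN parameters; here it simply demonstrates the statistical tissue model on its own.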

  15. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  16. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  17. Myocardial Infarct Segmentation from Magnetic Resonance Images for Personalized Modeling of Cardiac Electrophysiology

    PubMed Central

    Ukwatta, Eranga; Arevalo, Hermenegild; Li, Kristina; Yuan, Jing; Qiu, Wu; Malamas, Peter; Wu, Katherine C.

    2016-01-01

    Accurate representation of myocardial infarct geometry is crucial to patient-specific computational modeling of the heart in ischemic cardiomyopathy. We have developed a methodology for segmentation of left ventricular (LV) infarct from clinically acquired, two-dimensional (2D), late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) images, for personalized modeling of ventricular electrophysiology. The infarct segmentation was expressed as a continuous min-cut optimization problem, which was solved using its dual formulation, the continuous max-flow (CMF). The optimization objective comprised a smoothness term and a data term that quantified the similarity between image intensity histograms of segmented regions and those of a set of training images. A manual segmentation of the LV myocardium was used to initialize and constrain the developed method. The three-dimensional geometry of the infarct was reconstructed from its segmentation using an implicit, shape-based interpolation method. The proposed methodology was extensively evaluated using metrics based on geometry and on outcomes of individualized electrophysiological simulations of cardiac dysfunction. Several existing LV infarct segmentation approaches were implemented and compared with the proposed method. Our results demonstrated that the CMF method was more accurate than the existing approaches in reproducing expert manual LV infarct segmentations, and in electrophysiological simulations. The infarct segmentation method we have developed and comprehensively evaluated in this study constitutes an important step in advancing clinical applications of personalized simulations of cardiac electrophysiology. PMID:26731693

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rueegsegger, Michael B.; Bach Cuadra, Meritxell; Pica, Alessia

    Purpose: Ocular anatomy and radiation-associated toxicities provide unique challenges for external beam radiation therapy. For treatment planning, precise modeling of organs at risk and tumor volume are crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors. Methods and Materials: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera as well as of the optic disc position was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed based on leave-one-out tests for all training shapes by measuring dice coefficients and mean segmentation errors between automatic segmentation and manual segmentation by an expert. Results: Cross-validation revealed a dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. Overall, mean segmentation error was found to be 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer. Conclusions: Our results show that the solution presented outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the eye model shape as well as its variability is learned from a training set rather than by making shape assumptions (eg, as with the spherical or elliptical model). Therefore, the model appears to be capable of modeling nonspherically and nonelliptically shaped eyes.

  19. Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping

    NASA Astrophysics Data System (ADS)

    Ignakov, Dmitri

    A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.

  20. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
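
    The first, univariate method amounts to the Tukey fences of a box-whisker plot. A minimal sketch (the per-scan feature values and the conventional 1.5 multiplier are illustrative, not taken from the paper):

```python
import numpy as np

def boxplot_outliers(values, k=1.5):
    """Univariate box-whisker (Tukey fence) outlier detection.

    values: one quality feature per scan (e.g. segmented volume).
    k:      fence multiplier; 1.5 is the usual box-plot convention.
    """
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    # A scan is flagged as a likely failure if its feature falls
    # outside [q1 - k*iqr, q3 + k*iqr].
    return (v < q1 - k * iqr) | (v > q3 + k * iqr)
```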

  1. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
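
    The paper's central finding, that neighborhood-aware features matter more than the classifier, can be illustrated with a minimal sketch on synthetic data (the function names, the toy single-channel image, and the two-feature design are ours, not the authors'):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def neighborhood_mean(img, r=1):
    """Mean intensity over a (2r+1) x (2r+1) window, via shifted views."""
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    shifts = [p[dy:dy + h, dx:dx + w]
              for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
    return np.mean(shifts, axis=0)

def voxel_features(img):
    """Per-voxel feature vector: raw intensity plus local neighborhood mean."""
    return np.stack([img.ravel(), neighborhood_mean(img).ravel()], axis=1)

# Synthetic one-channel "scan": a bright lesion on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.3, (32, 32))
truth = np.zeros((32, 32), dtype=int)
truth[10:20, 10:20] = 1
img[truth == 1] += 1.5

clf = LogisticRegression(max_iter=1000).fit(voxel_features(img), truth.ravel())
accuracy = clf.score(voxel_features(img), truth.ravel())
```

    The local-mean feature averages away noise that a raw-intensity classifier would have to absorb, which is the kind of neighborhood information the study found to drive performance.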

  2. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.

  3. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.
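
    Several records in this listing report Dice similarity coefficients (DSC). For reference, the metric itself is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```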

  4. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected under different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted.
Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities, giving excellent quantitative results.

  5. Space Shuttle Five-Segment Booster (Short Course)

    NASA Technical Reports Server (NTRS)

    Graves, Stanley R.; Rudolphi, Michael (Technical Monitor)

    2002-01-01

    NASA is considering upgrading the Space Shuttle by adding a fifth segment (FSB) to the current four-segment solid rocket booster. Course materials cover design and engineering issues related to the Reusable Solid Rocket Motor (RSRM) raised by the addition of a fifth segment to the rocket booster. Topics cover include: four segment vs. five segment booster, abort modes, FSB grain design, erosive burning, enhanced propellant burn rate, FSB erosive burning model development and hardware configuration.

  6. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied on several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) quality of segmentation and thereby demonstrate the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
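
    The panoramic representation exploits the scanner's fixed angular increments: each point maps to a (row, column) cell by its elevation and azimuth. A minimal sketch (the function name and the 1° resolution are illustrative assumptions):

```python
import numpy as np

def scan_to_panorama(xyz, values, ang_res=np.deg2rad(1.0)):
    """Grid scan points onto a 2D panorama using the scan's angular structure.

    xyz:     (N, 3) points in the scanner frame.
    values:  (N,) per-point attribute layer (intensity, range, ...).
    ang_res: angular increment of the scanner (assumed here).
    """
    x, y, z = xyz.T
    az = np.arctan2(y, x)               # azimuth in [-pi, pi]
    el = np.arctan2(z, np.hypot(x, y))  # elevation in [-pi/2, pi/2]
    col = np.rint((az + np.pi) / ang_res).astype(int)
    row = np.rint((el + np.pi / 2) / ang_res).astype(int)
    rows = int(round(np.pi / ang_res)) + 1
    cols = int(round(2 * np.pi / ang_res)) + 1
    pano = np.full((rows, cols), np.nan)  # NaN marks cells with no return
    pano[row, col] = values
    return pano
```

    Stacking one such panorama per attribute (intensity, range, normals, color) yields the multi-layer image on which a 2D segmentation algorithm can run; segment labels map back to the points through the same (row, col) indices.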

  7. Hippocampus segmentation using locally weighted prior based level set

    NASA Astrophysics Data System (ADS)

    Achuthan, Anusha; Rajeswari, Mandava

    2015-12-01

    Segmentation of the hippocampus is one of the major challenges in medical image segmentation due to its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures such as the amygdala. This intensity similarity causes the hippocampus to have weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results. Prior information such as shape and spatial information therefore needs to be assimilated into existing segmentation methods to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been integrated in a global manner, which does not reflect the real scenario of clinical delineation. Therefore, in this paper, prior information is integrated locally into a level set model. This work utilizes a mean shape model to provide automatic initialization for level set evolution, and the model has been integrated as prior information into the level set. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during level set evolution. The edge weighting map indicates which voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as the geodesic active contour, improves the average Dice coefficient by 9%.

  8. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistic features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF with local statistic features and a signed pressure function with global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original methods and obtains desirable results in segmenting infrared images.
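
    For orientation, a single-feature, global variant of one signed-pressure-function level set step is sketched below; the paper's MFSPF adds local statistics and adaptive weights, which are not reproduced here, and the regularization step is omitted:

```python
import numpy as np

def spf_level_set_step(phi, img, dt=0.5, alpha=1.0):
    """One evolution step driven by a global signed pressure function.

    phi: level set function (contour is the zero level set).
    img: intensity image.
    The SPF is positive where intensity exceeds the mean of the
    inside/outside region means, pushing the contour outward there.
    """
    inside = img[phi > 0]
    outside = img[phi <= 0]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    spf = img - (c1 + c2) / 2.0
    spf /= np.max(np.abs(spf)) + 1e-12  # normalize to [-1, 1]
    gy, gx = np.gradient(phi)
    return phi + dt * alpha * spf * np.sqrt(gx**2 + gy**2)
```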

  9. Infolding of fenestrated endovascular stent graft.

    PubMed

    Zelt, Jason G E; Jetty, Prasad; Hadziomerovic, Adnan; Nagpal, Sudhir

    2017-09-01

    We report a case of infolding of a fenestrated stent graft involving the visceral vessel segment after a juxtarenal abdominal aorta aneurysm repair. The patient remains free of any significant endoleak, and the aortic sac has shown regression. The patient remains asymptomatic, with no abdominal pain, with normal renal function, and without ischemic limb complications. We hypothesize that significant graft oversizing (20%-30%) with asymmetric engineering of the diameter-reducing ties may have contributed to the infolding. Because of the patient's asymptomatic nature and general medical comorbidities, further intervention was deemed inappropriate as the aneurysmal sac is regressing despite the infolding.

  10. A completely automated processing pipeline for lung and lung lobe segmentation and its application to the LIDC-IDRI data base

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta

    2010-03-01

    Automated segmentation of lung lobes in thoracic CT images has relevance for various diagnostic purposes like localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, both purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are the lung detector and the lung segmentation based on a watershed algorithm, and the lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets of the data base of the Image Database Resource Initiative (IDRI) that currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the single segmentation steps, with a success rate of 98% for the lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of the segmentation quality. For a demonstration of the segmentation method we studied the correlation between emphysema score and malignancy on a per-lobe basis.

  11. Segmenting lung fields in serial chest radiographs using both population-based and patient-specific shape statistics.

    PubMed

    Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D

    2008-04-01

    This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than the general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics, collected online from the previous segmentation results, gradually takes on a larger role. This patient-specific shape statistics is updated each time a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
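
    The gradual hand-over from population-based to patient-specific statistics can be sketched as a pseudo-count blend. The weighting rule below is our illustration, not the paper's formulation, which updates full shape statistics rather than just a mean:

```python
import numpy as np

def blended_shape_prior(pop_mean, patient_shapes, n0=5.0):
    """Blend a population mean shape with the patient's own history.

    pop_mean:       (P, 2) mean landmark positions from the population model.
    patient_shapes: list of (P, 2) segmentations from earlier time points.
    n0:             pseudo-count controlling how fast the patient
                    statistics take over (illustrative choice).
    """
    n = len(patient_shapes)
    if n == 0:
        return pop_mean  # no history yet: rely on the population prior
    w = n / (n + n0)     # more time points -> more weight on the patient
    patient_mean = np.mean(patient_shapes, axis=0)
    return (1 - w) * pop_mean + w * patient_mean
```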

  12. Simulation of nutrient and sediment concentrations and loads in the Delaware inland bays watershed: Extension of the hydrologic and water-quality model to ungaged segments

    USGS Publications Warehouse

    Gutierrez-Magness, Angelica L.

    2006-01-01

    Rapid population increases, agriculture, and industrial practices have been identified as important sources of excessive nutrients and sediments in the Delaware Inland Bays watershed. The amount and effect of excessive nutrients and sediments in the Inland Bays watershed have been well documented by the Delaware Geological Survey, the Delaware Department of Natural Resources and Environmental Control, the U.S. Environmental Protection Agency's National Estuary Program, the Delaware Center for Inland Bays, the University of Delaware, and other agencies. This documentation and data previously were used to develop a hydrologic and water-quality model of the Delaware Inland Bays watershed to simulate nutrients and sediment concentrations and loads, and to calibrate the model by comparing concentrations and streamflow data at six stations in the watershed over a limited period of time (October 1998 through April 2000). Although the model predictions of nutrient and sediment concentrations for the calibrated segments were fairly accurate, the predictions for the 28 ungaged segments located near tidal areas, where stream data were not available, were above the range of values measured in the area. The cooperative study established in 2000 by the Delaware Department of Natural Resources and Environmental Control, the Delaware Geological Survey, and the U.S. Geological Survey was extended to evaluate the model predictions in ungaged segments and to ensure that the model, developed as a planning and management tool, could accurately predict nutrient and sediment concentrations within the measured range of values in the area. The evaluation of the predictions was limited to the period of calibration (1999) of the 2003 model. To develop estimates on ungaged watersheds, parameter values from calibrated segments are transferred to the ungaged segments; however, accurate predictions are unlikely where parameter transference is subject to error. 
The unexpected nutrient and sediment concentrations simulated with the 2003 model were likely the result of inappropriate criteria for the transference of parameter values. From a model-simulation perspective, it is a common practice to transfer parameter values based on the similarity of soils or the similarity of land-use proportions between segments. For the Inland Bays model, the similarity of soils between segments was used as the basis to transfer parameter values. An alternative approach, which is documented in this report, is based on the similarity of the spatial distribution of the land use between segments and the similarity of land-use proportions, as these can be important factors for the transference of parameter values in lumped models. Previous work determined that the difference in the variation of runoff due to various spatial distributions of land use within a watershed can cause substantial loss of accuracy in the model predictions. The incorporation of the spatial distribution of land use to transfer parameter values from calibrated to uncalibrated segments provided more consistent and rational predictions of flow, especially during the summer, and consequently, predictions of lower nutrient concentrations during the same period. For the segments where the similarity of spatial distribution of land use was not clearly established with a calibrated segment, the similarity of the location of the most impervious areas was also used as a criterion for the transference of parameter values. The model predictions from the 28 ungaged segments were verified through comparison with measured in-stream concentrations from local and nearby streams provided by the Delaware Department of Natural Resources and Environmental Control. Model results indicated that the predicted edge-of-stream total suspended solids loads in the Inland Bays watershed were low in comparison to loads reported for the Eastern Shore of Maryland from the Chesapeake Bay watershed model. 
The flatness of the ter

  13. Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features.

    PubMed

    Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J

    2014-03-01

    Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results in larger data sets, and the methods based on generative or discriminative models have intrinsic limitations during application, such as small sample set learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images are segmented into superpixels using algorithms to alleviate the sampling issue and to improve the sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on the features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random fields theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture notes in computer science. vol 7512, pp 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates comparable performance with other state-of-the art algorithms.

  14. SU-C-207-07: Quantification of Coronary Artery Cross-Sectional Area in CT Angiography Using Integrated Density: A Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, T; Ding, H; Torabzadeh, M

    2015-06-15

    Purpose: To investigate the feasibility of quantifying the cross-sectional area (CSA) of coronary arteries using integrated density in a physics-based model with a phantom study. Methods: In this technique, the total integrated density of the object relative to its local background is measured, making it possible to account for the partial volume effect. The proposed method was compared to manual segmentation using CT scans of a 10 cm diameter Lucite cylinder placed inside a chest phantom. Holes with cross-sectional areas from 1.4 to 12.3 mm² were drilled into the Lucite and filled with iodine solution, producing a contrast-to-noise ratio of approximately 26. Lucite rods 1.6 mm in diameter were used to simulate plaques. The phantom was imaged with and without the Lucite rods placed in the holes to simulate diseased and normal arteries, respectively. Linear regression analysis was used, and the root-mean-square deviations (RMSD) and errors (RMSE) were computed to assess the precision and accuracy of the measurements. In the case of manual segmentation, two readers independently delineated the lumen in order to quantify the inter-reader variability. Results: The precision and accuracy for the normal vessels using the integrated density technique were 0.32 mm² and 0.32 mm², respectively. The corresponding results for the manual segmentation were 0.51 mm² and 0.56 mm². In the case of diseased vessels, the precision and accuracy of the integrated density technique were 0.46 mm² and 0.55 mm², respectively. The corresponding results for the manual segmentation were 0.75 mm² and 0.98 mm². The mean percent difference for the two readers was found to be 8.4%. Conclusion: The CSA based on integrated density had improved precision and accuracy as compared with manual segmentation in a Lucite phantom. The results indicate the potential for using integrated density to improve CSA measurements in CT angiography.
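    The precision/accuracy analysis described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the hole areas and measured values below are hypothetical stand-ins, and the convention assumed here is that RMSD (scatter about a least-squares fit) reflects precision while RMSE (deviation from the true values) reflects accuracy.

```python
import math

def rmsd_rmse(true_areas, measured_areas):
    """Fit measured vs. true CSA by least squares, then report RMSD about
    the fitted line (precision) and RMSE against the true values (accuracy)."""
    n = len(true_areas)
    mx = sum(true_areas) / n
    my = sum(measured_areas) / n
    sxx = sum((x - mx) ** 2 for x in true_areas)
    sxy = sum((x - mx) * (y - my) for x, y in zip(true_areas, measured_areas))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Precision: scatter of measurements about the regression line
    rmsd = math.sqrt(sum((y - (slope * x + intercept)) ** 2
                         for x, y in zip(true_areas, measured_areas)) / n)
    # Accuracy: deviation of measurements from the true values
    rmse = math.sqrt(sum((y - x) ** 2
                         for x, y in zip(true_areas, measured_areas)) / n)
    return rmsd, rmse

true_csa = [1.4, 2.8, 4.9, 7.1, 9.6, 12.3]   # drilled-hole areas, mm^2 (hypothetical)
measured = [1.7, 2.6, 5.3, 6.8, 9.9, 12.6]   # hypothetical measurements, mm^2
rmsd, rmse = rmsd_rmse(true_csa, measured)
```

    Since the least-squares line minimizes the residual scatter over all lines (including the identity), RMSD about the fit is never larger than RMSE against the truth.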

  15. Figure-Ground Segmentation Using Factor Graphs

    PubMed Central

    Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr

    2009-01-01

    Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994

  16. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features.

    PubMed

    Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin

    2008-11-01

    We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.

  17. Market Segmentation from a Behavioral Perspective

    ERIC Educational Resources Information Center

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  18. Impact of freeway weaving segment design on light-duty vehicle exhaust emissions.

    PubMed

    Li, Qing; Qiao, Fengxiang; Yu, Lei; Chen, Shuyan; Li, Tiezhu

    2018-06-01

    In the United States, 26% of greenhouse gas emissions are emitted by the transportation sector; these emissions are meanwhile accompanied by emissions of pollutants toxic to humans, such as carbon monoxide (CO), nitrogen oxides (NOx), and hydrocarbons (HC), which make up approximately 2.5% and 2.44% of total exhaust emissions for a petrol and a diesel engine, respectively. These exhaust emissions are typically subject to vehicles' intermittent operations, such as hard acceleration and hard braking. In practice, drivers are inclined to operate intermittently while driving through a weaving segment, due to the complex vehicle maneuvering required for weaving. As a result, the exhaust emissions within a weaving segment ought to vary from those on a basic segment. However, existing emission models usually rely on vehicle operation information and compute a generalized emission result, regardless of road configuration. This research proposes to explore the impacts of weaving segment configuration on vehicle emissions, identify important predictors for emission estimation, and develop a nonlinear normalized emission factor (NEF) model for weaving segments. An on-board emission test was conducted on 12 subjects on State Highway 288 in Houston, Texas. Vehicles' activity information, road conditions, and real-time exhaust emissions were collected by on-board diagnostics (OBD), a smartphone-based roughness app, and a portable emission measurement system (PEMS), respectively. Five feature selection algorithms were used to identify the important predictors for the response of NEF and the modeling algorithm. The predictive power of four algorithm-based emission models was tested by 10-fold cross-validation. Results showed that emissions are also susceptible to the type and length of a weaving segment. A bagged decision tree algorithm was chosen to develop a 50-grown-tree NEF model, which provided a validation error of 0.0051.
The estimated NEFs are highly correlated with the observed NEFs in the training data set as well as in the validation data set, with the R values of 0.91 and 0.90, respectively. Existing emission models usually rely on vehicle operation information to compute a generalized emission result, regardless of road configuration. In practice, while driving through a weaving segment, drivers are inclined to perform erratic maneuvers, such as hard braking and hard acceleration due to the complex weaving maneuver required. As a result, the exhaust emissions within a weaving segment vary from those on a basic segment. This research proposes to involve road configuration, in terms of the type and length of a weaving segment, in constructing an emission nonlinear model, which significantly improves emission estimations at a microscopic level.
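    The 10-fold cross-validation step used above to compare candidate models can be sketched as follows. This is a minimal illustration under stated assumptions: a trivial mean predictor stands in for the paper's bagged decision trees, and the NEF values are hypothetical.

```python
import statistics

def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) index lists for k-fold cross-validation."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

def cross_validated_error(y, k=10):
    """Mean squared validation error of a trivial mean predictor."""
    errors = []
    for train, test in k_fold_splits(len(y), k):
        prediction = statistics.mean(y[i] for i in train)  # "model" fit on training folds
        errors.extend((y[i] - prediction) ** 2 for i in test)
    return statistics.mean(errors)

# Hypothetical normalized emission factors (NEFs) for 20 trips
nefs = [0.12, 0.08, 0.15, 0.11, 0.09, 0.14, 0.10, 0.13, 0.07, 0.16,
        0.12, 0.11, 0.09, 0.15, 0.10, 0.08, 0.13, 0.14, 0.11, 0.12]
cv_error = cross_validated_error(nefs, k=10)
```

    Each sample is held out exactly once, so the validation error estimates how the model would perform on unseen trips.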

  19. A mathematical analysis to address the 6 degree-of-freedom segmental power imbalance.

    PubMed

    Ebrahimi, Anahid; Collins, John D; Kepple, Thomas M; Takahashi, Kota Z; Higginson, Jill S; Stanhope, Steven J

    2018-01-03

    Segmental power is used in human movement analyses to indicate the source and net rate of energy transfer between the rigid bodies of biomechanical models. Segmental power calculations are performed using segment endpoint dynamics (kinetic method). A theoretically equivalent method is to measure the rate of change in a segment's mechanical energy state (kinematic method). However, these two methods have not produced experimentally equivalent results for segments proximal to the foot, with the difference in methods deemed the "power imbalance." In a 6 degree-of-freedom model, segments move independently, resulting in relative segment endpoint displacement and non-equivalent segment endpoint velocities at a joint. In the kinetic method, a segment's distal end translational velocity may be defined either at the anatomical end of the segment or at the location of the joint center (defined here as the proximal end of the adjacent distal segment). Our mathematical derivations revealed that the power imbalance between the kinetic method using the anatomical definition and the kinematic method can be explained by power due to relative segment endpoint displacement. In this study, we tested this analytical prediction using experimental gait data from nine healthy subjects walking at a typical speed. The average absolute segmental power imbalance was reduced from 0.023–0.046 W/kg using the anatomical definition to ≤0.001 W/kg using the joint center definition in the kinetic method (a 95.56–98.39% reduction). Power due to relative segment endpoint displacement in segmental power analyses is substantial and should be considered in analyzing energetic flow into and between segments. Copyright © 2017 Elsevier Ltd. All rights reserved.
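    The key identity above, that the imbalance equals the power carried by the relative endpoint velocity, can be checked numerically. The force and velocity vectors below are hypothetical values for illustration only, not the study's data.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical distal-endpoint force (N) and two candidate endpoint velocities (m/s)
force = [15.0, -120.0, 4.0]            # force acting on the segment's distal end
v_anatomical = [0.42, 0.05, -0.01]     # velocity of the segment's anatomical endpoint
v_joint_center = [0.40, 0.04, -0.01]   # velocity of the adjacent segment's proximal end

# Kinetic-method translational power term under each endpoint definition
p_anatomical = dot(force, v_anatomical)
p_joint_center = dot(force, v_joint_center)

# Power due to relative segment endpoint displacement: the difference between
# the two definitions, which the derivation identifies as the power imbalance
p_relative = dot(force, [va - vj for va, vj in zip(v_anatomical, v_joint_center)])
```

    Because the dot product is linear in velocity, the difference of the two power terms is exactly the power associated with the relative endpoint velocity.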

  20. Reliability of upper and lower extremity anthropometric measurements and the effect on tissue mass predictions.

    PubMed

    Burkhart, Timothy A; Arthurs, Katherine L; Andrews, David M

    2008-01-01

    Accurate modeling of soft tissue motion effects relative to bone during impact requires knowledge of the mass of soft and rigid tissues in living people. Holmes et al. [2005. Predicting in vivo soft tissue masses of the lower extremity using segment anthropometric measures and DXA. Journal of Applied Biomechanics, 21, 371-382] developed and validated regression equations to predict the individual tissue masses of lower extremity segments of young healthy adults, based on simple anthropometric measurements. However, the reliability of these measurements and their effect on the tissue mass estimates predicted by the equations had yet to be determined. In the current study, two measurers were responsible for collecting two sets of unilateral measurements (25 male and 25 female subjects) for the right upper and lower extremities. These included 6 lengths, 6 circumferences, 8 breadths, and 4 skinfold thicknesses. Significant differences were found between measurers and between sexes, but these differences were generally small (75-80% of between-measurer differences were <1 cm). Within-measurer measurement differences were smaller and more consistent than those between measurers in most cases. Good to excellent reliability was demonstrated for all measurement types, with intra-class correlation coefficients of 0.79, 0.86, 0.85 and 0.86 for lengths, circumferences, breadths and skinfolds, respectively. Predicted tissue mass magnitudes were moderately affected by the measurement differences. The maximum mean errors between measurers ranged from 3.2% to 24.2% for bone mineral content and fat mass, for the leg and foot, and the leg segments, respectively.
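    The intra-class correlation coefficients reported above can be illustrated with a small sketch. This computes the one-way random-effects ICC(1,1); the variant choice and the measurement table below are illustrative assumptions, not the study's data or its exact ICC form.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a table of n subjects x k measurers."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # Between-subject and within-subject mean squares from one-way ANOVA
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, subj_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical repeated segment-length measurements (cm) by two measurers
lengths = [[38.2, 38.4], [41.0, 40.7], [36.5, 36.6], [39.9, 40.1],
           [37.1, 37.3], [42.4, 42.2], [35.8, 36.0], [40.5, 40.4]]
icc = icc_oneway(lengths)
```

    With between-subject variation much larger than the between-measurer disagreement, the ICC approaches 1, i.e. "excellent" reliability.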

  1. Wavelet energy-guided level set-based active contour: a segmentation method to segment highly similar regions.

    PubMed

    Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi

    2010-07-01

    This paper introduces an approach to perform segmentation of regions in computed tomography (CT) images that exhibit intra-region intensity variations and at the same time have similar intensity distributions with surrounding/adjacent regions. In this work, we adapt a feature computed from wavelet transform called wavelet energy to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model called wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions demonstrating the characteristics of intra-region intensity variations and having high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment regions of interest in close correspondence with the manual delineation provided by the medical experts and to provide a solution for tumour detection. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Dynamic patterning by the Drosophila pair-rule network reconciles long-germ and short-germ segmentation

    PubMed Central

    2017-01-01

    Drosophila segmentation is a well-established paradigm for developmental pattern formation. However, the later stages of segment patterning, regulated by the “pair-rule” genes, are still not well understood at the system level. Building on established genetic interactions, I construct a logical model of the Drosophila pair-rule system that takes into account the demonstrated stage-specific architecture of the pair-rule gene network. Simulation of this model can accurately recapitulate the observed spatiotemporal expression of the pair-rule genes, but only when the system is provided with dynamic “gap” inputs. This result suggests that dynamic shifts of pair-rule stripes are essential for segment patterning in the trunk and provides a functional role for observed posterior-to-anterior gap domain shifts that occur during cellularisation. The model also suggests revised patterning mechanisms for the parasegment boundaries and explains the aetiology of the even-skipped null mutant phenotype. Strikingly, a slightly modified version of the model is able to pattern segments in either simultaneous or sequential modes, depending only on initial conditions. This suggests that fundamentally similar mechanisms may underlie segmentation in short-germ and long-germ arthropods. PMID:28953896

  3. Multi-atlas label fusion using hybrid of discriminative and generative classifiers for segmentation of cardiac MR images.

    PubMed

    Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang

    2015-08-01

    Multi-atlas segmentation first registers each atlas image to the target image and transfers the labels of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic random forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian mixture model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
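    The Bayesian combination step described above, multiplying a discriminative prior by a generative likelihood and normalizing, can be sketched at a single voxel. The label names and scores are hypothetical; real systems would evaluate the random forest and GMM to produce them.

```python
def fuse_label_posterior(rf_prior, gmm_likelihood):
    """Combine a discriminative prior P(label | voxel) with a generative
    patch likelihood P(patch | label) under Bayes' rule, then normalize."""
    unnormalized = {lab: rf_prior[lab] * gmm_likelihood[lab] for lab in rf_prior}
    z = sum(unnormalized.values())
    return {lab: p / z for lab, p in unnormalized.items()}

# Hypothetical scores at one voxel for three labels
rf_prior = {"epicardium": 0.5, "endocardium": 0.3, "background": 0.2}
gmm_likelihood = {"epicardium": 0.02, "endocardium": 0.07, "background": 0.01}
posterior = fuse_label_posterior(rf_prior, gmm_likelihood)
winner = max(posterior, key=posterior.get)
```

    Note how the generative likelihood can overturn the discriminative prior: here the classifier favors "epicardium", but the patch likelihood shifts the posterior toward "endocardium".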

  4. The relationship between allometry and preferred transition speed in human locomotion.

    PubMed

    Ranisavljev, Igor; Ilic, Vladimir; Soldatovic, Ivan; Stefanovic, Djordje

    2014-04-01

    The purpose of this study was to explore the relationships between preferred transition speed (PTS) and anthropometric characteristics, body composition and different human body proportions in males. In a sample of 59 male students, we collected anthropometric and body composition data and determined individual PTS using an incremental protocol. The relationships between PTS and the other variables were determined using Pearson correlation, stepwise linear and hierarchical regression. Body ratios were formed as the quotient of two variables of which at least one correlated significantly with PTS. Circular and transversal (except bitrochanteric diameter) body dimensions did not correlate with PTS. Moderate correlations were found between longitudinal leg dimensions (foot, leg and thigh length) and PTS, while the highest correlation was found for lower leg length (r=.488, p<.01). Two parameters related to body composition showed weak correlations with PTS: body fat mass (r=-.250, p<.05) and the amount of lean leg mass scaled to body weight (r=.309, p<.05). Segmental body proportions correlated more significantly with PTS, with the thigh/lower leg length ratio showing the highest correlation (r=.521, p<.01). A prediction model with individual variables (lower leg and foot length) explained just 31% of PTS variability, while a model with body proportions showed almost 20% better prediction (R(2)=.504). These results suggest that longitudinal leg dimensions have a moderate influence on PTS and that segmental body proportions explain PTS significantly better than single anthropometric variables. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Holocene paleoseismicity, temporal clustering, and probabilities of future large (M > 7) earthquakes on the Wasatch fault zone, Utah

    USGS Publications Warehouse

    McCalpin, J.P.; Nishenko, S.P.

    1996-01-01

    The chronology of M>7 paleoearthquakes on the central five segments of the Wasatch fault zone (WFZ) is one of the best dated in the world and contains 16 earthquakes in the past 5600 years with an average repeat time of 350 years. Repeat times for individual segments vary by a factor of 2, and range from about 1200 to 2600 years. Four of the central five segments ruptured between approximately 620±30 and 1230±60 calendar years B.P. The remaining segment (Brigham City segment) has not ruptured in the past 2120±100 years. Comparison of the WFZ space-time diagram of paleoearthquakes with synthetic paleoseismic histories indicates that the observed temporal clusters and gaps have about an equal probability (depending on model assumptions) of reflecting random coincidence as opposed to intersegment contagion. Regional seismicity suggests that for exposure times of 50 and 100 years, the probability for an earthquake of M>7 anywhere within the Wasatch Front region, based on a Poisson model, is 0.16 and 0.30, respectively. A fault-specific WFZ model predicts 50 and 100 year probabilities for a M>7 earthquake on the WFZ itself, based on a Poisson model, as 0.13 and 0.25, respectively. In contrast, segment-specific earthquake probabilities that assume quasi-periodic recurrence behavior on the Weber, Provo, and Nephi segments are less (0.01-0.07 in 100 years) than the regional or fault-specific estimates (0.25-0.30 in 100 years), due to the short elapsed times compared to average recurrence intervals on those segments. The Brigham City and Salt Lake City segments, however, have time-dependent probabilities that approach or exceed the regional and fault-specific probabilities. For the Salt Lake City segment, these elevated probabilities are due to the elapsed time being approximately equal to the average late Holocene recurrence time. For the Brigham City segment, the elapsed time is significantly longer than the segment-specific late Holocene recurrence time.
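    The Poisson-model probabilities quoted above follow from P = 1 − exp(−λt) for a rate λ and exposure time t. As a consistency check (an illustrative back-calculation, not the authors' computation), the rate implied by the regional 50-year probability of 0.16 reproduces roughly 0.30 over 100 years:

```python
import math

def poisson_prob(rate_per_year, exposure_years):
    """Probability of at least one event in the exposure window
    under a Poisson model: P = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate_per_year * exposure_years)

# Rate implied by the regional 50-year probability of 0.16
rate = -math.log(1.0 - 0.16) / 50.0
p50 = poisson_prob(rate, 50)     # recovers 0.16
p100 = poisson_prob(rate, 100)   # ~0.294, consistent with the stated 0.30
```

    Doubling the exposure time gives slightly less than double the probability, since 1 − (1 − 0.16)² = 0.2944 < 0.32.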

  6. Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images.

    PubMed

    Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Mulder, Harriët W; Ren, Ben; Kirişli, Hortense A; Metz, Coert; van Burken, Gerard; van Stralen, Marijn; Pluim, Josien P W; van der Steen, Antonius F W; van Walsum, Theo; Bosch, Johannes G

    2015-06-01

    Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE segmentation is still a challenging task due to the complex anatomy with multiple cavities, the limited TEE field of view, and typical ultrasound artifacts. We propose to segment all cavities within the TEE view with a multi-cavity active shape model (ASM) in conjunction with a tissue/blood classification based on a gamma mixture model (GMM). 3-D TEE image data of twenty patients were acquired with a Philips X7-2t matrix TEE probe. Tissue probability maps were estimated by a two-class (blood/tissue) GMM. A statistical shape model containing the left ventricle, right ventricle, left atrium, right atrium, and aorta was derived from computed tomography angiography (CTA) segmentations by principal component analysis. ASMs of the whole heart and individual cavities were generated and consecutively fitted to tissue probability maps. First, an average whole-heart model was aligned with the 3-D TEE based on three manually indicated anatomical landmarks. Second, pose and shape of the whole-heart ASM were fitted by a weighted update scheme excluding parts outside of the image sector. Third, pose and shape of ASM for individual heart cavities were initialized by the previous whole heart ASM and updated in a regularized manner to fit the tissue probability maps. The ASM segmentations were validated against manual outlines by two observers and CTA derived segmentations. Dice coefficients and point-to-surface distances were used to determine segmentation accuracy. ASM segmentations were successful in 19 of 20 cases. The median Dice coefficient for all successful segmentations versus the average observer ranged from 90% to 71% compared with an inter-observer range of 95% to 84%. 
The agreement against the CTA segmentations was slightly lower with a median Dice coefficient between 85% and 57%. In this work, we successfully showed the accuracy and robustness of the proposed multi-cavity segmentation scheme. This is a promising development for intraoperative procedure guidance, e.g., in cardiac electrophysiology.
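    The Dice coefficients used above to score the cavity segmentations measure the overlap between two binary masks. A minimal sketch, with hypothetical voxel sets standing in for real segmentations:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of voxel indices:
    2 * |A intersect B| / (|A| + |B|)."""
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))

# Hypothetical voxel sets: an "automatic" cavity mask and a "manual" mask
# shifted by one voxel along x, so they share 900 of their 1000 voxels each
auto_mask = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
manual_mask = {(x, y, z) for x in range(1, 11) for y in range(10) for z in range(10)}
dice = dice_coefficient(auto_mask, manual_mask)
```

    A Dice of 1 means identical masks and 0 means no overlap, so the 71-95% range reported above corresponds to substantial but imperfect agreement.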

  7. Segmenting Broadcast News Audiences in the New Media Environment.

    ERIC Educational Resources Information Center

    Wicks, Robert H.

    1989-01-01

    Examines the "benefit segmentation model," a marketing strategy for local news media which is capable of sorting consumers into discrete segments interested in similar salient product attributes or benefits. Concludes that benefit segmentation may provide a means by which news programmers may respond to their audience. (RS)

  8. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    Traffic video is a dynamic image sequence whose background and foreground change over time, which gives rise to occlusion; in such cases, general-purpose methods struggle to produce accurate segmentations. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy function models of the observation field and the label field for motion sequence images with the Markov property. Then, following Bayes' rule, it exploits the interaction between the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, to obtain the maximum a posteriori estimate of the label field, and applies the ICM algorithm to extract the moving objects, completing the segmentation. Finally, segmentation by ST-MRF alone and by Bayesian inference combined with ST-MRF were compared. Experimental results show that the Bayesian combined with ST-MRF algorithm segments faster than ST-MRF alone with a small computing workload, and achieves better segmentation results, especially in heavy-traffic dynamic scenes.
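    The ICM step mentioned above greedily minimizes a local energy, a data term plus a smoothness prior, at each site in turn. A minimal 1-D sketch (not the paper's model: the quadratic data term, Potts smoothness weight beta, and toy observations are illustrative assumptions):

```python
def icm_segment(observations, n_labels=2, beta=1.5, sweeps=5):
    """Iterated conditional modes (ICM) on a 1-D label field: each site
    greedily minimizes a data term plus a Potts smoothness term."""
    # Initialize from the data term alone (maximum-likelihood labels)
    labels = [min(range(n_labels), key=lambda l: (observations[i] - l) ** 2)
              for i in range(len(observations))]
    for _ in range(sweeps):
        for i in range(len(observations)):
            def energy(l):
                data = (observations[i] - l) ** 2
                # Potts prior: pay beta for each disagreeing neighbor
                smooth = sum(beta for j in (i - 1, i + 1)
                             if 0 <= j < len(labels) and labels[j] != l)
                return data + smooth
            labels[i] = min(range(n_labels), key=energy)
    return labels

# Toy 1-D "frame": background ~0 with one noisy pixel, then foreground ~1
obs = [0.1, 0.0, 0.9, 0.1, 0.0, 1.0, 0.9, 1.1, 0.9, 1.0]
labels = icm_segment(obs)
```

    The isolated bright pixel at index 2 is smoothed away by the Potts prior, while the genuine foreground run survives, which is exactly the regularizing behavior an MRF label field provides.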

  9. Abdomen and spinal cord segmentation with augmented active shape models.

    PubMed

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the search range for correspondent landmarks while reducing sensitivity to image context, improving segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to the abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC.

  10. Automated segmentation of serous pigment epithelium detachment in SD-OCT images

    NASA Astrophysics Data System (ADS)

    Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2015-03-01

    Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: First, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), Dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) comparing the segmented PED volumes with the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.

  11. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiographic image. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  12. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    PubMed

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and the well-known brain software tools, our model is fast, accurate, and robust to initialization.
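    An energy of the kind the abstract describes might be sketched, purely illustratively (a generic form consistent with the stated ingredients, not necessarily the authors' exact functional; the weights λ and μ are assumed parameters), as

```latex
\min_{u,\; b} \;
\int_{\Omega} \big( f - b\,u \big)^2 \, dx
\;+\; \lambda \, \|\nabla u\|_0
\;+\; \mu \, \|\nabla b\|_2^2
```

    where f is the observed image, u the true intensity penalized toward piecewise constancy by the L0 gradient term, and b the smooth bias field; segmentation then proceeds by thresholding the recovered u in the second stage.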

  13. A generative model for segmentation of tumor and organs-at-risk for radiation therapy planning of glioblastoma patients

    NASA Astrophysics Data System (ADS)

    Agn, Mikael; Law, Ian; Munck af Rosenschöld, Per; Van Leemput, Koen

    2016-03-01

    We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.

  14. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most existing methods for full heart segmentation treat the heart as a single whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to evaluate the method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  15. A musculoskeletal foot model for clinical gait analysis.

    PubMed

    Saraswat, Prabhav; Andersen, Michael S; Macwilliams, Bruce A

    2010-06-18

    Several full body musculoskeletal models have been developed for research applications and these models may potentially be developed into useful clinical tools to assess gait pathologies. Existing full-body musculoskeletal models treat the foot as a single segment and ignore the motions of the intrinsic joints of the foot. This assumption limits the use of such models in clinical cases with significant foot deformities. Therefore, a three-segment musculoskeletal model of the foot was developed to match the segmentation of a recently developed multi-segment kinematic foot model. All the muscles and ligaments of the foot spanning the modeled joints were included. Muscle pathways were adjusted with an optimization routine to minimize the difference between the muscle flexion-extension moment arms from the model and moment arms reported in the literature. The model was driven by walking data from five normal pediatric subjects (aged 10.6 ± 1.57 years) and muscle forces and activation levels required to produce joint motions were calculated using an inverse dynamic analysis approach. Due to the close proximity of markers on the foot, small marker placement errors during motion data collection may lead to significant differences in musculoskeletal model outcomes. Therefore, an optimization routine was developed to enforce joint constraints, optimally scale each segment length and adjust marker positions. To evaluate the model outcomes, the muscle activation patterns during walking were compared with electromyography (EMG) activation patterns reported in the literature. Model-generated muscle activation patterns were observed to be similar to the EMG activation patterns. Published by Elsevier Ltd.

  16. Modeling of the interaction between grip force and vibration transmissibility of a finger.

    PubMed

    Wu, John Z; Welcome, Daniel E; McDowell, Thomas W; Xu, Xueyan S; Dong, Ren G

    2017-07-01

    It is known that the vibration characteristics of the fingers and hand and the level of grip action interact when operating a power tool. In the current study, we developed a hybrid finger model to simulate the vibrations of the hand-finger system when gripping a vibrating handle covered with soft materials. The hybrid finger model combines the characteristics of conventional finite element (FE) models, multi-body musculoskeletal models, and lumped mass models. The distal, middle, and proximal finger segments were constructed using FE models; the finger segments were connected via three flexible joint linkages (i.e., the distal interphalangeal (DIP), proximal interphalangeal (PIP), and metacarpophalangeal (MCP) joints), and the MCP joint was connected to the ground and handle via lumped parameter elements. The effects of the active muscle forces were accounted for via the joint moments. The bone, nail, and hard connective tissues were assumed to be linearly elastic, whereas the soft tissues, which include the skin and subcutaneous tissues, were considered hyperelastic and viscoelastic. The general trends of the model predictions agree well with previous experimental measurements: the resonant frequency increased from the proximal to the middle to the distal finger segment for the same grip force; the resonant frequency tends to increase with increasing grip force for the same finger segment, especially for the distal segment; and the magnitude of vibration transmissibility tends to increase with increasing grip force, especially for the proximal segment. The advantage of the proposed model over traditional vibration models is that it can predict the local vibration behavior of the finger down to the tissue level, while taking into account the effects of the active musculoskeletal forces, the effects of the contact conditions on vibration, and the global vibration characteristics. Published by Elsevier Ltd.
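    The grip-stiffness trend can be illustrated with a far simpler analogue than the paper's hybrid FE model: the classical base-excited single-degree-of-freedom mass-spring-damper, whose transmissibility peaks near resonance and provides isolation above it. This is a textbook simplification, not the authors' model.

```python
import numpy as np

def transmissibility(freq_ratio, zeta):
    """|X/Y| for a base-excited single-DOF mass-spring-damper.

    freq_ratio r = excitation frequency / natural frequency sqrt(k/m);
    zeta is the damping ratio. A stiffer coupling (larger k, e.g. from a
    firmer grip) raises the natural frequency, shifting the peak upward
    in frequency, consistent with the trend described in the abstract.
    """
    r = np.asarray(freq_ratio, dtype=float)
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

t_static = transmissibility(0.0, 0.05)  # rigid-body limit: unity transmission
t_res = transmissibility(1.0, 0.05)     # strong amplification at resonance
t_high = transmissibility(5.0, 0.05)    # isolation well above resonance
```

    Even this one-mass sketch reproduces the qualitative behavior the hybrid model resolves per finger segment and per tissue layer.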

  17. Relationships between Heavy Metal Concentrations in Roadside Topsoil and Distance to Road Edge Based on Field Observations in the Qinghai-Tibet Plateau, China

    PubMed Central

    Yan, Xuedong; Gao, Dan; Zhang, Fan; Zeng, Chen; Xiang, Wang; Zhang, Man

    2013-01-01

    This study investigated the spatial distribution of copper (Cu), zinc (Zn), cadmium (Cd), lead (Pb), chromium (Cr), cobalt (Co), nickel (Ni) and arsenic (As) in roadside topsoil in the Qinghai-Tibet Plateau and evaluated the potential environmental risks of these roadside heavy metals due to traffic emissions. A total of 120 topsoil samples were collected along five road segments in the Qinghai-Tibet Plateau. A nonlinear regression method was used to formulate the relationship between the metal concentrations in roadside soils and roadside distance. The Hakanson potential ecological risk index method was applied to assess the degree of heavy metal contamination. The regression results showed that both the heavy metal concentrations and their ecological risk indices decreased exponentially with increasing roadside distance. The large R-squared values of the regression models indicate that the exponential regression method suitably describes the relationship between heavy metal accumulation and roadside distance. For the entire study region, there was a moderate level of potential ecological risk within a 10 m roadside distance. However, Cd was the only prominent heavy metal that posed a potential hazard to the local soil ecosystem. Overall, the rank of risk contribution to the local environments among the eight heavy metals was Cd > As > Ni > Pb > Cu > Co > Zn > Cr. Considering that Cd is more hazardous to public health than the other elements, the local government should pay special attention to this traffic-related environmental issue. PMID:23439515
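    An exponential-decay fit of this kind can be sketched as a log-linear least-squares problem. The data below are hypothetical, noise-free placeholders, not the study's measurements, and the fitted coefficients are illustrative only.

```python
import numpy as np

# Fit c(d) = c0 * exp(-k * d) by linearising: ln c = ln c0 - k * d.
# Hypothetical roadside distances (m) and concentrations (mg/kg).
distance = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
conc = 30.0 * np.exp(-0.08 * distance)

log_c = np.log(conc)
slope, intercept = np.polyfit(distance, log_c, 1)  # ordinary least squares
k, c0 = -slope, np.exp(intercept)                  # decay rate, surface level

# Coefficient of determination (R^2) of the linearised fit.
resid = log_c - (intercept + slope * distance)
r2 = 1.0 - (resid @ resid) / np.sum((log_c - log_c.mean()) ** 2)
```

    With real, noisy field data one would fit the exponential directly (nonlinear least squares, as the study does), since the log transform reweights the errors; the log-linear version is shown here only because it makes the model's structure explicit.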

  18. Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand

    2018-01-01

    The detection and analysis of biomarkers on earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at a few 0.1”). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes, such as those studied for the LUVOIR mission, will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an RMS error on the final contrast of ~3%. The analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte-Carlo simulations that are otherwise required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.

  19. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of the kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines the active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo-3D segmentation method is employed for kidney initialization, in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm, which synergistically combines the AAM and LW methods, is proposed for the AAM optimization. A multi-object strategy is applied to aid object initialization, and 3D model constraints are applied to the initialization result. Second, the object shape information generated in the initialization step is integrated into the GC cost computation, and a multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial-phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  20. A new level set model for cell image segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first identify three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model based on these characteristics to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
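    As a sketch of how an Otsu threshold can seed a level set initialization, the snippet below implements Otsu's between-class-variance criterion on a synthetic two-phase image and derives a signed indicator from it. This is illustrative only; the paper's variational model and its three-phase decomposition are not reproduced.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold maximising the between-class variance (Otsu)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist.astype(float) / hist.sum()
    w0 = np.cumsum(p)            # probability of class 0 (below threshold)
    mu = np.cumsum(p * centers)  # cumulative mean intensity
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty classes contribute nothing
    return centers[np.argmax(sigma_b)]

# Synthetic two-phase "cell" image: dark background plus a bright blob.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] = rng.normal(0.8, 0.05, (20, 20))

t = otsu_threshold(img)
phi = np.where(img > t, 1.0, -1.0)  # signed map that could seed the level set
```

    Initializing phi from such a threshold places the zero level near the object boundary from the start, which is why it speeds up the level set evolution.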
