Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M
2014-06-19
An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
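The level-and-slope-change model described above can be sketched in a few lines. This is a minimal illustration only: the data, intervention point, and coefficients below are simulated, not taken from the AMI/stroke study.

```python
import numpy as np

# Segmented (interrupted time series) regression sketch:
# outcome_t = b0 + b1*time + b2*post + b3*time_since_intervention
rng = np.random.default_rng(0)
n, t0 = 48, 24                      # 48 monthly points, intervention at month 24
time = np.arange(n)
post = (time >= t0).astype(float)   # level-change indicator
since = post * (time - t0)          # slope-change term
y = 10 + 0.2 * time + 3.0 * post + 0.5 * since + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), time, post, since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta               # b2: change in intercept, b3: change in slope
```

In practice the fit would come with standard errors and an autocorrelation check; ordinary least squares is used here only to show the design matrix.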
Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.
Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias
2016-01-01
To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
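For comparison, a minimal statistical process control example (a Shewhart individuals chart with the standard 2.66 moving-range constant) might look like this; the measurements are invented:

```python
import numpy as np

# Shewhart individuals (I) chart: center line +/- 2.66 * mean moving range
x = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.4, 10.0])
mr = np.abs(np.diff(x))             # moving ranges between consecutive points
center = x.mean()
ucl = center + 2.66 * mr.mean()     # upper control limit
lcl = center - 2.66 * mr.mean()     # lower control limit
out_of_control = (x > ucl) | (x < lcl)
```

Points outside the limits signal special-cause variation, which is how SPC flags an intervention effect rather than by estimating slope changes.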
Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf
2018-01-01
Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.
Estimates of Median Flows for Streams on the 1999 Kansas Surface Water Register
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
The Kansas State Legislature, by enacting Kansas Statute KSA 82a-2001 et seq., mandated the criteria for determining which Kansas stream segments would be subject to classification by the State. One criterion for selection as a classified stream segment is a median flow equal to or greater than 1 cubic foot per second. As specified by KSA 82a-2001 et seq., median flows were determined from U.S. Geological Survey streamflow-gaging-station data by using the most recent 10 years of gaged data (KSA) for each streamflow-gaging station. Median flows also were determined by using gaged data from the entire period of record (all-available hydrology, AAH). Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating median flows for uncontrolled stream segments. The drainage areas of the gaging stations on uncontrolled stream segments used in the regression analyses ranged from 2.06 to 12,004 square miles. A logarithmic transformation of the data was needed to develop the best linear relation for computing median flows. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. Tobit analyses of KSA data yielded a model standard error of prediction of 0.285 logarithmic units, and the best equations using Tobit analyses of AAH data had a model standard error of prediction of 0.250 logarithmic units. These regression equations and an interpolation procedure were used to compute median flows for the uncontrolled stream segments on the 1999 Kansas Surface Water Register. Measured median flows from gaging stations were incorporated into the regression-estimated median flows along the stream segments where available.
For uncontrolled segments, median flows were interpolated using gaged data weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled segments of Kansas streams, median flow information was interpolated between gaging stations using only gaged data weighted by drainage area. Of the 2,232 total stream segments on the Kansas Surface Water Register, 34.5 percent had an estimated median streamflow of less than 1 cubic foot per second when the KSA analysis was used; with the AAH analysis, the figure was 36.2 percent. This report supersedes U.S. Geological Survey Water-Resources Investigations Report 02-4292.
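The log-transformed regression idea can be illustrated with a toy power-law fit of median flow on drainage area. The values below are invented and are not the Kansas study's data or coefficients; the real equations also use precipitation, permeability, and slope, with Tobit estimation to handle censored low flows.

```python
import numpy as np

# Fit log10(median_flow) = intercept + slope * log10(drainage_area)
area = np.array([10.0, 50.0, 120.0, 400.0, 1500.0, 6000.0])   # square miles
median_flow = 0.05 * area ** 0.9                               # synthetic ft^3/s

slope, intercept = np.polyfit(np.log10(area), np.log10(median_flow), 1)
# back-transform to estimate flow for a hypothetical ungaged 250 mi^2 segment
predicted = 10 ** (intercept + slope * np.log10(250.0))
```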
Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong
2016-12-01
Coronary artery disease has become one of the most dangerous diseases threatening human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods have difficulty handling the complex vascular texture that results from the projective nature of conventional coronary angiography. Because of the large amount of data and the complexity of vascular shapes, manual annotation has become increasingly unrealistic, and a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domains remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method can not only segment overlapping vascular structures robustly but also achieve good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performance measures si, fnvf, fvpf, and tpvf were 95.135%, 3.733%, 6.113%, and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
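The quantile regression component rests on the pinball loss. A minimal sketch (on made-up intensity values, not the paper's algorithm) shows that minimizing the q = 0.5 pinball loss recovers the median, which is what makes the correction robust to outlying values:

```python
import numpy as np

# Pinball (quantile) loss: asymmetric penalty controlled by quantile q
def pinball_loss(y, pred, q):
    r = y - pred
    return np.mean(np.maximum(q * r, (q - 1) * r))

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])    # one outlying value
candidates = np.linspace(0.0, 10.0, 1001)
losses = [pinball_loss(y, c, 0.5) for c in candidates]
best = candidates[int(np.argmin(losses))]     # the q=0.5 minimizer is the median
```

A squared-error fit would be dragged toward the outlier at 100; the pinball minimizer stays at the median, 3.0.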
Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella
2003-03-01
The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amount of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor product. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed identification of the best formulations for the two segments of the market.
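The Principal Component Analysis step can be sketched with a bare-bones covariance eigendecomposition; the consumer ratings below are invented to mimic the two-segment structure described:

```python
import numpy as np

# Rows: consumers; columns: ratings of three hypothetical formulations
ratings = np.array([[5.0, 4.5, 1.0], [4.8, 4.9, 1.2],   # "rich formulation" fans
                    [1.1, 1.3, 4.9], [0.9, 1.0, 5.1]])  # "poor product" fans
centered = ratings - ratings.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
pc1 = centered @ eigvecs[:, -1]              # scores on the leading component
# the sign split on PC1 separates the two consumer groups
```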
Interrupted time series regression for the evaluation of public health interventions: a tutorial.
Bernal, James Lopez; Cummins, Steven; Gasparrini, Antonio
2017-02-01
Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.
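One common check for the residual autocorrelation issue the tutorial highlights is the Durbin-Watson statistic; here is a minimal sketch on simulated residuals, not the tutorial's worked example:

```python
import numpy as np

# Durbin-Watson statistic: sum of squared first differences of residuals
# over the residual sum of squares; values near 2 suggest little
# first-order autocorrelation, values near 0 strong positive autocorrelation.
rng = np.random.default_rng(1)
resid = rng.normal(size=200)       # uncorrelated residuals for illustration
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
```

When autocorrelation is present, remedies include Newey-West standard errors, Prais-Winsten estimation, or ARIMA-type error models.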
NASA Astrophysics Data System (ADS)
Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.
2016-03-01
In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role in simulating blood flow by means of fluid dynamics, while identifying the outer vessel wall in the case of arteriosclerosis is additionally a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features that are aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits both from the advantages associated with snakes and from those associated with machine learning-based regression. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).
Khanal, Laxman; Shah, Sandip; Koirala, Sarun
2017-03-01
The length of long bones is an important contributor to estimating one of the four elements of forensic anthropology, i.e., the stature of the individual. Since the physical characteristics of individuals differ among population groups, population-specific studies are needed for estimating the total length of the femur from its segment measurements. Because the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done on 60 dry femora (30 from each side) without sex determination in an anthropometry laboratory. Along with maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. Statistical Package for Social Science (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the differences were not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated and found to have a linear relationship with femoral length. With the help of the regression equations, femoral length can be calculated from the segmental measurements; the femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
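The regression-equation approach can be sketched with a simple least-squares fit; the segment and femoral lengths below are invented for illustration and are not the study's Nepalese data:

```python
import numpy as np

# Simple linear regression: femoral length ~ one segmental measurement
seg = np.array([8.2, 8.9, 9.1, 9.6, 10.0, 10.4])        # segment length, cm
femur = np.array([41.0, 43.5, 44.2, 45.9, 47.1, 48.6])  # maximum length, cm

slope, intercept = np.polyfit(seg, femur, 1)
estimate = intercept + slope * 9.3    # estimated length for a 9.3 cm fragment
r = np.corrcoef(seg, femur)[0, 1]     # strength of the linear relationship
```

The estimated femoral length would then be fed into a population-specific stature equation.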
Chandy, Sujith J.; Naik, Girish S.; Charles, Reni; Jeyaseelan, Visalakshi; Naumova, Elena N.; Thomas, Kurien; Lundborg, Cecilia Stalsby
2014-01-01
Introduction Antibiotic pressure contributes to rising antibiotic resistance. Policy guidelines encourage rational prescribing behavior, but effectiveness in containing antibiotic use needs further assessment. This study therefore assessed the patterns of antibiotic use over a decade and analyzed the impact of different modes of guideline development and dissemination on inpatient antibiotic use. Methods Antibiotic use was calculated monthly as defined daily doses (DDD) per 100 bed days for nine antibiotic groups and overall. This time series compared trends in antibiotic use in five adjacent time periods identified as ‘Segments,’ divided based on differing modes of guideline development and implementation: Segment 1– Baseline prior to antibiotic guidelines development; Segment 2– During preparation of guidelines and booklet dissemination; Segment 3– Dormant period with no guidelines dissemination; Segment 4– Booklet dissemination of revised guidelines; Segment 5– Booklet dissemination of revised guidelines with intranet access. Regression analysis adapted for segmented time series and adjusted for seasonality assessed changes in antibiotic use trend. Results Overall antibiotic use increased at a monthly rate of 0.95 (SE = 0.18), 0.21 (SE = 0.08) and 0.31 (SE = 0.06) for Segments 1, 2 and 3, stabilized in Segment 4 (0.05; SE = 0.10) and declined in Segment 5 (−0.37; SE = 0.11). Segments 1, 2 and 4 exhibited seasonal fluctuations. Pairwise segmented regression adjusted for seasonality revealed a significant drop in monthly antibiotic use of 0.401 (SE = 0.089; p<0.001) for Segment 5 compared to Segment 4. Most antibiotic groups showed similar trends to overall use. Conclusion Use of overall and specific antibiotic groups showed varied patterns and seasonal fluctuations. Containment of rising overall antibiotic use was possible during periods of active guideline dissemination. Wider access through intranet facilitated significant decline in use. 
Stakeholders and policy makers are urged to develop guidelines, ensure active dissemination and enable accessibility through computer networks to contain antibiotic use and decrease antibiotic pressure. PMID:24647339
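The seasonal adjustment used alongside the segmented regression above is often implemented with sine/cosine harmonics in the design matrix; this is a sketch on simulated monthly DDD-style data, not the study's:

```python
import numpy as np

# Linear trend plus a 12-month harmonic pair to absorb seasonality
rng = np.random.default_rng(3)
months = np.arange(60)
season = 2.0 * np.sin(2 * np.pi * months / 12)
y = 50 + 0.3 * months + season + rng.normal(0, 0.2, 60)

X = np.column_stack([np.ones(60), months,
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
trend, sin_amp = beta[1], beta[2]     # recovered trend and seasonal amplitude
```

In a full segmented analysis the level- and slope-change terms for each segment boundary would be added to the same design matrix.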
Shao, Yeqin; Gao, Yaozong; Wang, Qian; Yang, Xin; Shen, Dinggang
2015-01-01
Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable (relative) organ position, and the uncertain presence of bowel gas across different patients. Recently, regression forests were adopted for deformable organ segmentation in 2D medical images by training one landmark detector for each point on the shape model. However, it is impractical for a regression forest to guide 3D deformable segmentation as a landmark detector, owing to the large number of vertices in a 3D shape model and the difficulty of building accurate 3D vertex correspondence for each landmark detector. In this paper, we propose a novel boundary detection method that exploits the power of regression forests for prostate and rectum segmentation. The contributions of this paper are as follows: 1) we introduce the regression forest as a local boundary regressor that votes for the entire boundary of a target organ, which avoids training a large number of landmark detectors and building accurate 3D vertex correspondence for each of them; 2) an auto-context model is integrated with the regression forest to improve the accuracy of the boundary regression; 3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation by integrating organ shape priors. Our method is evaluated on a planning CT image dataset with 70 images from 70 different patients. The experimental results show that our proposed boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentation. Compared with other state-of-the-art methods, our method also shows competitive performance. PMID:26439938
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
An auto-regressive, integrated, moving-average (ARIMA) model was used to forecast the demand for construction skills in Hong Kong. The report surveys forecasting techniques including decision trees, ARIMA, rule-based forecasting, segmentation forecasting, regression analysis, simulation modeling, input-output models, LP and NLP, and Markovian models, noting that rule-based approaches suit situations where results are needed as a set of easily interpretable rules (section 4.1.4 covers ARIMA models).
Bookwalter, Candice A; Venkatesh, Sudhakar K; Eaton, John E; Smyrk, Thomas D; Ehman, Richard L
2018-04-07
To determine the correlation of liver stiffness measured by MR Elastography (MRE) with biliary abnormalities on MR Cholangiopancreatography (MRCP) and with MRI parenchymal features in patients with primary sclerosing cholangitis (PSC). Fifty-five patients with PSC who underwent MRI of the liver with MRCP and MRE were retrospectively evaluated. Two board-certified abdominal radiologists in agreement reviewed the MRI, MRCP, and MRE images. The biliary tree was evaluated for stricture, dilatation, wall enhancement, and thickening at the segmental duct, right main duct, left main duct, and common bile duct levels. Liver parenchymal features, including signal intensity on T2W and DWI and hyperenhancement in the arterial, portal venous, and delayed phases, were evaluated in nine Couinaud liver segments. Atrophy or hypertrophy of segments, cirrhotic morphology, varices, and splenomegaly were scored as present or absent. Regions of interest were placed in each of the nine segments on stiffness maps wherever available and liver stiffness (LS) was recorded. Mean segmental LS, right lobar (V-VIII), left lobar (I-III, and IVA, IVB), and global LS (average of all segments) were calculated. Spearman rank correlation analysis was performed to test for significant correlations. Features with significant correlation were then analyzed for significant differences in mean LS. Multiple regression analysis of MRI and MRCP features was performed for significant correlation with elevated LS. A total of 439/495 segments were evaluated; 56 segments not included in the MRE slices were excluded from the correlation analysis. Mean segmental LS correlated with the presence of strictures (r = 0.18, p < 0.001), T2W hyperintensity (r = 0.38, p < 0.001), DWI hyperintensity (r = 0.30, p < 0.001), and hyperenhancement of the segment in all three phases. Mean LS of atrophic and hypertrophic segments was significantly higher than that of normal segments (7.07 ± 3.6 and 6.67 ± 3.26 vs. 5.1 ± 3.6 kPa, p < 0.001).
In multiple regression analysis, only the presence of segmental strictures (p < 0.001), T2W hyperintensity (p = 0.01), and segmental hypertrophy (p < 0.001) were significantly associated with elevated segmental LS. Only left ductal stricture correlated with left lobe LS (r = 0.41, p = 0.018). Global LS correlated significantly with CBD stricture (r = 0.31, p = 0.02), number of segmental strictures (r = 0.28, p = 0.04), splenomegaly (r = 0.56, p < 0.001), and varices (r = 0.58, p < 0.001). In PSC, there is low but positive correlation between segmental LS and segmental duct strictures. Segments with increased LS show T2 hyperintensity, DWI hyperintensity, and post-contrast hyperenhancement. Global liver stiffness shows a moderate correlation with number of segmental strictures and significantly correlates with spleen stiffness, splenomegaly, and varices.
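The Spearman rank correlation used throughout this analysis can be computed by ranking both variables and taking the Pearson correlation of the ranks; the stiffness/stricture pairs below are invented:

```python
import numpy as np

# Spearman rank correlation by hand (no ties in this toy data)
def spearman(x, y):
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    return np.corrcoef(rx, ry)[0, 1]

stiffness = np.array([3.1, 4.0, 5.2, 6.8, 7.5])   # kPa, illustrative
strictures = np.array([0.0, 1.0, 1.5, 2.0, 4.0])  # per-segment severity scores
rho = spearman(stiffness, strictures)              # perfectly monotone pair
```

With ties, a tie-corrected ranking (average ranks) would be needed; library implementations handle this.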
Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David
2015-01-01
Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
Hidden marker position estimation during sit-to-stand with walker.
Yoon, Sang Ho; Jun, Hong Gul; Dan, Byung Ju; Jo, Byeong Rim; Min, Byung Hoon
2012-01-01
Motion capture analysis of the sit-to-stand task with an assistive device is hard to achieve due to obstruction of the reflective markers. A previously developed robotic system, the Smart Mobile Walker, is used as an assistive device to perform motion capture analysis of the sit-to-stand task. All lower-limb markers except the hip markers are invisible throughout the whole session. The link-segment and regression method is applied to estimate marker positions during sit-to-stand. With this new method, the lost marker positions are restored and the biomechanical evaluation of the sit-to-stand movement with the Smart Mobile Walker could be carried out. The accuracy of the marker position estimation is verified against normal sit-to-stand data from more than 30 clinical trials. Further research on improving the link-segment and regression method is also addressed.
Ansari, Faranak; Gray, Kirsteen; Nathwani, Dilip; Phillips, Gabby; Ogston, Simon; Ramsay, Craig; Davey, Peter
2003-11-01
To evaluate an intervention to reduce inappropriate use of key antibiotics with interrupted time series analysis. The intervention is a policy for appropriate use of Alert Antibiotics (carbapenems, glycopeptides, amphotericin, ciprofloxacin, linezolid, piperacillin-tazobactam and third-generation cephalosporins) implemented through concurrent, patient-specific feedback by clinical pharmacists. Statistical significance and effect size were calculated by segmented regression analysis of interrupted time series of drug use and cost for 2 years before and after the intervention started. Use of Alert Antibiotics increased before the intervention started but decreased steadily for 2 years thereafter. The changes in slope of the time series were 0.27 defined daily doses/100 bed-days per month (95% CI 0.19-0.34) and £1,908 per month (95% CI £1,238-£2,578). The cost of development, dissemination and implementation of the intervention (£20,133) was well below the most conservative estimate of the reduction in cost (£133,296), which is the lower 95% CI of effect size assuming that cost would not have continued to increase without the intervention. However, if use had continued to increase, the difference between predicted and actual cost of Alert Antibiotics was £572,448 (95% CI £435,696-£709,176) over the 24 months after the intervention started. Segmented regression analysis of pharmacy stock data is a simple, practical and robust method for measuring the impact of interventions to change prescribing. The Alert Antibiotic Monitoring intervention was associated with significant decreases in total use and cost in the 2 years after the programme was implemented. In our hospital, the value of the data far exceeded the cost of processing and analysis.
Clinical Prognosis of Superior Versus Basal Segment Stage I Non-Small Cell Lung Cancer.
Handa, Yoshinori; Tsutani, Yasuhiro; Tsubokawa, Norifumi; Misumi, Keizo; Hanaki, Hideaki; Miyata, Yoshihiro; Okada, Morihito
2017-12-01
Despite the extensive size of the lower lobe, variation in the clinicopathologic features of its tumors has been little studied. The present study investigated the prognostic differences between tumors originating from the superior and basal segments of the lower lobe in patients with non-small cell lung cancer. Data of 134 patients who underwent lobectomy or segmentectomy with systematic nodal dissection for clinical stage I, radiologically solid-dominant, non-small cell lung cancer in the superior segment (n = 60) or basal segment (n = 74) between April 2007 and December 2015 were retrospectively reviewed. Factors affecting survival were assessed by the Kaplan-Meier method and Cox regression analyses. Prognosis in the superior segment group was worse than that in the basal segment group (5-year overall survival rates 62.6% versus 89.9%, p = 0.0072; and 5-year recurrence-free survival rates 54.4% versus 75.7%, p = 0.032). In multivariable Cox regression analysis, a superior segment tumor was an independent factor for poor overall survival (hazard ratio 3.33, 95% confidence interval: 1.22 to 13.5, p = 0.010) and recurrence-free survival (hazard ratio 2.90, 95% confidence interval: 1.20 to 7.00, p = 0.008). The superior segment group tended to have more pathologic mediastinal lymph node metastases than the basal segment group (15.0% versus 5.4%, p = 0.080). Tumor location was a prognostic factor for clinical stage I non-small cell lung cancer in the lower lobe. Patients with superior segment tumors had a worse prognosis than patients with basal segment tumors, with more metastases in mediastinal lymph nodes. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Large data series: Modeling the usual to identify the unusual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downing, D.J.; Fedorov, V.V.; Lawkins, W.F.
"Standard" approaches such as regression analysis, Fourier analysis, the Box-Jenkins procedure, et al., which handle a data series as a whole, are not useful for very large data sets for at least two reasons. First, even with the computer hardware available today, including parallel processors and storage devices, there are no effective means for manipulating and analyzing gigabyte, or larger, data files. Second, in general it cannot be assumed that a very large data set is "stable" by the usual measures, like homogeneity, stationarity, and ergodicity, that standard analysis techniques require. Both reasons dictate the necessity to use "local" data analysis methods whereby the data is segmented and ordered, where order leads to a sense of "neighbor," and then analyzed segment by segment. The idea of local data analysis is central to the study reported here.
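The segment-by-segment idea can be sketched directly: split an ordered series into local segments, summarize each, and flag segments whose local statistics stand out. The series, segment length, and threshold below are illustrative choices, not the report's method:

```python
import numpy as np

# Local analysis: summarize ordered segments and flag the unusual ones
rng = np.random.default_rng(2)
series = rng.normal(0.0, 1.0, 10_000)
series[5_000:5_100] += 6.0                 # an unusual stretch to detect

segments = series.reshape(100, 100)        # 100 ordered segments of 100 points
means = segments.mean(axis=1)              # one local summary per segment
z = (means - np.median(means)) / means.std()
unusual = np.flatnonzero(np.abs(z) > 4)    # segment indices that stand out
```

Only the per-segment summaries need to be held in memory, which is the point of the local approach for gigabyte-scale files.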
NASA Astrophysics Data System (ADS)
Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.
1989-04-01
Inclusion of mass distribution information in the biomechanical analysis of motion is a requirement for accurate calculation of the external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric, and cadaveric studies have been developed and espoused in the literature. Because of limitations in the accuracy of inertial properties predicted by regression equations developed on one population and then applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.
Segmentation of optic disc and optic cup in retinal fundus images using shape regression.
Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil
2016-08-01
Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using the circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85, respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
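The reported accuracies use the Dice metric, which for two binary masks is twice the overlap divided by the combined mask size. A minimal sketch with a hypothetical toy mask pair, not real fundus segmentations:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 1-D "masks" standing in for segmented optic-disc pixels
pred  = [1, 1, 1, 0, 0]
truth = [1, 1, 0, 0, 0]
score = dice_coefficient(pred, truth)  # 2*2 / (3+2) = 0.8
```

A Dice of 1.0 means perfect agreement, so the 0.95 and 0.85 figures above indicate very high overlap with expert annotations.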
NiftyNet: a deep-learning platform for medical imaging.
Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom
2018-05-01
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma
Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.
2017-01-01
Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post-contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms: 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post-contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences between the manual and automated segmentation methods applied to these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post-contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features.
As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296
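The cross-platform agreement above is quantified with Spearman's rank correlation, which depends only on the ordering of the paired volume estimates. A minimal sketch using `scipy.stats.spearmanr` on hypothetical paired volumes (the numbers are illustrative, not study data):

```python
from scipy.stats import spearmanr

# Hypothetical paired tumor-volume estimates (mL) from two platforms
slicer   = [10.2, 25.1, 8.7, 40.3, 19.9, 33.0]
velocity = [11.0, 24.0, 9.5, 41.2, 18.7, 34.1]

# Identical rank orderings give a perfect rank correlation even though
# the raw values differ between platforms.
rho, p = spearmanr(slicer, velocity)
```

Because only ranks matter, a platform that systematically over- or under-estimates volumes can still show near-perfect Spearman agreement, which is why concordance of absolute values warrants separate checking.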
Wu, Ping-An; Li, Yun-Liang; Wu, Han-Jiang; Wang, Kai; Fan, Guo-Zheng
2007-09-01
To investigate the relationship between muscle segment homeobox gene-1 (MSX1) and the genetic susceptibility of nonsyndromic cleft lip and palate (NSCLP) in Hunan Hans. One microsatellite DNA marker CA repeat in MSX1 intron region was used as genetic marker. The genotypes of 387 members in 129 NSCLP nuclear family trios were analyzed by polymerase chain reaction (PCR) and denaturing polyacrylamide gel electrophoresis. Then transmission disequilibrium test (TDT) and Logistic regression analysis were used to conduct association analysis. TDT analysis confirmed that CA4 allele in CL/P and CPO groups preferentially transmitted to the affected offspring (P = 0.018, P = 0.041). Logistic regression analysis indicated that the recessive model of inheritance was supported, and CA4 itself or CA4 acting as a marker for a disease allele or haplotype was inherited in a recessive fashion (P = 0.009). MSX1 gene is associated with NSCLP, and MSX1 gene may be directly involved either in the etiology of NSCLP or in linkage disequilibrium with disease-predisposing sites.
Nucleus detection using gradient orientation information and linear least squares regression
NASA Astrophysics Data System (ADS)
Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.
2015-03-01
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells, and nuclei. The segmentation is used to characterize tissue specimens and to determine disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation (direction) information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. By taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in the linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.
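The false-junction test above relies on goodness of fit in the linear least squares sense: points along a single true boundary segment fit one line well, while a genuine junction produces a large residual. A minimal sketch under that interpretation, with made-up coordinates:

```python
import numpy as np

def line_fit_rss(x, y):
    """Fit y = a*x + b by linear least squares; return residual sum of squares."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

x = np.array([0.0, 1.0, 2.0, 3.0])
collinear = np.array([1.0, 2.0, 3.0, 4.0])   # smooth boundary -> RSS ~ 0
bent      = np.array([1.0, 2.0, 3.0, 10.0])  # kink -> large RSS

# Keep a candidate junction only if splitting there reduces the misfit,
# i.e. the unsplit segment fits a single line poorly.
keep_junction = line_fit_rss(x, bent) > line_fit_rss(x, collinear) + 1e-6
```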
Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki
2017-02-01
This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free mass (FFM) of the whole body and of body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, and no systematic error. Applying each equation derived in the model-development group to the cross-validation and overweight groups produced no significant differences between the measured and predicted FFM values and no systematic errors, with the exception that arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
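The model-development step above regresses the DXA-measured FFM of a segment on its BI index, (length)²/Z. A minimal sketch with hypothetical calibration values (not the study's data):

```python
import numpy as np

# Hypothetical calibration data for one body segment:
# BI index = length**2 / Z versus DXA-measured fat-free mass (kg)
bi_index = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
ffm      = np.array([10.1, 12.4, 15.2, 17.3, 20.1])

# Simple one-predictor linear regression, as in model development
slope, intercept = np.polyfit(bi_index, ffm, 1)

# Coefficient of determination for the fitted prediction equation
predicted = slope * bi_index + intercept
r2 = 1 - np.sum((ffm - predicted) ** 2) / np.sum((ffm - ffm.mean()) ** 2)
```

In a cross-validation group, the same fitted `slope` and `intercept` would be applied to new BI-index values and the predictions compared with measured FFM to check for systematic error.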
Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.
Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L
2010-07-01
The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis, and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters (mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size) were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented to investigate whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques.
Stepwise multiple linear-regression formulas were derived and used to predict the TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) for predicting TAG, and to compare the sensitivity and specificity of the methods. The best speckle-size estimates and overall performance (R² = 0.71, AUC = 0.94) were achieved using an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
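The final step above scores a continuous TAG prediction against the 50 mg/g lipidosis cutoff with ROC analysis. A minimal sketch using `sklearn.metrics.roc_auc_score` on invented predictions:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical regression-predicted TAG values (mg/g liver wet weight)
predicted_tag = [12.0, 35.0, 48.0, 55.0, 72.0, 90.0]
# Labels from the 50 mg/g cutoff applied to the biochemical reference
has_lipidosis = [0, 0, 0, 1, 1, 1]

# AUC of 1.0 means the predictions rank every lipidotic liver above
# every healthy one; 0.5 is chance-level ranking.
auc = roc_auc_score(has_lipidosis, predicted_tag)
```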
Bashir, Usman; Azad, Gurdip; Siddique, Muhammad Musib; Dhillon, Saana; Patel, Nikheel; Bassett, Paul; Landau, David; Goh, Vicky; Cook, Gary
2017-12-01
Measures of tumour heterogeneity derived from 18-fluoro-2-deoxyglucose positron emission tomography/computed tomography (¹⁸F-FDG PET/CT) scans are increasingly reported as potential biomarkers of non-small cell lung cancer (NSCLC) for classification and prognostication. Several segmentation algorithms have been used to delineate tumours, but their effects on the reproducibility and predictive and prognostic capability of derived parameters have not been evaluated. The purpose of our study was to retrospectively compare various segmentation algorithms in terms of inter-observer reproducibility and prognostic capability of texture parameters derived from NSCLC ¹⁸F-FDG PET/CT images. Fifty-three NSCLC patients (mean age 65.8 years; 31 males) underwent pre-chemoradiotherapy ¹⁸F-FDG PET/CT scans. Three readers segmented tumours using freehand (FH), 40% of maximum intensity threshold (40P), and fuzzy locally adaptive Bayesian (FLAB) algorithms. The intraclass correlation coefficient (ICC) was used to measure the inter-observer variability of the texture features derived by the three segmentation algorithms. Univariate Cox regression was used on 12 commonly reported texture features to predict overall survival (OS) for each segmentation algorithm. Model quality was compared across segmentation algorithms using the Akaike information criterion (AIC). 40P was the most reproducible algorithm (median ICC 0.9; interquartile range [IQR] 0.85-0.92) compared with FLAB (median ICC 0.83; IQR 0.77-0.86) and FH (median ICC 0.77; IQR 0.7-0.85). On univariate Cox regression analysis, 40P found 2 of 12 variables, first-order entropy and grey-level co-occurrence matrix (GLCM) entropy, to be significantly associated with OS; FH and FLAB found 1, first-order entropy. For each tested variable, survival models for all three segmentation algorithms were of similar quality, exhibiting comparable AIC values with overlapping 95% CIs.
Compared with both FLAB and FH, segmentation with 40P yields superior inter-observer reproducibility of texture features. Survival models generated by all three segmentation algorithms are of at least equivalent utility. Our findings suggest that a segmentation algorithm using a 40% of maximum threshold is acceptable for texture analysis of ¹⁸F-FDG PET in NSCLC.
Pirlich, M; Schütz, T; Ockenga, J; Biering, H; Gerl, H; Schmidt, B; Ertl, S; Plauth, M; Lochs, H
2003-04-01
Estimation of body cell mass (BCM) has been regarded as valuable for assessing malnutrition. We investigated the value of segmental bioelectrical impedance analysis (BIA) for BCM estimation in malnourished subjects and in acromegaly. Nineteen controls and 63 patients with either reduced (liver cirrhosis without and with ascites, Cushing's disease) or increased BCM (acromegaly) were included. Whole-body and segmental BIA (separately measuring arm, trunk, and leg) at 50 kHz was compared with BCM measured by total-body potassium. Multiple regression analysis was used to develop specific equations for BCM in each subgroup. Compared with whole-body BIA equations, the inclusion of arm resistance improved the specific equations in cirrhotic patients without ascites and in Cushing's disease, resulting in excellent prediction of BCM (R² = 0.93 and 0.92, respectively; both P < 0.001). In acromegaly, inclusion of the resistance and reactance of the trunk best described BCM (R² = 0.94, P < 0.001). In controls and in cirrhotic patients with ascites, segmental impedance parameters did not improve BCM prediction (best values obtained by whole-body measurements: R² = 0.88 and 0.60; P < 0.001 and < 0.003, respectively). Segmental BIA improves the assessment of BCM in malnourished patients and in acromegaly, but not in patients with severe fluid overload. Copyright 2003 Elsevier Science Ltd.
Historical Data Analysis of Hospital Discharges Related to the Amerithrax Attack in Florida
Burke, Lauralyn K.; Brown, C. Perry; Johnson, Tammie M.
2016-01-01
Interrupted time-series analysis (ITSA) can be used to identify, quantify, and evaluate the magnitude and direction of an event on the basis of time-series data. This study evaluates the impact of the bioterrorist anthrax attacks ("Amerithrax") on hospital inpatient discharges in the metropolitan statistical area of Palm Beach, Broward, and Miami-Dade counties in the fourth quarter of 2001. Three statistical methods, the standardized incidence ratio (SIR), segmented regression, and an autoregressive integrated moving average (ARIMA), were used to determine whether Amerithrax influenced inpatient utilization. The SIR found a non-statistically significant 2 percent decrease in hospital discharges. Although the segmented regression test found a slight increase in the discharge rate during the fourth quarter, it was also not statistically significant and therefore could not be attributed to Amerithrax. Segmented regression diagnostics performed in preparation for ARIMA indicated that the quarterly time series was not serially correlated; this violated one of the assumptions of the ARIMA method, which therefore could not properly evaluate the impact on the time-series data. The lack of granularity of the time frames hindered successful evaluation of the impact by all three analytic methods. This study demonstrates that the granularity of the data points is as important as the number of data points in a time series. ITSA is important for the ability to evaluate the impact that any hazard may have on inpatient utilization. Knowledge of hospital utilization patterns during disasters offers healthcare and civic professionals valuable information to plan, respond, mitigate, and evaluate any outcomes stemming from biothreats. PMID:27843420
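The segmented regression test described above fits a single model with terms for the baseline level, the baseline trend, the level change at the interruption, and the slope change after it. A minimal sketch on synthetic quarterly rates (the breakpoint and numbers are invented):

```python
import numpy as np

# Hypothetical quarterly discharge rates; event occurs after quarter 8,
# producing a level jump of +4 with no change in slope.
rate = np.array([50, 51, 52, 53, 54, 55, 56, 57,    # pre-event
                 62, 63, 64, 65, 66, 67, 68, 69])   # post-event

t = np.arange(len(rate))
post = (t >= 8).astype(float)   # indicator for the post-event period
t_post = post * (t - 8)         # time elapsed since the event

# Design matrix: intercept, baseline trend, level change, slope change
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, rate.astype(float), rcond=None)
intercept, trend, level_change, slope_change = beta
```

With real data, the standard errors of `level_change` and `slope_change` (and a check for serial correlation in the residuals) would decide whether the interruption had a statistically significant effect.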
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R² = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
NASA Astrophysics Data System (ADS)
Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.
2017-12-01
Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge, which has previously been used as the default breakpoint in segmented regression models to discern differences between pre- and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics, as well as Bayesian regression intercepts and slopes. A SOM defined two clusters of high-flux basins: one in which PP flux was predominantly episodic and hydrologically driven, and another in which sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at pre-threshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response, but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
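The study's Bayesian machinery estimates the threshold position jointly with the segment parameters. A much cruder, non-Bayesian sketch conveys the idea by grid-searching candidate breakpoints and keeping the one that minimizes the two-segment least-squares error (all data here are synthetic):

```python
import numpy as np

def best_breakpoint(logq, logc, candidates):
    """Return the candidate threshold minimizing two-segment least-squares error."""
    best, best_sse = None, np.inf
    for bp in candidates:
        sse = 0.0
        for mask in (logq <= bp, logq > bp):
            if mask.sum() < 2:          # need at least 2 points per segment
                sse = np.inf
                break
            coef = np.polyfit(logq[mask], logc[mask], 1)
            resid = logc[mask] - np.polyval(coef, logq[mask])
            sse += float(resid @ resid)
        if sse < best_sse:
            best, best_sse = bp, sse
    return best

# Synthetic log concentration-discharge data with a slope break at logq = 1.0
logq = np.linspace(0, 2, 21)
logc = np.where(logq <= 1.0, 0.2 * logq, 0.2 + 1.5 * (logq - 1.0))
bp = best_breakpoint(logq, logc, candidates=np.linspace(0.3, 1.7, 15))
```

Unlike this point estimate, the Bayesian approach yields a posterior distribution over the threshold, which is what lets the authors show that breakpoints range well away from the median discharge.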
Beef quality grading using machine vision
NASA Astrophysics Data System (ADS)
Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha
2000-12-01
A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. The longissimus dorsi (l.d.) muscle was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. The match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T
2018-03-08
Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. We propose the Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression, in addition to mean structure, to model the probability that a lesion is contained in each voxel. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital, in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded an average Sørensen-Dice coefficient (DSC) of .57 and a partial AUC of .68 calculated with false positive rates up to 1%. This was superior to performance using OASIS and LesionTOADS. The proposed method also performed competitively in the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.
Fast and robust segmentation of the striatum using deep convolutional neural networks.
Choi, Hongyoon; Jin, Kyong Hwan
2016-12-01
Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNN). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global and a Local CNN: the Global CNN determined approximate locations of the striatum by performing a regression of input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted. The cropped MR volumes and the output volumes of the Global CNN were used as inputs to the Local CNN, which predicted the accurate label of every voxel. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested using another independent dataset, which showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. Comparison with existing method: Segmentation performance of our proposed method was comparable with that of FreeSurfer. The running time of our approach was approximately three seconds. We propose a fast and accurate deep CNN-based segmentation for small brain structures which can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
Garde, Ainara; Dehkordi, Parastoo; Wensley, David; Ansermino, J Mark; Dumont, Guy A
2015-01-01
Obstructive sleep apnea (OSA) disrupts normal ventilation during sleep and can lead to serious health problems in children if left untreated. Polysomnography, the gold standard for OSA diagnosis, is resource intensive and requires a specialized laboratory. Thus, we proposed to use the Phone Oximeter™, a portable device integrating pulse oximetry with a smartphone, to detect OSA events. As a proportion of OSA events occur without oxygen desaturation (defined as SpO2 decreases ≥ 3%), we suggest combining SpO2 and pulse rate variability (PRV) analysis to identify all OSA events and provide a more detailed sleep analysis. We recruited 160 children and recorded pulse oximetry consisting of SpO2 and plethysmography (PPG) using the Phone Oximeter™, alongside standard polysomnography. A sleep technician visually scored all OSA events with and without oxygen desaturation from polysomnography. We divided pulse oximetry signals into 1-min signal segments and extracted several features from SpO2 and PPG analysis in the time and frequency domain. Segments with OSA, especially the ones with oxygen desaturation, presented greater SpO2 variability and modulation reflected in the spectral domain than segments without OSA. Segments with OSA also showed higher heart rate and sympathetic activity through the PRV analysis relative to segments without OSA. PRV analysis was more sensitive than SpO2 analysis for identification of OSA events without oxygen desaturation. Combining SpO2 and PRV analysis enhanced OSA event detection through a multiple logistic regression model. The area under the ROC curve increased from 81% to 87%. Thus, the Phone Oximeter™ might be useful to monitor sleep and identify OSA events with and without oxygen desaturation at home.
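The final detection step above is a logistic regression on combined SpO2 and PRV features per 1-min segment. A minimal sketch on simulated, well-separated feature values (the feature names and distributions are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # simulated 1-min segments per class

# Hypothetical per-segment features: SpO2 variability and a PRV index.
# OSA segments (class 1) are simulated with higher values of both.
spo2_var = np.concatenate([rng.normal(0.5, 0.2, n), rng.normal(1.5, 0.2, n)])
prv_idx  = np.concatenate([rng.normal(1.0, 0.3, n), rng.normal(2.0, 0.3, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = no OSA, 1 = OSA

# Multiple logistic regression combining both feature families
X = np.column_stack([spo2_var, prv_idx])
model = LogisticRegression().fit(X, y)
accuracy = model.score(X, y)
```

In the study's setting the payoff of combining the two feature sets is that PRV features pick up OSA events without oxygen desaturation, which SpO2 features alone miss.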
Mulinari, Shai; Barmchi, Mojgan Padash
2008-01-01
Morphogenesis of the Drosophila embryo is associated with dynamic rearrangement of the actin cytoskeleton mediated by small GTPases of the Rho family. These GTPases act as molecular switches that are activated by guanine nucleotide exchange factors. One of these factors, DRhoGEF2, plays an important role in the constriction of actin filaments during pole cell formation, blastoderm cellularization, and invagination of the germ layers. Here, we show that DRhoGEF2 is equally important during morphogenesis of segmental grooves, which become apparent as tissue infoldings during mid-embryogenesis. Examination of DRhoGEF2-mutant embryos indicates a role for DRhoGEF2 in the control of cell shape changes during segmental groove morphogenesis. Overexpression of DRhoGEF2 in the ectoderm recruits myosin II to the cell cortex and induces cell contraction. At groove regression, DRhoGEF2 is enriched in cells posterior to the groove that undergo apical constriction, indicating that groove regression is an active process. We further show that the Formin Diaphanous is required for groove formation and strengthens cell junctions in the epidermis. Morphological analysis suggests that Dia regulates cell shape in a way distinct from DRhoGEF2. We propose that DRhoGEF2 acts through Rho1 to regulate acto-myosin constriction but not Diaphanous-mediated F-actin nucleation during segmental groove morphogenesis. PMID:18287521
Logistic Stick-Breaking Process
Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.
2013-01-01
A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
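The stick-breaking construction described above can be sketched numerically. This illustrates only the covariate-dependent logistic sticks; the shrinkage priors and variational inference of the paper are omitted, and the stick parameters below are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lsbp_weights(x, w, b):
    """Mixture weights at location x from K-1 logistic-regression sticks.
    pi_k(x) = sigmoid(f_k(x)) * prod_{j<k} (1 - sigmoid(f_j(x))); the last
    component takes the remaining mass so the weights sum to one."""
    sticks = sigmoid(w * x + b)                # K-1 logistic functions of x
    remain = np.concatenate([[1.0], np.cumprod(1.0 - sticks)])
    pi = remain[:-1] * sticks
    return np.append(pi, remain[-1])

# Three sticks -> four components; because each stick is a smooth logistic
# function of location, nearby x receive similar weights, which is what
# favors contiguous, spatially localized segments.
w = np.array([4.0, -2.0, 1.0])
b = np.array([-1.0, 0.5, 0.0])
pi_a = lsbp_weights(0.30, w, b)
pi_b = lsbp_weights(0.31, w, b)
print(pi_a, pi_a.sum())
```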
Zipursky, Robert B; Cunningham, Charles E; Stewart, Bailey; Rimas, Heather; Cole, Emily; Vaz, Stephanie McDermid
2017-07-01
The majority of individuals with schizophrenia will achieve a remission of psychotic symptoms, but few will meet criteria for recovery. Little is known about what outcomes are important to patients. We carried out a discrete choice experiment to characterize the outcome preferences of patients with psychotic disorders. Participants (N=300) were recruited from two clinics specializing in psychotic disorders. Twelve outcomes were each defined at three levels and incorporated into a computerized survey with 15 choice tasks. Utility values and importance scores were calculated for each outcome level. Latent class analysis was carried out to determine whether participants were distributed into segments with different preferences. Multinomial logistic regression was used to identify predictors of segment membership. Latent class analysis revealed three segments of respondents. The first segment (48%), which we labeled "Achievement-focused," preferred to have a full-time job, to live independently, to be in a long-term relationship, and to have no psychotic symptoms. The second segment (29%), labeled "Stability-focused," preferred to not have a job, to live independently, and to have some ongoing psychotic symptoms. The third segment (23%), labeled "Health-focused," preferred to not have a job, to live in supervised housing, and to have no psychotic symptoms. Segment membership was predicted by education, socioeconomic status, psychotic symptom severity, and work status. This study has revealed that patients with psychotic disorders are distributed between segments with different outcome preferences. New approaches to improve outcomes for patients with psychotic disorders should be informed by a greater understanding of patient preferences and priorities. Copyright © 2016 Elsevier B.V. All rights reserved.
Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A
2017-07-01
Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly-available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a Jaccard index of 0.77, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
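The polar parameterization described above, encoding a contour as radial distances from the LV centerpoint (the quantities the network regresses), can be sketched on a synthetic circular contour. The binning scheme and grid sizes here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def contour_to_radii(contour, center, n_angles=32):
    """Encode a closed contour as one radial distance per angular bin
    (mean radius of the contour points falling in each bin)."""
    d = contour - center
    ang = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r = np.hypot(d[:, 0], d[:, 1])
    bins = (ang / (2 * np.pi) * n_angles).astype(int)
    radii = np.zeros(n_angles)
    for k in range(n_angles):
        radii[k] = r[bins == k].mean() if np.any(bins == k) else np.nan
    return radii

def radii_to_contour(radii, center):
    """Decode radial parameters back to contour points."""
    theta = np.arange(len(radii)) * 2 * np.pi / len(radii)
    return center + np.column_stack([radii * np.cos(theta),
                                     radii * np.sin(theta)])

# A circle of radius 20 px around (64, 64): every radial parameter
# should recover a value of ~20.
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
center = np.array([64.0, 64.0])
contour = center + 20 * np.column_stack([np.cos(theta), np.sin(theta)])
radii = contour_to_radii(contour, center)
print(radii.round(2))
```

Because each angle maps to exactly one radius, the representation cannot produce self-intersecting or fragmented contours, which is one way a radial parameterization bakes in a physical constraint that pixel classification lacks.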
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning-based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step in which the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set: only 7 and 5 features for the two cases, respectively. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy, reaching a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye
Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael
2017-01-01
Purpose: To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods: Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. Accuracy of each model was examined through comparison of predicted 50-year-old equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results: Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. An F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized models revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions: Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition, combined with core GCL thickness data, provide a framework to analyze GCL loss in ocular disease. PMID:28632847
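A segmented linear fit of the kind compared against simple linear regression above can be sketched with a grid search over candidate knots. The thickness-age data below are synthetic (a flat-then-declining curve), not from the study:

```python
import numpy as np

def fit_segmented(x, y, knots):
    """Continuous two-segment linear fit: choose the knot minimizing SSE.
    Basis [1, x, (x - knot)_+] enforces continuity at the knot."""
    best = None
    for k in knots:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[2]:
            best = (k, beta, sse)
    return best

rng = np.random.default_rng(1)
age = rng.uniform(20, 85, 300)
# Synthetic GCL thickness: stable until ~50 years, then declining, plus noise
thick = 35 - 0.25 * np.clip(age - 50, 0, None) + rng.normal(0, 1, 300)
knot, beta, sse_seg = fit_segmented(age, thick, np.arange(30, 75, 1.0))

# Compare against a single straight line; a much larger SSE for the line is
# the kind of evidence an F-test would formalize in favor of the segmented model.
X1 = np.column_stack([np.ones_like(age), age])
b1, *_ = np.linalg.lstsq(X1, thick, rcond=None)
sse_lin = np.sum((thick - X1 @ b1) ** 2)
print(f"knot ~{knot:.0f} y, slope after knot {beta[1] + beta[2]:.2f}, "
      f"SSE segmented {sse_seg:.0f} vs linear {sse_lin:.0f}")
```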
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
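The reparameterization idea, building the continuity side conditions into the basis so the constrained spline can be fitted as an ordinary regression, can be sketched with a truncated-power basis. This is a fixed-effects-only illustration on synthetic data; the random effects of the mixed model are omitted:

```python
import numpy as np

def spline_basis(x, knot):
    """Varying-order piecewise polynomial via a truncated-power basis:
    linear below the knot; the truncated quadratic term lets the curve
    become quadratic above it while staying continuous and smooth at the
    knot, so no explicit constraint equations are needed."""
    return np.column_stack([
        np.ones_like(x),
        x,
        np.clip(x - knot, 0, None) ** 2,
    ])

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 200))
y_true = 1.0 + 0.5 * x + 0.3 * np.clip(x - 5, 0, None) ** 2
y = y_true + rng.normal(0, 0.2, 200)

# Ordinary least squares on the implicitly constrained basis
X = spline_basis(x, knot=5.0)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # estimates should be close to [1.0, 0.5, 0.3]
```

The same basis columns can be entered as fixed (and, with subject-level copies, random) effects in any mixed-model routine, which is what makes the approach easy to program in standard software.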
Estimation of stature from radiologic anthropometry of the lumbar vertebral dimensions in Chinese.
Zhang, Kui; Chang, Yun-feng; Fan, Fei; Deng, Zhen-hua
2015-11-01
The present study aimed to assess the relationship between radiologic anthropometry of the lumbar vertebral dimensions and stature in Chinese subjects, and to develop regression formulae to estimate stature from these dimensions. A total of 412 normal, healthy volunteers, comprising 206 males and 206 females, were recruited. Linear regression analyses were performed to assess the correlation between stature and the lengths of various segments of the lumbar vertebral column. Among the regression equations created for a single variable, the predictive value was greatest for reconstruction of stature from the lumbar segment, in both sexes and in subgroup analysis. When an individual vertebral body was used, the posterior vertebral body heights of L3 gave the most accurate results for the male group; the central vertebral body heights of L1 were most accurate for the female group and for females above 45 years of age; and the central vertebral body heights of L3 were most accurate for both sexes aged 20-45 years and for males above 45 years. The anterior vertebral body heights of L5 gave the least accurate results, except in the male group above 45 years, where the anterior vertebral body heights of L4 were least accurate. As expected, multiple regression equations were more successful than equations derived from a single variable. These observations suggest that lumbar vertebral dimensions are useful for stature estimation in the Chinese population. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
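The first, univariate check described above, flagging failures from a box-whisker plot, amounts to the standard interquartile-range fence rule, sketched here on a made-up quality-feature vector:

```python
import numpy as np

def box_whisker_outliers(values, whisker=1.5):
    """Flag values outside the box-whisker fences
    (Q1 - 1.5*IQR, Q3 + 1.5*IQR), the usual non-parametric outlier rule."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return (values < lo) | (values > hi)

# Hypothetical per-subject feature (e.g. normalized tract volume from the
# automatic CP segmentation); two entries have implausible values.
feature = np.array([1.02, 0.98, 1.05, 0.97, 1.01,
                    0.99, 1.03, 0.40, 1.00, 1.96])
flags = box_whisker_outliers(feature)
print(np.where(flags)[0])  # indices of suspected segmentation failures
```

The supervised alternative in the abstract replaces this rule with a classifier (LDA, logistic regression, SVM, or random forest) trained on the same features against manually categorized success/failure labels.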
Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.
2014-01-01
Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of how a factor influences an outcome variable, the factor may have no effect, or an effect in one direction, only within a certain range; beyond a certain threshold, the size and/or direction of the effect changes. This is called a threshold effect. To examine whether a factor (x) has a threshold effect on an outcome variable (y), a smooth curve can first be fitted to see whether a piecewise linear relationship is present; a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling are then used to analyze the threshold effect. Empower Stats software, developed by the American company X & Y Solutions Inc, has a threshold-effect analysis module. The user may specify a threshold, in which case the data are modeled with the given segmentation, or leave it unspecified, in which case the software determines the optimal threshold automatically and calculates a confidence interval for it.
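The workflow described here, locating an optimal threshold by segmented regression and attaching a bootstrap confidence interval, can be sketched generically as follows. This is a numpy illustration on synthetic data, not Empower Stats code:

```python
import numpy as np

def fit_threshold(x, y, candidates):
    """Grid-search the threshold of a continuous two-piece linear model."""
    best_k, best_sse = None, np.inf
    for k in candidates:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 250)
# Positive effect below x = 6, negative above it (a threshold effect)
y = 0.8 * x - 1.0 * np.clip(x - 6, 0, None) + rng.normal(0, 0.5, 250)

grid = np.linspace(2, 9, 71)
k_hat = fit_threshold(x, y, grid)

# Bootstrap resampling for a confidence interval on the threshold
boot = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))
    boot.append(fit_threshold(x[idx], y[idx], grid))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"threshold ~{k_hat:.1f}, 95% bootstrap CI ({lo:.1f}, {hi:.1f})")
```

A likelihood ratio test comparing this segmented fit against a single straight line would complete the procedure the abstract describes.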
Prevalence of Incidental Clinoid Segment Saccular Aneurysms.
Revilla-Pacheco, Francisco; Escalante-Seyffert, María Cecilia; Herrada-Pineda, Tenoch; Manrique-Guzman, Salvador; Perez-Zuniga, Irma; Rangel-Suarez, Sergio; Rubalcava-Ortega, Johnatan; Loyo-Varela, Mauro
2018-04-12
Clinoid segment aneurysms are cerebral vascular lesions recently described in the neurosurgical literature. They arise from the clinoid segment of the internal carotid artery, which is the segment limited rostrally by the dural carotid ring and caudally by the carotid-oculomotor membrane. Although clinoid segment aneurysms represent a common incidental finding in magnetic resonance studies, their prevalence has not yet been reported. Our objective was to determine the prevalence of incidental clinoid segment saccular aneurysms diagnosed by magnetic resonance imaging, as well as their anatomic architecture and their association with smoking, arterial hypertension, age, and sex of patients. A total of 500 patients were prospectively studied with magnetic resonance imaging time-of-flight sequence and angioresonance with contrast material, to search for incidental saccular intracranial aneurysms. The site of primary interest was the clinoid segment, but the presence of aneurysms in any other location was determined for comparison. The relation among the presence of clinoid segment aneurysms, demographic factors, and secondary diagnosis of arterial hypertension, smoking, and other vascular/neoplastic cerebral lesions was analyzed. We found a global prevalence of incidental aneurysms of 7% (95% confidence interval, 5-9), with a prevalence of clinoid segment aneurysms of 3% (95% confidence interval, 2-4). Univariate logistic regression analysis showed a statistically significant relationship among incidental aneurysms, systemic arterial hypertension (P = 0.000), and smoking (P = 0.004). In the studied population, incidental clinoid segment aneurysms constituted the variety with the highest prevalence. Copyright © 2018 Elsevier Inc. All rights reserved.
Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.
2015-03-01
During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
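The voting step described above can be sketched as follows (in 2D for brevity). A trained random regression forest is stood in for by noisy synthetic displacement predictions, an assumption made purely for illustration, so only the vote-accumulation and peak-finding logic is shown:

```python
import numpy as np

rng = np.random.default_rng(4)
true_landmark = np.array([40, 25])
shape = (64, 64)

# Surrounding sample points; each "predicts" a displacement toward the
# landmark (here: the true displacement plus noise, standing in for a
# regression forest's output).
samples = rng.integers(0, 64, size=(500, 2))
pred_disp = (true_landmark - samples) + rng.normal(0, 1.5, (500, 2))

# Each sample casts a vote at (point + predicted displacement); votes
# accumulate in a common voting map.
votes = np.zeros(shape)
for p, d in zip(samples, pred_disp):
    v = np.round(p + d).astype(int)
    if 0 <= v[0] < shape[0] and 0 <= v[1] < shape[1]:
        votes[v[0], v[1]] += 1

# The voting-map peak is the landmark position estimate.
estimate = np.unravel_index(np.argmax(votes), shape)
print(estimate)
```

Because votes arrive from sample points in all directions, the search is omni-directional rather than confined to a profile perpendicular to the model surface.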
ERIC Educational Resources Information Center
Bodner, Todd E.
2016-01-01
This article revisits how the end points of plotted line segments should be selected when graphing interactions involving a continuous target predictor variable. Under the standard approach, end points are chosen at ±1 or 2 standard deviations from the target predictor mean. However, when the target predictor and moderator are correlated or the…
Magnetic resonance T1 gradient-echo imaging in hepatolithiasis.
Safar, F; Kamura, T; Okamuto, K; Sasai, K; Gejyo, F
2005-01-01
We examined the role of magnetic resonance T1-weighted gradient-echo (MRT1-GE) imaging in hepatolithiasis. MRT1-GE, precontrast computed tomography (CT), and magnetic resonance cholangiopancreatography (MRCP) of 10 patients with hepatolithiasis were compared for their diagnostic accuracies in the detection and localization of intrahepatic calculi. The diagnosis of hepatolithiasis was confirmed by surgery. For localization of the stone, we divided the bile ducts into six areas: right and left hepatic ducts and bile ducts of the lateral, medial, right anterior, and right posterior segments of the liver. Chemical analysis of the stones was performed in eight patients. The total number of segments proved by surgery to contain stones was 18. Although not significantly different, the sensitivity of MRT1-GE was 77.8% (14 of 18 segments), higher than that of MRCP (66.7%, 12 of 18 segments) and that of CT (50%, nine of 18 segments). The sensitivity of magnetic resonance imaging (MRCP + MRT1) was significantly higher than that of CT (p < 0.01). Multiple logistic regression analysis showed that the result of surgery was significantly affected only by the result of magnetic resonance imaging. On MRT1-GE, all the depicted stones appeared as high-intensity signal areas within the low-intensity bile duct irrespective of their chemical composition. MRT1-GE imaging provides complementary information concerning hepatolithiasis.
Bybee, Kevin A; Motiei, Arashk; Syed, Imran S; Kara, Tomas; Prasad, Abhiram; Lennon, Ryan J; Murphy, Joseph G; Hammill, Stephen C; Rihal, Charanjit S; Wright, R Scott
2007-01-01
The presentation and electrocardiographic (ECG) characteristics of transient left ventricular apical ballooning syndrome (TLVABS) can be similar to that of anterior ST-segment elevation myocardial infarction (STEMI). We tested the hypothesis that the ECG on presentation could reliably differentiate these syndromes. Between January 1, 2002 and July 31, 2004, we identified 18 consecutive patients with TLVABS who were matched with 36 subjects presenting with acute anterior STEMI due to atherothrombotic left anterior descending coronary artery occlusion. All patients with TLVABS were women (mean age, 72.0 +/- 13.1 years). The heart rate, PR interval, QRS duration, and corrected QT interval were similar between groups. Distribution of ST elevation was similar, but patients with anterior STEMI exhibited greater ST elevation. Recursive partitioning analysis indicated that the combination of ST elevation in lead V2 of less than 1.75 mm and ST-segment elevation in lead V3 of less than 2.5 mm was a suggestive predictor of TLVABS (sensitivity, 67%; specificity, 94%). Conditional logistic regression indicated that the formula: (3 x ST-elevation lead V2) + (ST-elevation V3) + (2 x ST-elevation V5) allowed possible discrimination between TLVABS and anterior STEMI with an optimal cutoff level of less than 11.5 mm for TLVABS (sensitivity, 94%; specificity, 72%). Patients with TLVABS were less likely to have concurrent ST-segment depression (6% vs 44%; P = .003). Women presenting with TLVABS have similar ECG findings to patients with anterior infarct but with less-prominent ST-segment elevation in the anterior precordial ECG leads. These ECG findings are relatively subtle and do not have sufficient predictive value to allow reliable emergency differentiation of these syndromes.
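The reported discrimination rule can be written directly as code. The lead weights and the 11.5 mm cutoff are those quoted in the abstract; the example ST-elevation values are invented:

```python
def tlvabs_score(st_v2, st_v3, st_v5):
    """Score from the abstract's formula, all inputs in mm of ST elevation:
    (3 x ST V2) + (ST V3) + (2 x ST V5)."""
    return 3 * st_v2 + st_v3 + 2 * st_v5

def suggests_tlvabs(st_v2, st_v3, st_v5, cutoff=11.5):
    """A score below the cutoff suggested TLVABS rather than anterior
    STEMI in this series (sensitivity 94%, specificity 72%)."""
    return tlvabs_score(st_v2, st_v3, st_v5) < cutoff

print(suggests_tlvabs(1.5, 2.0, 1.0))  # score 8.5 -> True
print(suggests_tlvabs(3.0, 4.0, 2.0))  # score 17.0 -> False
```

As the authors caution, the rule's predictive value is insufficient for reliable emergency differentiation; it is descriptive, not diagnostic.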
Hitt, Nathaniel P.; Floyd, Michael; Compton, Michael; McDonald, Kenneth
2016-01-01
Chrosomus cumberlandensis (Blackside Dace [BSD]) and Etheostoma spilotum (Kentucky Arrow Darter [KAD]) are fish species of conservation concern due to their fragmented distributions, their low population sizes, and threats from anthropogenic stressors in the southeastern United States. We evaluated the relationship between fish abundance and stream conductivity, an index of environmental quality and potential physiological stressor. We modeled occurrence and abundance of KAD in the upper Kentucky River basin (208 samples) and BSD in the upper Cumberland River basin (294 samples) for sites sampled between 2003 and 2013. Segmented regression indicated a conductivity change-point for BSD abundance at 343 μS/cm (95% CI: 123–563 μS/cm) and for KAD abundance at 261 μS/cm (95% CI: 151–370 μS/cm). In both cases, abundances were negligible above estimated conductivity change-points. Post-hoc randomizations accounted for variance in estimated change points due to unequal sample sizes across the conductivity gradients. Boosted regression-tree analysis indicated stronger effects of conductivity than other natural and anthropogenic factors known to influence stream fishes. Boosted regression trees further indicated threshold responses of BSD and KAD occurrence to conductivity gradients in support of segmented regression results. We suggest that the observed conductivity relationship may indicate energetic limitations for insectivorous fishes due to changes in benthic macroinvertebrate community composition.
Niemeijer, Meindert; Dumitrescu, Alina V.; van Ginneken, Bram; Abrámoff, Michael D.
2011-03-01
Parameters extracted from the vasculature on the retina are correlated with various conditions such as diabetic retinopathy and cardiovascular diseases such as stroke. Segmentation of the vasculature on the retina has been a topic that has received much attention in the literature over the past decade. Analysis of the segmentation result, however, has only received limited attention, with most works describing methods to accurately measure the width of the vessels. Analyzing the connectedness of the vascular network is an important step towards the characterization of the complete vascular tree. The retinal vascular tree, from an image interpretation point of view, originates at the optic disc and spreads out over the retina. The tree bifurcates and the vessels also cross each other. The points where this happens form the key to determining the connectedness of the complete tree. We present a supervised method to detect the bifurcations and crossing points of the vasculature of the retina. The method uses features extracted from the vasculature as well as the image in a location regression approach to find those locations of the segmented vascular tree where a bifurcation or crossing occurs (hereafter, points of interest, POI). We evaluate the method on the publicly available DRIVE database, in which an ophthalmologist has marked the POI.
Wilke, Marko
2018-02-01
This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Zhou, Xu
2000-08-01
The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.
Liu, Chenxi; Zhang, Xinping; Wan, Jie
2015-08-01
Inappropriate use and overuse of antibiotics and injections are serious threats to the global population, particularly in developing countries. In recent decades, public reporting of health care performance (PRHCP) has been an instrument to improve the quality of care. However, existing evidence shows a mixed effect of PRHCP. This study evaluated the effect of PRHCP on physicians' prescribing practices in a sample of primary care institutions in China. Segmented regression analysis was used to produce convincing evidence for health policy and reform. The PRHCP intervention was implemented in Qian City, starting on 1 October 2013. Performance data on prescription statistics were disclosed to patients and health workers monthly in 10 primary care institutions. A total of 326 655 valid outpatient prescriptions were collected. Monthly effective prescriptions (1st to 31st of each month) were used as the analytical units. This study involved multiple assessments of outcomes 13 months before and 11 months after PRHCP intervention (a total of 24 data points). Segmented regression models showed downward trends from baseline in antibiotics (coefficient = -0.64, P = 0.004), combined use of antibiotics (coefficient = -0.41, P < 0.001) and injections (coefficient = -0.5957, P = 0.001) after PRHCP intervention. The average expenditure of patients slightly increased monthly before the intervention (coefficient = 0.8643, P < 0.001); PRHCP intervention also led to a temporary increase in average expenditure of patients (coefficient = 2.20, P = 0.307) but slowed down the ascending trend (coefficient = -0.45, P = 0.033). The prescription rate of antibiotics and injections after intervention (about 50%) remained high. PRHCP showed positive effects on physicians' prescribing behaviour, considering the downward trends in the use of antibiotics and injections and in average expenditure through the intervention.
However, the effect was not immediately observed; a lag time existed before public reporting intervention worked. © 2015 John Wiley & Sons, Ltd.
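The interrupted time series model used in the study above can be sketched as ordinary least squares on a design matrix with a baseline trend, a level-change indicator, and a slope-change term. The monthly values below are synthetic and only illustrative of the 13 pre- and 11 post-intervention structure; they are not the study's data.

```python
import numpy as np

# 24 monthly observations: 13 pre-intervention, 11 post-intervention
t = np.arange(24)                      # month index
post = (t >= 13).astype(float)         # 1 once the intervention is in effect
t_post = post * (t - 13)               # months elapsed since the intervention

# Synthetic outcome: baseline trend -0.1/month, level drop -2.0,
# additional slope change -0.6/month after the intervention, plus noise
rng = np.random.default_rng(0)
y = 60 - 0.1 * t - 2.0 * post - 0.6 * t_post + rng.normal(0, 0.5, 24)

# Segmented regression: intercept, pre-slope, level change, slope change
X = np.column_stack([np.ones(24), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```

The third and fourth coefficients estimate the immediate level change and the change in trend, which are the quantities the study's segmented regression coefficients report.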
Advances in segmentation modeling for health communication and social marketing campaigns.
Albrecht, T L; Bryant, C
1996-01-01
Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J; Nishikawa, R; Reiser, I
Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist's drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson's rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance.
Conclusion: There are certain images that yield better segmentation or classification performance. The best segmentation result does not necessarily lead to the best classification result. This work has been supported in part by NIH grant R21-EB015053. R. Nishikawa receives royalties from Hologic, Inc.
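The DICE coefficient used as the segmentation metric above has a simple closed form on binary masks; a minimal sketch, with toy rectangular masks standing in for the lesion segmentations (the related Jaccard index is included for comparison):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index between two binary masks: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy masks: a 6x6 square (algorithm) vs. a shifted 6x6 square (reference)
seg = np.zeros((10, 10), bool); seg[2:8, 2:8] = True
ref = np.zeros((10, 10), bool); ref[3:9, 3:9] = True
print(dice(seg, ref), jaccard(seg, ref))  # 25-pixel overlap of two 36-pixel masks
```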
Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente
2016-05-01
Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analysis based on biomechanical models, which is of paramount importance in fields such as sport activities or impact crash test. Early approaches for BSIP identification rely on the experiments conducted on cadavers or through imaging techniques conducted on living subjects. Recent approaches for BSIP identification rely on inverse dynamic modeling. However, most of the approaches are focused on the entire body, and verification of BSIP for dynamic analysis for distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained by using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamics identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results using parameters obtained from a regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed considering robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equation allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamics modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230) for dynamic modeling of body segments.
Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.
Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C
2017-07-01
To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6% respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. 
© 2017 American Association of Physicists in Medicine.
Shi, Xiaocai; Passe, Dennis H
2010-10-01
The purpose of this study is to summarize water, carbohydrate (CHO), and electrolyte absorption from carbohydrate-electrolyte (CHO-E) solutions based on all of the triple-lumen-perfusion studies in humans since the early 1960s. The current statistical analysis included 30 reports from which were obtained information on water absorption, CHO absorption, total solute absorption, CHO concentration, CHO type, osmolality, sodium concentration, and sodium absorption in the different gut segments during exercise and at rest. Mean differences were assessed using independent-samples t tests. Exploratory multiple-regression analyses were conducted to create prediction models for intestinal water absorption. The factors influencing water and solute absorption are carefully evaluated and extensively discussed. The authors suggest that in the human proximal small intestine, water absorption is related to both total solute and CHO absorption; osmolality exerts various impacts on water absorption in the different segments; the multiple types of CHO in the ingested CHO-E solutions play a critical role in stimulating CHO, sodium, total solute, and water absorption; CHO concentration is negatively related to water absorption; and exercise may result in greater water absorption than rest. A potential regression model for predicting water absorption is also proposed for future research and practical application. In conclusion, water absorption in the human small intestine is influenced by osmolality, solute absorption, and the anatomical structures of gut segments. Multiple types of CHO in a CHO-E solution facilitate water absorption by stimulating CHO and solute absorption and lowering osmolality in the intestinal lumen.
Identification of Alfalfa Leaf Diseases Using Image Recognition Technology
Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang
2016-01-01
Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767
Bronner, Shaw; Wood, Lily
2017-11-01
There is ongoing debate about how to define injury in dance: whether to use the most encompassing definition or a time-loss definition. We examined the relationship between touring, performance schedule and injury definition on injury rates in a professional modern dance company over one year. In-house healthcare management tracked 35 dancers for work-related musculoskeletal injuries (WMSI), time-loss injuries (TLinj), complaints, and exposure. The year was divided into 6 segments to allow comparison of the effects of performance, rehearsal, and touring. Injuries/segment were converted into injuries/1000-h dance exposure. We conducted negative binomial regression analysis to determine differences between segments, P ≤ 0.05. Twenty WMSI, 0.44 injuries/1000-h, were sustained over one year. WMSI were 6 times more likely to occur in Segment-6, compared with other segments (incident rate ratio = 6.055, P = 0.031). The highest rate of TLinj and traumatic injuries also occurred in Segment-6, reflecting concentrated rehearsal, the New York season and performances abroad. More overuse injuries occurred in Segment-2, an international tour, attributed to raked stages. Lack of methods to quantify performance other than injury may mask effects of touring on dancers' well-being. Tracking complaints permits understanding of stressors to specific body regions and healthcare utilisation; however, TLinj remain the most important injuries to track because they impact other dancers and organisational costs.
Determining degree of optic nerve edema from color fundus photography
NASA Astrophysics Data System (ADS)
Agne, Jason; Wang, Jui-Kai; Kardon, Randy H.; Garvin, Mona K.
2015-03-01
Swelling of the optic nerve head (ONH) is subjectively assessed by clinicians using the Frisén scale. It is believed that a direct measurement of the ONH volume would serve as a better representation of the swelling. However, a direct measurement requires optic nerve imaging with spectral domain optical coherence tomography (SD-OCT) and 3D segmentation of the resulting images, which is not always available during clinical evaluation. Furthermore, telemedical imaging of the eye at remote locations is more feasible with non-mydriatic fundus cameras, which are less costly than OCT imagers. Therefore, there is a critical need to develop a more quantitative analysis of optic nerve swelling on a continuous scale, similar to SD-OCT. Here, we select features from more commonly available 2D fundus images and use them to predict ONH volume. Twenty-six features were extracted from each of 48 color fundus images. The features include attributes of the blood vessels, optic nerve head, and peripapillary retina areas. These features were used in a regression analysis to predict ONH volume, as computed by a segmentation of the SD-OCT image. The results of the regression analysis yielded a mean square error of 2.43 mm3 and a correlation coefficient between computed and predicted volumes of R = 0.771, which suggests that ONH volume may be predicted from fundus features alone.
Site conditions related to erosion on logging roads
R. M. Rice; J. D. McCashion
1985-01-01
Synopsis - Data collected from 299 road segments in northwestern California were used to develop and test a procedure for estimating and managing road-related erosion. Site conditions and the design of each segment were described by 30 variables. Equations developed using 149 of the road segments were tested on the other 150. The best multiple regression equation...
Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting
NASA Astrophysics Data System (ADS)
Palenichka, Roman M.; Zaremba, Marek B.
2003-03-01
Accurate and automatic extraction of skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e. segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton. In the general case, it is a quite rough piecewise-linear representation of object skeletons. The positions of skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. This consists of changing the positions of existing vertices according to the minimum of the mean orthogonal distances and, if necessary, adding new vertices in between if a given accuracy is not yet satisfied. Vertices of initial piecewise-linear skeletons are extracted by using a multi-scale image relevance function. The relevance function is an image local operator that has local maxima at the centers of the objects of interest.
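Orthogonal regression, as used in the refinement step above, minimizes perpendicular rather than vertical distances to the fitted line. A minimal 2-D sketch using the principal direction of the centered point cloud (the sample points are illustrative, not from the paper):

```python
import numpy as np

# Points roughly along the line y = x, with small perpendicular scatter
pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 2.95], [4.0, 4.05]])

# Orthogonal (total least squares) line fit via SVD of the centered points:
# the first right singular vector is the line direction, the second its normal
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
direction = vt[0]                        # unit vector along the fitted line
normal = vt[1]                           # unit normal to the fitted line
residuals = (pts - centroid) @ normal    # signed orthogonal distances
print(direction, residuals)
```

This is the same minimization a skeleton-vertex adjustment would perform on the pixels assigned to one skeleton branch.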
Zheng, Qian-Yin; Xu, Wen; Liang, Guan-Lu; Wu, Jing; Shi, Jun-Ting
2016-01-01
To investigate the correlation between the preoperative biometric parameters of the anterior segment and the vault after implantable Collamer lens (ICL) implantation via this retrospective study. Retrospective clinical study. A total of 78 eyes from 41 patients who underwent ICL implantation surgery were included in this study. Preoperative biometric parameters, including white-to-white (WTW) diameter, central corneal thickness, keratometry, pupil diameter, anterior chamber depth, sulcus-to-sulcus diameter, anterior chamber area (ACA) and central curvature radius of the anterior surface of the lens (Lenscur), were measured. Lenscur and ACA were measured with Rhinoceros 5.0 software on the image scanned with ultrasound biomicroscopy (UBM). The vault was assessed by UBM 3 months after surgery. Multiple stepwise regression analysis was employed to identify the variables that were correlated with the vault. The results showed that the vault was correlated with 3 variables: ACA (22.4 ± 4.25 mm2), WTW (11.36 ± 0.29 mm) and Lenscur (9.15 ± 1.21 mm). The regression equation was: vault (mm) = 1.785 + 0.017 × ACA + 0.051 × Lenscur - 0.203 × WTW. Biometric parameters of the anterior segment (ACA, WTW and Lenscur) can predict the vault after ICL implantation using a new regression equation. © 2016 The Author(s) Published by S. Karger AG, Basel.
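Plugging the reported sample means back into the published equation gives a quick sanity check of the predicted vault. The equation and the mean values are taken directly from the abstract; the function name is ours.

```python
# Reported regression equation:
# vault (mm) = 1.785 + 0.017*ACA + 0.051*Lenscur - 0.203*WTW
def predicted_vault(aca_mm2, lenscur_mm, wtw_mm):
    return 1.785 + 0.017 * aca_mm2 + 0.051 * lenscur_mm - 0.203 * wtw_mm

# Evaluate at the reported sample means: ACA 22.4 mm2, Lenscur 9.15 mm, WTW 11.36 mm
v = predicted_vault(22.4, 9.15, 11.36)
print(round(v, 3))  # about 0.326 mm
```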
Power, Alyssa; Poonja, Sabrina; Disler, Dal; Myers, Kimberley; Patton, David J; Mah, Jean K; Fine, Nowell M; Greenway, Steven C
2017-01-01
Advances in medical care for patients with Duchenne muscular dystrophy (DMD) have resulted in improved survival and an increased prevalence of cardiomyopathy. Serial echocardiographic surveillance is recommended to detect early cardiac dysfunction and initiate medical therapy. Clinical anecdote suggests that echocardiographic quality diminishes over time, impeding accurate assessment of left ventricular systolic function. Furthermore, evidence-based guidelines for the use of cardiac imaging in DMD, including cardiac magnetic resonance imaging (CMR), are limited. The objective of our single-center, retrospective study was to quantify the deterioration in echocardiographic image quality with increasing patient age and identify an age at which CMR should be considered. We retrospectively reviewed and graded the image quality of serial echocardiograms obtained in young patients with DMD. The quality of 16 left ventricular segments in two echocardiographic views was visually graded using a binary scoring system. An endocardial border delineation percentage (EBDP) score was calculated by dividing the number of segments with adequate endocardial delineation in each imaging window by the total number of segments present in that window and multiplying by 100. Linear regression analysis was performed to model the relationship between the EBDP scores and patient age. Fifty-five echocardiograms from 13 patients (mean age 11.6 years, range 3.6-19.9) were systematically reviewed. By 13 years of age, 50% of the echocardiograms were classified as suboptimal with ≥30% of segments inadequately visualized, and by 15 years of age, 78% of studies were suboptimal. Linear regression analysis revealed a negative correlation between patient age and EBDP score ( r = -2.49, 95% confidence intervals -4.73, -0.25; p = 0.032), with the score decreasing by 2.5% for each 1 year increase in age. Echocardiographic image quality declines with increasing age in DMD. 
Alternate imaging modalities may play a role in cases of poor echocardiographic image quality.
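The reported age trend above (EBDP score falling about 2.5 percentage points per year) amounts to a simple linear fit of score against age. A sketch with synthetic ages and scores chosen to mirror that slope; the numbers are illustrative, not the study's data:

```python
import numpy as np

# Synthetic (age, EBDP score) pairs with a roughly -2.5 %/year trend
age = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
ebdp = np.array([95.0, 91.0, 85.0, 80.0, 74.0, 71.0, 65.0, 60.0])

# Ordinary least squares line: slope is the EBDP change per year of age
slope, intercept = np.polyfit(age, ebdp, 1)
print(slope)
```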
Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment
Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael
2015-01-01
Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology incorporates marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen University students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov's equations of de Leva, with adjustment made for posture cost. GPML toolbox implementation of the Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance on predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. 
When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui
2015-12-01
This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis in conventional light microscopy images. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by adaptive threshold segmentation based on an adaptive-scale Gaussian filter. The scale of the Gaussian filter is determined according to the color model of the bacillus objects. Then the candidate objects are extracted integrally after region merging and contamination elimination. Second, the shape features of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy. The logistic regression classifier performs best, with an accuracy of 91.68%.
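Compactness and eccentricity, two of the shape features listed above, have simple closed forms; a minimal sketch on toy shapes (not bacillus data), with eccentricity computed from the covariance of the object's pixel coordinates:

```python
import numpy as np

def compactness(area, perimeter):
    # 4*pi*A / P^2: exactly 1 for a perfect circle, smaller for elongated shapes
    return 4.0 * np.pi * area / perimeter ** 2

def eccentricity(pts):
    # Eigenvalues of the coordinate covariance give the squared axis lengths
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
    return float(np.sqrt(1.0 - evals[1] / evals[0]))

# A circle of radius 5: area pi*r^2, perimeter 2*pi*r -> compactness 1
print(compactness(np.pi * 5.0**2, 2.0 * np.pi * 5.0))

# An elongated point cloud has eccentricity close to 1
x = np.linspace(0.0, 10.0, 50)
pts = np.column_stack([x, 0.2 * np.sin(x)])
print(eccentricity(pts))
```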
Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure.
Cunningham, Ryan J; Harding, Peter J; Loram, Ian D
2017-02-01
Despite widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation, and shape registration to MRI-matched ultrasound images, via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose this approach is applicable generally to segment, extrapolate and visualise deep muscle structure, and analyse statistical features online.
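Segmentation accuracy above is reported as a Jaccard index; several other records in this collection use the Dice coefficient. Both reduce to simple set overlap on binary masks, as in this sketch:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A∩B| / |A∪B| between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

The two are monotonically related (D = 2J / (1 + J)), so a Jaccard of 0.86 corresponds to a Dice of about 0.92.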
A new model for the determination of limb segment mass in children.
Kuemmerle-Deschner, J B; Hansmann, S; Rapp, H; Dannecker, G E
2007-04-01
The knowledge of limb segment masses is critical for the calculation of joint torques. Several methods for segment mass estimation have been described in the literature. They are either inaccurate or not applicable to the limb segments of children. Therefore, we developed a new cylinder brick model (CBM) to estimate segment mass in children. The aim of this study was to compare CBM and a model based on a polynomial regression equation (PRE) to volume measurement obtained by the water displacement method (WDM). We examined forearms, hands, lower legs, and feet of 121 children using CBM, PRE, and WDM. The differences between CBM and WDM or PRE and WDM were calculated and compared using a Bland-Altman plot of differences. Absolute limb segment mass measured by WDM ranged from 0.16+/-0.04 kg for hands in girls 5-6 years old, up to 2.72+/-1.03 kg for legs in girls 11-12 years old. The differences of normalised segment masses ranged from 0.0002+/-0.0021 to 0.0011+/-0.0036 for CBM-WDM and from 0.0023+/-0.0041 to 0.0127+/-0.036 for PRE-WDM (values are mean+/-2 S.D.). The CBM showed better agreement with WDM than PRE for all limb segments in girls and boys. CBM is accurate and superior to PRE for the estimation of individual limb segment mass of children. Therefore, CBM is a practical and useful tool for the analysis of kinetic parameters and the calculation of resulting forces to assess joint functionality in children.
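The idea behind geometric mass models such as the CBM can be sketched by stacking measured cross-sections into slices. This is a simplified stand-in, not the paper's model: it assumes circular slices and a soft-tissue density of 1.1 g/cm³, both of which are assumptions for illustration only.

```python
import math

def segment_mass(circumferences_cm, slice_height_cm, density_g_cm3=1.1):
    """Estimate limb-segment mass from per-slice circumferences.

    Each measured circumference defines one circular-cylinder slice;
    summed slice volumes times an assumed density give the mass in kg.
    """
    mass_g = 0.0
    for c in circumferences_cm:
        r = c / (2 * math.pi)                       # radius from circumference
        mass_g += math.pi * r ** 2 * slice_height_cm * density_g_cm3
    return mass_g / 1000.0                          # grams -> kg
```

A finer slice spacing plays the same role as the subdivision count in the photogrammetric method described in the next record: more slices, better geometric fidelity.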
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.
Peyer, Kathrin E; Morris, Mark; Sellers, William I
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
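The convex-hulling step described above maps directly onto standard computational-geometry tooling. A minimal sketch using SciPy (illustrative; the study's own reconstruction pipeline is not reproduced here):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(points):
    """Volume of the convex hull of a 3D point cloud.

    Units follow the input (e.g. cm^3 for points in cm); multiplying by an
    assumed tissue density would give a segment mass estimate.
    """
    return ConvexHull(np.asarray(points, dtype=float)).volume
```

For example, the eight corners of a unit cube hull to a volume of 1.0; applying this per manually separated body segment reproduces the geometric-outline step of the method.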
LINKING LUNG AIRWAY STRUCTURE TO PULMONARY FUNCTION VIA COMPOSITE BRIDGE REGRESSION
Chen, Kun; Hoffman, Eric A.; Seetharaman, Indu; Jiao, Feiran; Lin, Ching-Long; Chan, Kung-Sik
2017-01-01
The human lung airway is a complex inverted tree-like structure. Detailed airway measurements can be extracted from MDCT-scanned lung images, such as segmental wall thickness, airway diameter, parent-child branch angles, etc. The wealth of lung airway data provides a unique opportunity for advancing our understanding of the fundamental structure-function relationships within the lung. An important problem is to construct and identify important lung airway features in normal subjects and connect these to standardized pulmonary function test results such as FEV1%. Among other things, the problem is complicated by the fact that a particular airway feature may be an important (relevant) predictor only when it pertains to segments of certain generations. Thus, the key is an efficient, consistent method for simultaneously conducting group selection (lung airway feature types) and within-group variable selection (airway generations), i.e., bi-level selection. Here we streamline a comprehensive procedure to process the lung airway data via imputation, normalization, transformation and groupwise principal component analysis, and then adopt a new composite penalized regression approach for conducting bi-level feature selection. As a prototype of composite penalization, the proposed composite bridge regression method is shown to admit an efficient algorithm, enjoy bi-level oracle properties, and outperform several existing methods. We analyze the MDCT lung image data from a cohort of 132 subjects with normal lung function. Our results show that lung function in terms of FEV1% is promoted by having a less dense and more homogeneous lung comprising an airway whose segments show more heterogeneity in wall thicknesses, larger mean diameters, lumen areas and branch angles. These data hold the potential of defining more accurately the “normal” subject population with borderline atypical lung functions that are clearly influenced by many genetic and environmental factors.
Luukkonen, Carol L.; Holtschlag, David J.; Reeves, Howard W.; Hoard, Christopher J.; Fuller, Lori M.
2015-01-01
Monthly water yields from 105,829 catchments and corresponding flows in 107,691 stream segments were estimated for water years 1951–2012 in the Great Lakes Basin in the United States. Both sets of estimates were computed by using the Analysis of Flows In Networks of CHannels (AFINCH) application within the NHDPlus geospatial data framework. AFINCH provides an environment to develop constrained regression models to integrate monthly streamflow and water-use data with monthly climatic data and fixed basin characteristics data available within NHDPlus or supplied by the user. For this study, the U.S. Great Lakes Basin was partitioned into seven study areas by grouping selected hydrologic subregions and adjoining cataloguing units. This report documents the regression models and data used to estimate monthly water yields and flows in each study area. Estimates of monthly water yields and flows are presented in a Web-based mapper application. Monthly flow time series for individual stream segments can be retrieved from the Web application and used to approximate monthly flow-duration characteristics and to identify possible trends.
Dreams Fulfilled and Shattered: Determinants of Segmented Assimilation in the Second Generation*
Haller, William; Portes, Alejandro; Lynch, Scott M.
2013-01-01
We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early adulthood on predictors measured three and six years earlier. The Children of Immigrants Longitudinal Study (CILS), used for the analysis, allows us to establish a clear temporal order among exogenous predictors and the two dependent variables. We also construct a Downward Assimilation Index (DAI), based on six indicators and regress it on the same set of predictors. Results confirm a pattern of segmented assimilation in the second generation, with a significant proportion of the sample experiencing downward assimilation. Predictors of the latter are the obverse of those of educational and occupational achievement. Significant interaction effects emerge between these predictors and early school contexts, defined by different class and racial compositions. Implications of these results for theory and policy are examined.
Application guide for AFINCH (Analysis of Flows in Networks of Channels) described by NHDPlus
Holtschlag, David J.
2009-01-01
AFINCH (Analysis of Flows in Networks of CHannels) is a computer application that can be used to generate a time series of monthly flows at stream segments (flowlines) and water yields for catchments defined in the National Hydrography Dataset Plus (NHDPlus) value-added attribute system. AFINCH provides a basis for integrating monthly flow data from streamgages, water-use data, monthly climatic data, and land-cover characteristics to estimate natural monthly water yields from catchments by user-defined regression equations. Images of monthly water yields for active streamgages are generated in AFINCH and provide a basis for detecting anomalies in water yields, which may be associated with undocumented flow diversions or augmentations. Water yields are multiplied by the drainage areas of the corresponding catchments to estimate monthly flows. Flows from catchments are accumulated downstream through the streamflow network described by the stream segments. For stream segments where streamgages are active, ratios of measured to accumulated flows are computed. These ratios are applied to upstream water yields to proportionally adjust estimated flows to match measured flows. Flow is conserved through the NHDPlus network. A time series of monthly flows can be generated for stream segments that average about 1-mile long, or monthly water yields from catchments that average about 1 square mile. Estimated monthly flows can be displayed within AFINCH, examined for nonstationarity, and tested for monotonic trends. Monthly flows also can be used to estimate flow-duration characteristics at stream segments. AFINCH generates output files of monthly flows and water yields that are compatible with ArcMap, a geographical information system analysis and display environment. Chloropleth maps of monthly water yield and flow can be generated and analyzed within ArcMap by joining NHDPlus data structures with AFINCH output. Matlab code for the AFINCH application is presented.
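The downstream flow-accumulation step described above can be sketched in a few lines. This is a toy illustration of the routing idea only, with a hypothetical three-segment network; AFINCH's gage-ratio adjustment would then rescale the upstream contributions so accumulated flow matches the measured flow at each active streamgage.

```python
def _depth(s, downstream):
    """Number of segments between s and the network outlet."""
    n = 0
    while downstream[s] is not None:
        s = downstream[s]
        n += 1
    return n

def accumulate_flows(yields, area, downstream):
    """Accumulate catchment flows down a stream network.

    yields: water yield per catchment; area: catchment drainage areas;
    downstream: maps each segment to the next segment (None at the outlet).
    Returns the accumulated flow at each segment.
    """
    flow = {s: yields[s] * area[s] for s in yields}       # local flow
    # visit segments from headwaters to outlet (deepest first)
    for s in sorted(yields, key=lambda s: -_depth(s, downstream)):
        d = downstream[s]
        if d is not None:
            flow[d] += flow[s]
    return flow
```

With two headwater segments 'A' and 'B' draining into 'C', the flow at 'C' is its own local flow plus both upstream contributions, so flow is conserved through the network as the abstract states.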
Computer-aided pulmonary image analysis in small animal models
Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.
Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of airway tree for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.
Ramasubramanian, Viswanathan; Glasser, Adrian
2015-01-01
PURPOSE To determine whether relatively low-resolution ultrasound biomicroscopy (UBM) can predict the accommodative optical response in prepresbyopic eyes as well as in a previous study of young phakic subjects, despite lower accommodative amplitudes. SETTING College of Optometry, University of Houston, Houston, USA. DESIGN Observational cross-sectional study. METHODS Static accommodative optical response was measured with infrared photorefraction and an autorefractor (WR-5100K) in subjects aged 36 to 46 years. A 35 MHz UBM device (Vumax, Sonomed Escalon) was used to image the left eye, while the right eye viewed accommodative stimuli. Custom-developed Matlab image-analysis software was used to perform automated analysis of UBM images to measure the ocular biometry parameters. The accommodative optical response was predicted from biometry parameters using linear regression, 95% confidence intervals (CIs), and 95% prediction intervals. RESULTS The study evaluated 25 subjects. Per-diopter (D) accommodative changes in anterior chamber depth (ACD), lens thickness, anterior and posterior lens radii of curvature, and anterior segment length were similar to previous values from young subjects. The standard deviations (SDs) of accommodative optical response predicted from linear regressions for UBM-measured biometry parameters were ACD, 0.15 D; lens thickness, 0.25 D; anterior lens radii of curvature, 0.09 D; posterior lens radii of curvature, 0.37 D; and anterior segment length, 0.42 D. CONCLUSIONS Ultrasound biomicroscopy parameters can, on average, predict accommodative optical response with SDs of less than 0.55 D using linear regressions and 95% CIs. Ultrasound biomicroscopy can be used to visualize and quantify accommodative biometric changes and predict accommodative optical response in prepresbyopic eyes.
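The prediction-interval machinery used above for simple linear regression is standard and compact. A sketch (illustrative, not the study's Matlab code): for a new observation at x0, the interval widens with the residual SD and with distance from the mean of x.

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x0, level=0.95):
    """Prediction interval for a new response at x0 from simple
    linear regression of y on x."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                      # slope, intercept
    s = np.sqrt(((y - (b0 + b1 * x)) ** 2).sum() / (n - 2))   # residual SD
    se = s * np.sqrt(1 + 1 / n
                     + (x0 - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum())
    t = stats.t.ppf((1 + level) / 2, n - 2)
    y0 = b0 + b1 * x0
    return y0 - t * se, y0 + t * se
```

The per-parameter SDs of the predicted response quoted above (0.09 to 0.42 D) summarise how tight these intervals are for each biometry parameter.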
Safety analysis of urban arterials at the meso level.
Li, Jia; Wang, Xuesong
2017-11-01
Urban arterials form the main structure of street networks. They typically have multiple lanes, high traffic volume, and high crash frequency. Classical crash prediction models investigate the relationship between arterial characteristics and traffic safety by treating road segments and intersections as isolated units. This micro-level analysis does not work when examining urban arterial crashes because signal spacing is typically short for urban arterials, and there are interactions between intersections and road segments that classical models do not accommodate. Signal spacing also has safety effects on both intersections and road segments that classical models cannot fully account for because they allocate crashes separately to intersections and road segments. In addition, classical models do not consider the impact on arterial safety of the immediately surrounding street network pattern. This study proposes a new modeling methodology that offers an integrated treatment of intersections and road segments by combining signalized intersections and their adjacent road segments into a single unit based on road geometric design characteristics and operational conditions. These are called meso-level units because they offer an analytical approach between micro and macro. The safety effects of signal spacing and street network pattern were estimated for this study based on 118 meso-level units obtained from 21 urban arterials in Shanghai, and were examined using conditional autoregressive (CAR) models that corrected for spatial correlation among the units within individual arterials. Results showed shorter arterial signal spacing was associated with higher total and PDO (property damage only) crashes, while arterials with a greater number of parallel roads were associated with lower total, PDO, and injury crashes. The findings from this study can be used in the traffic safety planning, design, and management of urban arterials.
A preliminary investigation of the relationships between historical crash and naturalistic driving.
Pande, Anurag; Chand, Sai; Saxena, Neeraj; Dixit, Vinayak; Loy, James; Wolshon, Brian; Kent, Joshua D
2017-04-01
This paper describes a project that was undertaken using naturalistic driving data collected via Global Positioning System (GPS) devices to demonstrate a proof-of-concept for proactive safety assessments of crash-prone locations. The main hypothesis for the study is that the segments where drivers have to apply hard braking (higher jerks) more frequently might be the "unsafe" segments with more crashes over the long term. The linear referencing methodology in ArcMap was used to link the GPS data with roadway characteristic data of US Highway 101 northbound (NB) and southbound (SB) in San Luis Obispo, California. The process used to merge GPS data with quarter-mile freeway segments for traditional crash frequency analysis is also discussed in the paper. Negative binomial regression analyses showed that the proportion of high magnitude jerks while decelerating on freeway segments (from the driving data) was significantly related with the long-term crash frequency of those segments. A random parameter negative binomial model with a uniformly distributed parameter for ADT and a fixed parameter for jerk provided a statistically significant estimate for quarter-mile segments. The results also indicated that roadway curvature and the presence of an auxiliary lane are not significantly related with crash frequency for the highway segments under consideration. The results from this exploration are promising since the data used to derive the explanatory variable(s) can be collected using most off-the-shelf GPS devices, including many smartphones.
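Count-regression models of the kind used above fit a log-linear mean by iteratively reweighted least squares (IRLS). The sketch below is a simplified Poisson stand-in for the paper's negative binomial model (the NB adds an overdispersion parameter on top of this); the covariates and data are hypothetical.

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Fit log-linear count model E[y] = exp(X @ beta) by IRLS (Newton).

    Could be used to regress segment crash counts on covariates such as
    ADT and the proportion of high-magnitude jerks.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                                   # Poisson working weights
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

In practice one would use a library implementation (e.g. a negative binomial GLM) that also reports standard errors and dispersion; the sketch only shows the estimating core.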
Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model
2015-06-01
The analysis examined the relationship between the calculated SAMA potential and the actual 2014 performance. The scatterplot in Figure 8 shows a strong linear relationship between the SAMA calculated potential and the contracting achievement for 2014, with an R-squared value of 0.871 from a simple linear regression.
Qianxiang, Zhou
2012-07-01
It is very important to clarify the geometric characteristics of human body segments and to constitute an analysis model for ergonomic design and the application of ergonomic virtual humans. The typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlation between different parameters, curve fits were made between seven trunk parameters and ten body parameters with the SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and the two parameters have high correlation with the other parameters of the human body. By comparison with the conventional regression curves, the present regression equation with the seven trunk parameters is more accurate in forecasting the geometric dimensions of the head, neck, height and the four limbs. Therefore, it is greatly valuable for ergonomic design and analysis of man-machine systems. This result will be very useful for astronaut body model analysis and application.
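The fitting step above reduces to regressing each body dimension on the dominant trunk predictors. A minimal least-squares sketch, assuming a linear form with hip circumference and shoulder breadth as predictors (the study's fitted curves may be nonlinear, and its actual coefficients are not reproduced here):

```python
import numpy as np

def fit_body_param(hip, shoulder, target):
    """Least-squares fit: target dimension ~ hip circumference + shoulder
    breadth. Returns [intercept, b_hip, b_shoulder]."""
    X = np.column_stack([np.ones_like(hip), hip, shoulder])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef
```

Given fitted coefficients, predicting a new subject's dimension is a single dot product, which is what makes such models convenient for virtual-human generation.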
Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z.; Chen, Ronald C.
2016-01-01
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a nonlocal external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation.
Bike and run pacing on downhill segments predict Ironman triathlon relative success.
Johnson, Evan C; Pryor, J Luke; Casa, Douglas J; Belval, Luke N; Vance, James S; DeMartini, Julie K; Maresh, Carl M; Armstrong, Lawrence E
2015-01-01
Determine if performance and physiological based pacing characteristics over the varied terrain of a triathlon predicted relative bike, run, and/or overall success. Poor self-regulation of intensity during long distance (Full Iron) triathlon can manifest in adverse discontinuities in performance. Observational study of a random sample of Ironman World Championship athletes. High performing (HP) and low performing (LP) groups were established upon race completion. Participants wore global positioning system and heart rate enabled watches during the race. Percentage difference from pre-race disclosed goal pace (%off) and mean HR were calculated for nine segments of the bike and 11 segments of the run. Normalized graded running pace (NGP, accounting for changes in elevation) was computed via analysis software. Step-wise regression analyses identified segments predictive of relative success, and HP and LP were compared at these segments to confirm importance. %Off of goal velocity during two downhill segments of the bike (HP: -6.8±3.2%, -14.2±2.6% versus LP: -1.2±4.2%, -5.1±11.5%; p<0.020) and %off from NGP during one downhill segment of the run (HP: 4.8±5.2% versus LP: 33.3±38.7%; p=0.033) significantly predicted relative performance. Also, HP displayed more consistency in mean HR (141±12 to 138±11 bpm) compared to LP (139±17 to 131±16 bpm; p=0.019) over the climb and descent from the turn-around point during the bike component. Athletes who maintained faster relative speeds on downhill segments, and who had smaller changes in HR between consecutive up and downhill segments, were more successful relative to their goal times.
Rajab, Maher I
2011-11-01
Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone.
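The low-pass-then-cluster pipeline above can be sketched end to end in NumPy. This is a bare-bones illustration, not the paper's algorithm: it uses a hard frequency cutoff (the cutoff value is an assumption) and plain 2-means on intensity, omitting the fuzzy variant.

```python
import numpy as np

def lowpass_then_kmeans(img, cutoff=0.2, iters=20):
    """Fourier low-pass filter an image, then split pixels into two
    intensity clusters (lesion vs background) with 2-means.
    Returns a label image; 1 marks the brighter cluster."""
    # low-pass: zero out frequencies above `cutoff` (cycles per pixel)
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    F[np.hypot(fy, fx) > cutoff] = 0
    smooth = np.fft.ifft2(F).real
    # 2-means on smoothed intensities, seeded at the extremes
    c = np.array([smooth.min(), smooth.max()])
    for _ in range(iters):
        labels = np.abs(smooth[..., None] - c).argmin(-1)
        c = np.array([smooth[labels == k].mean() for k in (0, 1)])
    return labels
```

The smoothing step is what suppresses hair and noise before clustering, which is the combination the study found superior to clustering alone.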
Identifying the optimal segmentors for mass classification in mammograms
Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.
2015-03-01
In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we used various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. Then after shape features are computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from an ensemble mix of weak segmentors. For our purpose, optimal segmentors are those in the ensemble mix which contribute the most to the overall classification rather than the ones that produced high precision segmentation. To measure the segmentors' contribution, we examined weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The result showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
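The segmentor-contribution criterion above amounts to averaging logistic-regression feature weights within each segmentor's feature group. A minimal sketch with hypothetical weights (absolute values are used here, an assumption about how "weight" is aggregated):

```python
import numpy as np

def segmentor_importance(weights, groups):
    """Average absolute classification weight per segmentor.

    weights: one logistic-regression coefficient per shape feature;
    groups: index of the ensemble segmentor that produced each feature.
    Returns {segmentor: mean |weight|}.
    """
    weights, groups = np.asarray(weights), np.asarray(groups)
    return {g: np.abs(weights[groups == g]).mean()
            for g in np.unique(groups)}
```

Ranking segmentors by this score identifies those contributing most to classification, which, as the abstract notes, is not always the same as those with the best segmentation precision.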
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
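Random regression on Legendre polynomials, as used above, models each animal's growth curve with polynomial coefficients evaluated at standardized ages. A minimal sketch of building such a basis (the ages and polynomial order below are illustrative, not the study's exact configuration):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(ages, order):
    """Evaluate Legendre polynomials P_0..P_order at ages standardized
    to [-1, 1], the covariate matrix used when modeling growth curves
    with random regression on Legendre polynomials."""
    a_min, a_max = ages.min(), ages.max()
    x = -1.0 + 2.0 * (ages - a_min) / (a_max - a_min)  # standardize age
    # legvander returns a matrix whose column j holds P_j(x)
    return legendre.legvander(x, order)

# Illustrative recording ages in days (birth to ~6 years)
ages = np.array([0, 240, 365, 550, 730, 1095, 1460, 1825, 2190], float)
Phi = legendre_basis(ages, 4)  # fourth-order basis, as for direct genetic effects
```

Each animal's random effect at a given age is then the dot product of its coefficient vector with the corresponding row of this basis matrix.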
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
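The Dice coefficient and Jaccard index reported above are standard overlap metrics between a predicted and a reference binary mask; a minimal sketch:

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index between two binary masks,
    the overlap metrics used to benchmark automated segmentation
    against manual tracing."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard
```

Dice weights the intersection twice, so for the same pair of masks it is always at least as large as Jaccard, consistent with the 0.985 versus 0.970 figures above.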
Liu, Ting; Maurovich-Horvat, Pál; Mayrhofer, Thomas; Puchner, Stefan B; Lu, Michael T; Ghemigian, Khristine; Kitslaar, Pieter H; Broersen, Alexander; Pursnani, Amit; Hoffmann, Udo; Ferencik, Maros
2018-02-01
Semi-automated software can provide quantitative assessment of atherosclerotic plaques on coronary CT angiography (CTA). The relationship between established qualitative high-risk plaque features and quantitative plaque measurements has not been studied. We analyzed the association between quantitative plaque measurements and qualitative high-risk plaque features on coronary CTA. We included 260 patients with plaque who underwent coronary CTA in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) II trial. Quantitative plaque assessment and qualitative plaque characterization were performed on a per coronary segment basis. Quantitative coronary plaque measurements included plaque volume, plaque burden, remodeling index, and diameter stenosis. In qualitative analysis, high-risk plaque was present if positive remodeling, low CT attenuation plaque, napkin-ring sign or spotty calcium were detected. Univariable and multivariable logistic regression analyses were performed to assess the association between quantitative and qualitative high-risk plaque assessment. Among 888 segments with coronary plaque, high-risk plaque was present in 391 (44.0%) segments by qualitative analysis. In quantitative analysis, segments with high-risk plaque had higher total plaque volume, low CT attenuation plaque volume, plaque burden and remodeling index. Quantitatively assessed low CT attenuation plaque volume (odds ratio 1.12 per 1 mm³, 95% CI 1.04-1.21), positive remodeling (odds ratio 1.25 per 0.1, 95% CI 1.10-1.41) and plaque burden (odds ratio 1.53 per 0.1, 95% CI 1.08-2.16) were associated with high-risk plaque. Quantitative coronary plaque characteristics (low CT attenuation plaque volume, positive remodeling and plaque burden) measured by semi-automated software correlated with qualitative assessment of high-risk plaque features.
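Odds ratios like those above come from exponentiating logistic-regression coefficients; when a ratio is reported "per 0.1" of a predictor, the coefficient is scaled by that increment before exponentiation. A sketch of the conversion (the coefficient and standard error below are hypothetical, not values from the study):

```python
import numpy as np

def odds_ratio(beta, se, increment=1.0, z=1.96):
    """Convert a logistic-regression coefficient (log-odds per unit of
    the predictor) into an odds ratio per chosen increment, with an
    approximate 95% confidence interval."""
    point = np.exp(beta * increment)
    lower = np.exp((beta - z * se) * increment)
    upper = np.exp((beta + z * se) * increment)
    return point, lower, upper

# Hypothetical coefficient for a predictor reported per 0.1 increment
or_, lo, hi = odds_ratio(beta=2.2, se=0.6, increment=0.1)
```

A coefficient of zero always maps to an odds ratio of 1, i.e. no association.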
Toy, Brian C; Krishnadev, Nupura; Indaram, Maanasa; Cunningham, Denise; Cukras, Catherine A; Chew, Emily Y; Wong, Wai T
2013-09-01
To investigate the association of spontaneous drusen regression in intermediate age-related macular degeneration (AMD) with changes on fundus photography and fundus autofluorescence (FAF) imaging. Prospective observational case series. Fundus images from 58 eyes (in 58 patients) with intermediate AMD and large drusen were assessed over 2 years for areas of drusen regression that exceeded the area of circle C1 (diameter 125 μm; Age-Related Eye Disease Study grading protocol). Manual segmentation and computer-based image analysis were used to detect and delineate areas of drusen regression. Delineated regions were graded as to their appearance on fundus photographs and FAF images, and changes in FAF signal were graded manually and quantitated using automated image analysis. Drusen regression was detected in approximately half of study eyes using manual (48%) and computer-assisted (50%) techniques. At year 2, the clinical appearance of areas of drusen regression on fundus photography was mostly unremarkable, with a majority of eyes (71%) demonstrating no detectable clinical abnormalities and the remainder (29%) showing minor pigmentary changes. However, drusen regression areas were associated with local changes in FAF that were significantly more prominent than changes on fundus photography. A majority of eyes (64%-66%) demonstrated a predominant decrease in overall FAF signal, while 14%-21% of eyes demonstrated a predominant increase in overall FAF signal. FAF imaging demonstrated that drusen regression in intermediate AMD was often accompanied by changes in local autofluorescence signal. Drusen regression may be associated with concurrent structural and physiologic changes in the outer retina.
MicroCT angiography detects vascular formation and regression in skin wound healing
Urao, Norifumi; Okonkwo, Uzoagu A.; Fang, Milie M.; Zhuang, Zhen W.; Koh, Timothy J.; DiPietro, Luisa A.
2016-01-01
Properly regulated angiogenesis and arteriogenesis are essential for effective wound healing. Tissue injury induces robust new vessel formation and subsequent vessel maturation, which involves vessel regression and remodeling. Although formation of functional vasculature is essential for healing, alterations in vascular structure over the time course of skin wound healing are not well understood. Here, using high-resolution ex vivo X-ray micro-computed tomography (microCT), we describe the vascular network during healing of skin excisional wounds with highly detailed three-dimensional (3D) reconstructed images and associated quantitative analysis. We found that relative vessel volume, surface area and branching number are significantly decreased in wounds from day 7 to day 14 and 21. Segmentation and skeletonization analysis of selected branches from high-resolution images as small as 2.5 μm voxel size show that branching orders are decreased in the wound vessels during healing. In histological analysis, we found that the contrast agent fills mainly arterioles, but not small capillaries nor large veins. In summary, high-resolution microCT revealed dynamic alterations of vessel structures during wound healing. This technique may be useful as a key tool in the study of the formation and regression of wound vessels. PMID:27009591
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi
2000-06-01
Aeromechanical stability plays a critical role in helicopter design and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in air resonance analysis under hover condition. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface bonded SCLs for improved damping characteristics. Parameters such as stacking sequence of the composite laminates and placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that optimum blade design yields significant increase in rotor lead-lag regressive modal damping compared to the initial system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Xiubin; Gao, Yaozong; Shen, Dinggang, E-mail: dgshen@med.unc.edu
2015-05-15
Purpose: In image guided radiation therapy, it is crucial to localize the prostate quickly and accurately in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which can fully exploit valuable patient-specific information contained in the previous treatment images and can achieve improved performance in landmark detection and prostate segmentation. Methods: To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, in this method, a two-layer regression forest is trained as a detector for each target landmark. Once all the newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are further included into the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more and more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multiatlas random sample consensus (multiatlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (or adjusted if needed) by clinicians before including it as a new shape example into the prostate shape dataset for helping localize the entire prostate in the next treatment image. Results: The experimental results on 330 images of 24 patients show the effectiveness of the authors' proposed online update scheme in improving the accuracies of both landmark detection and prostate segmentation. In addition, compared to other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance. Conclusions: By appropriate use of valuable patient-specific information contained in the previous treatment images, the authors' proposed online update scheme can obtain satisfactory results for both landmark detection and prostate segmentation.
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free software.
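The horizontal translation at the core of the approach can be sketched as interpolating where the new segment's vertex value falls along the preceding segment's measurement points, then shifting the segment to that time. This is a simplified sketch of the overlap idea, not the published MRCTools implementation:

```python
import numpy as np

def translate_segment(prev_t, prev_y, seg_t, seg_y):
    """Horizontally translate a recession segment so its vertex (first,
    highest value) lands on the line through the preceding segment's
    measurement points. Assumes both segments decrease monotonically
    and the vertex value lies within the preceding segment's range."""
    vertex = seg_y[0]
    # prev_y decreases with time; reverse so np.interp sees ascending x
    t_on_prev = np.interp(vertex, prev_y[::-1], prev_t[::-1])
    shift = t_on_prev - seg_t[0]
    return seg_t + shift

# Synthetic recession segments (time, level)
prev_t = np.array([0.0, 1.0, 2.0, 3.0])
prev_y = np.array([10.0, 8.0, 6.0, 4.0])
seg_t = np.array([0.0, 1.0, 2.0])
seg_y = np.array([7.0, 5.0, 3.0])
new_t = translate_segment(prev_t, prev_y, seg_t, seg_y)
```

Repeating this for each succeeding segment stacks the overlapped segments into a single master recession curve.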
Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M
2017-05-01
To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R² of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
Automated segmentation of serous pigment epithelium detachment in SD-OCT images
NASA Astrophysics Data System (ADS)
Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian
2015-03-01
Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: First, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) comparing the segmented PED volumes with the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
Assessment of LVEF using a new 16-segment wall motion score in echocardiography.
Lebeau, Real; Serri, Karim; Lorenzo, Maria Di; Sauvé, Claude; Le, Van Hoai Viet; Soulières, Vicky; El-Rayes, Malak; Pagé, Maude; Zaïani, Chimène; Garot, Jérôme; Poulin, Frédéric
2018-06-01
The Simpson biplane and 3D methods on transthoracic echocardiography (TTE), radionuclide angiography (RNA) and cardiac magnetic resonance imaging (CMR) are the most accepted techniques for left ventricular ejection fraction (LVEF) assessment. Wall motion score index (WMSI) by TTE is an accepted complement. However, the conversion from WMSI to LVEF is obtained through a regression equation, which may limit its use. In this retrospective study, we aimed to validate a new method to derive LVEF from the wall motion score in 95 patients. The new score consisted of attributing a segmental EF to each LV segment based on the wall motion score and averaging all 16 segmental EFs into a global LVEF. This segmental EF score was calculated on TTE in 95 patients, and RNA was used as the reference LVEF method. LVEF using the new segmental EF 15-40-65 score on TTE was compared to the reference methods using linear regression and Bland-Altman analyses. The median LVEF was 45% (interquartile range 32-53%; range from 15 to 65%). Our new segmental EF 15-40-65 score derived on TTE correlated strongly with RNA-LVEF (r = 0.97). Overall, the new score resulted in good agreement of LVEF compared to RNA (mean bias 0.61%). The standard deviation of the inter-method differences between the new score and RNA was 6.2%, indicating good precision. LVEF assessment using segmental EF derived from the wall motion score applied to each of the 16 LV segments has excellent correlation and agreement with a reference method.
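The "15-40-65" score can be read as assigning a fixed segmental EF per wall-motion grade and averaging over the 16 segments. A sketch under that reading; the grade-to-EF mapping below (1 = normal → 65%, 2 = hypokinetic → 40%, 3 = akinetic/dyskinetic → 15%) is our assumption about the score's construction, not the paper's published definition:

```python
# Assumed grade-to-EF mapping for the "15-40-65" score (hypothetical):
# 1 = normal -> 65%, 2 = hypokinetic -> 40%, 3 = akinetic/dyskinetic -> 15%
SEGMENT_EF = {1: 65.0, 2: 40.0, 3: 15.0}

def lvef_from_wall_motion(scores):
    """Average the segmental EF assigned to each of the 16 LV segments'
    wall-motion grades into a single global LVEF estimate (percent)."""
    if len(scores) != 16:
        raise ValueError("expected 16 segmental wall-motion scores")
    return sum(SEGMENT_EF[s] for s in scores) / 16.0
```

Under this mapping an all-normal ventricle scores 65% and the estimate degrades smoothly as segments become hypokinetic or akinetic, matching the 15-65% LVEF range reported above.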
An Example-Based Brain MRI Simulation Framework.
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L
2015-02-21
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words; many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and there may be multiple instances of sub-word overlap. To overcome these problems, we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlaps between words and sub-words. We also investigate two approaches to pre-processing tasks: estimating sub-word baselines, and determining parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals through extensive experiments on publicly available and in-house databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that can benefit from analysis techniques developed for printed text.
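The baseline-estimation idea (fitting a line through a sub-word's foreground pixel coordinates by least squares and correcting the slope) can be sketched as follows; this illustrates the linear-regression step only, not the paper's full incremental-rotation procedure:

```python
import numpy as np

def baseline_slope_angle(xs, ys):
    """Least-squares fit of y = a*x + b through the foreground pixel
    coordinates of a sub-word; returns the slope angle in degrees,
    i.e. the rotation needed to make the estimated baseline horizontal."""
    a, b = np.polyfit(xs, ys, 1)
    return np.degrees(np.arctan(a))

# Synthetic pixels lying on a line of slope 0.5
xs = np.arange(10.0)
angle = baseline_slope_angle(xs, 0.5 * xs + 3.0)
```

Rotating the sub-word image by the negative of this angle realigns its baseline before further segmentation.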
Stephen, Renu M.; Jha, Abhinav K.; Roe, Denise J.; Trouard, Theodore P.; Galons, Jean-Philippe; Kupinski, Matthew A.; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D.; Rodriguez, Jeffrey J.; Gillies, Robert J.; Stopeck, Alison T.
2015-01-01
Purpose To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Methods Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm² at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. Results A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2–5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm²/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2–5 cm liver lesions. Conclusion Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. PMID:26284600
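Sensitivity and specificity at a fixed ADC cut-off, as in the ROC analysis above, reduce to counting true and false positives once a direction is chosen for the rule; here we assume, purely for illustration, that a baseline ADC at or above the threshold flags a non-responding metastasis:

```python
import numpy as np

def sens_spec_at_threshold(adc, non_responder, thr):
    """Sensitivity and specificity of the rule 'ADC >= thr flags a
    non-responder'. The direction of the rule is an assumption for
    illustration, not taken from the study."""
    adc = np.asarray(adc, float)
    non_responder = np.asarray(non_responder, bool)
    flagged = adc >= thr
    sens = np.logical_and(flagged, non_responder).sum() / non_responder.sum()
    spec = np.logical_and(~flagged, ~non_responder).sum() / (~non_responder).sum()
    return sens, spec
```

Sweeping `thr` over the observed ADC values and plotting sensitivity against 1 - specificity traces the ROC curve from which such a threshold is chosen.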
Ying, Jinwei; Teng, Honglin; Qian, Yunfan; Hu, Yingying; Wen, Tianyong; Ruan, Dike; Zhu, Minyu
2018-01-01
Background Ossification of the nuchal ligament (ONL) caused by chronic injury to the nuchal ligament (NL) is very common in instability-related cervical disorders. Purpose To determine possible correlations between ONL, sagittal alignment, and segmental stability of the cervical spine. Material and Methods Seventy-three patients with cervical spondylotic myelopathy (CSM) and ONL (ONL group) and 118 patients with CSM only (control group) were recruited. Radiographic data included the characteristics of ONL, sagittal alignment and segmental stability, and ossification of the posterior longitudinal ligament (OPLL). We performed comparisons in terms of radiographic parameters between the ONL and control groups. The correlations between ONL size, cervical sagittal alignment, and segmental stability were analyzed. Multivariate logistic regression was used to identify the independent risk factors of the development of ONL. Results C2-C7 sagittal vertical axis (SVA), T1 slope (T1S), T1S minus cervical lordosis (T1S-CL) on the lateral plain, angular displacement (AD), and horizontal displacement (HD) on the dynamic radiograph increased significantly in the ONL group compared with the control group. The size of ONL significantly correlated with C2-C7 SVA, T1S, AD, and HD. The incidence of ONL was higher in patients with OPLL and segmental instability. Cervical instability, sagittal malalignment, and OPLL were independent predictors of the development of ONL through multivariate analysis. Conclusion Patients with ONL are more likely to have abnormal sagittal alignment and instability of the cervical spine. Thus, increased awareness and appreciation of this often-overlooked radiographic finding is warranted during diagnosis and treatment of instability-related cervical pathologies and injuries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zellars, Richard, E-mail: zellari@jhmi.edu; Bravo, Paco E.; Tryggestad, Erik
2014-03-15
Purpose: Cardiac muscle perfusion, as determined by single-photon emission computed tomography (SPECT), decreases after breast and/or chest wall (BCW) irradiation. The active breathing coordinator (ABC) enables radiation delivery when the BCW is farther from the heart, thereby decreasing cardiac exposure. We hypothesized that ABC would prevent radiation-induced cardiac toxicity and conducted a randomized controlled trial evaluating myocardial perfusion changes after radiation for left-sided breast cancer with or without ABC. Methods and Materials: Patients with stage I to III left breast cancer requiring adjuvant radiation therapy (XRT) were randomized to ABC or No-ABC. Myocardial perfusion was evaluated by SPECT scans (before and 6 months after BCW radiation) using 2 methods: (1) fully automated quantitative polar mapping; and (2) semiquantitative visual assessment. The left ventricle was divided into 20 segments for the polar map and 17 segments for the visual method. Segments were grouped by anatomical rings (apical, mid, basal) or by coronary artery distribution. For the visual method, 2 nuclear medicine physicians, blinded to treatment groups, scored each segment's perfusion. Scores were analyzed with nonparametric tests and linear regression. Results: Between 2006 and 2010, 57 patients were enrolled and 43 were available for analysis. The cohorts were well matched. The apical and left anterior descending coronary artery segments had significant decreases in perfusion on SPECT scans in both ABC and No-ABC cohorts. In unadjusted and adjusted analyses, controlling for pretreatment perfusion score, age, and chemotherapy, ABC was not significantly associated with prevention of perfusion deficits. Conclusions: In this randomized controlled trial, ABC does not appear to prevent radiation-induced cardiac perfusion deficits.
Estimation of stature from the foot and its segments in a sub-adult female population of North India
2011-01-01
Background Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. Methods The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. Results The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. Conclusions The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. 
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults. PMID:22104433
Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam
2011-11-21
Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D
2018-02-01
OBJECTIVE A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.
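Several of the records above hinge on segmented regression of interrupted time series data. As a minimal sketch (all data simulated, not taken from any of the studies cited), the change in level and slope at the intervention point can be estimated by ordinary least squares on two indicator-derived terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 24, 24                        # hypothetical monthly series
t = np.arange(n_pre + n_post, dtype=float)    # time since study start
post = (t >= n_pre).astype(float)             # 1 after the intervention
t_post = np.where(post > 0, t - n_pre, 0.0)   # time since the intervention

# Simulated outcome: level change of -5 and slope change of -0.3 at the interruption
y = 50 + 0.2 * t - 5 * post - 0.3 * t_post + rng.normal(0, 1, t.size)

# Segmented regression: y = b0 + b1*t + b2*post + b3*t_post
X = np.column_stack([np.ones_like(t), t, post, t_post])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"pre-slope {b1:.2f}, level change {b2:.2f}, slope change {b3:.2f}")
```

Here b2 estimates the immediate level change and b3 the change in trend. Residual autocorrelation, which these studies must address, is ignored in this sketch.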
Marwick, Charis A; Guthrie, Bruce; Pringle, Jan E C; Evans, Josie M M; Nathwani, Dilip; Donnan, Peter T; Davey, Peter G
2014-12-01
Antibiotic administration to inpatients developing sepsis in general hospital wards was frequently delayed. We aimed to reproduce improvements in sepsis management reported in other settings. Ninewells Hospital, an 860-bed teaching hospital with quality improvement (QI) experience, in Scotland, UK. The intervention wards were 22 medical, surgical and orthopaedic inpatient wards. A multifaceted intervention, informed by baseline process data and questionnaires and interviews with junior doctors, evaluated using segmented regression analysis of interrupted time series (ITS) data. MEASURES FOR IMPROVEMENT: Primary outcome measure: antibiotic administration within 4 hours of sepsis onset. Secondary measures: antibiotics within 8 hours; mean and median time to antibiotics; medical review within 30 min for patients with a standardised early warning system score ≥ 4; blood cultures taken before antibiotic administration; blood lactate level measured. The intervention included printed and electronic clinical guidance, educational clinical team meetings including baseline performance data, audit and monthly feedback on performance. Performance against all study outcome measures improved post-intervention, but differences were small and ITS analysis did not attribute the observed changes to the intervention. Rigorous analysis of this carefully designed improvement intervention could not confirm significant effects. Statistical analysis of many such studies is inadequate, and there is insufficient reporting of negative studies. In light of recent evidence, involving senior clinical team members in verbal feedback and action planning may have made the intervention more effective. Our focus on rigorous intervention design and evaluation was at the expense of iterative refinement, which likely reduced the effect. This highlights the necessary, but challenging, requirement to invest in all three components for effective QI.
Li, Pu; Qin, Chao; Cao, Qiang; Li, Jie; Lv, Qiang; Meng, Xiaoxin; Ju, Xiaobing; Tang, Lijun; Shao, Pengfei
2016-10-01
To evaluate the feasibility and efficiency of laparoscopic partial nephrectomy (LPN) with segmental renal artery clamping, and to analyse the factors affecting postoperative renal function. We conducted a retrospective analysis of 466 consecutive patients undergoing LPN using main renal artery clamping (group A, n = 152) or segmental artery clamping (group B, n = 314) between September 2007 and July 2015 in our department. Blood loss, operating time, warm ischaemia time (WIT) and renal function were compared between groups. Univariable and multivariable linear regression analyses were applied to assess the correlations of selected variables with postoperative glomerular filtration rate (GFR) reduction. Volumetric data and estimated GFR (eGFR) of a subset of 60 patients in group B were compared with GFR to evaluate the correlation between these functional variables and preserved renal function after LPN. The novel technique slightly increased operating time, WIT and intra-operative blood loss (P < 0.001), while it provided better postoperative renal function (P < 0.001) compared with the conventional technique. The blocking method and tumour characteristics were independent factors affecting GFR reduction, while WIT was not an independent factor. Correlation analysis showed that eGFR correlated better with GFR than kidney volume did (R² = 0.794 vs R² = 0.199) in predicting renal function after LPN. LPN with segmental artery clamping minimizes warm ischaemia injury and provides better early postoperative renal function compared with clamping the main renal artery. Kidney volume has a significantly inferior role compared with eGFR in predicting preserved renal function. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
MacBride-Stewart, Sean; Marwick, Charis; Houston, Neil; Watt, Iain; Patton, Andrea; Guthrie, Bruce
2017-01-01
Background It is uncertain whether improvements in primary care high-risk prescribing seen in research trials can be realised in the real-world setting. Aim To evaluate the impact of a 1-year system-wide phase IV prescribing safety improvement initiative, which included education, feedback, support to identify patients to review, and small financial incentives. Design and setting An interrupted time series analysis of targeted high-risk prescribing in all 56 general practices in NHS Forth Valley, Scotland, was performed. In 2013–2014, this focused on high-risk non-steroidal anti-inflammatory drugs (NSAIDs) in older people and NSAIDs with oral anticoagulants; in 2014–2015, it focused on antipsychotics in older people. Method The primary analysis used segmented regression analysis to estimate impact at the end of the intervention, and 12 months later. The secondary analysis used difference-in-differences methods to compare Forth Valley changes with those in NHS Greater Glasgow and Clyde (GGC). Results In the primary analysis, pre-existing downward trends in all three NSAID measures steepened significantly following implementation of the intervention. At the end of the intervention period, 1221 fewer patients than expected were prescribed a high-risk NSAID. In contrast, antipsychotic prescribing in older people increased slowly over time, with no intervention-associated change. In the secondary analysis, reductions at the end of the intervention period in all three NSAID measures were statistically significantly greater in NHS Forth Valley than in NHS GGC, but only significantly greater for two of these measures 12 months after the intervention finished. Conclusion There were substantial and sustained reductions in the high-risk prescribing of NSAIDs, although with some waning of effect 12 months after the intervention ceased. The same intervention had no effect on antipsychotic prescribing in older people. PMID:28347986
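The difference-in-differences comparison used in the secondary analysis reduces, in its simplest two-period form, to subtracting the comparator region's pre-post change from the intervention region's. A minimal sketch with assumed prescribing rates (the numbers are invented, not taken from the study):

```python
# Hypothetical mean high-risk prescribing rates per 1000 registered patients,
# before and after the intervention, in the two regions (invented values)
rates = {
    ("forth_valley", "pre"): 32.0, ("forth_valley", "post"): 24.0,
    ("ggc", "pre"): 30.0, ("ggc", "post"): 28.0,
}

change_fv = rates[("forth_valley", "post")] - rates[("forth_valley", "pre")]
change_ggc = rates[("ggc", "post")] - rates[("ggc", "pre")]

# The comparator's change estimates the secular trend; the remainder is
# attributed to the intervention under the parallel-trends assumption.
did = change_fv - change_ggc
print(did)  # -6.0
```

In practice both studies above fit this as a regression with region, period, and their interaction, which also yields a standard error for the estimate.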
MacBride-Stewart, Sean; Marwick, Charis; Houston, Neil; Watt, Iain; Patton, Andrea; Guthrie, Bruce
2017-05-01
Contour-Driven Atlas-Based Segmentation
Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina
2016-01-01
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
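The key idea of letting image contours induce a non-stationary kernel can be illustrated in one dimension: a squared-exponential kernel is zeroed across an assumed contour so seed labels do not leak over the boundary, and the Gaussian-process posterior mean gives soft labels. Every value below (positions, contour location, length scale, seeds) is invented for illustration:

```python
import numpy as np

x = np.arange(10, dtype=float)  # ten pixel positions along a 1D profile
contour = 4.5                   # assumed contour between pixels 4 and 5

def kernel(a, b, length=2.0):
    # Non-stationary kernel: correlation is cut across the contour
    if (a < contour) != (b < contour):
        return 0.0
    return np.exp(-0.5 * ((a - b) / length) ** 2)

K = np.array([[kernel(a, b) for b in x] for a in x])

# Condition on two seeds: pixel 1 labelled 1 (object), pixel 8 labelled 0
obs = np.array([1, 8])
y_obs = np.array([1.0, 0.0])
K_oo = K[np.ix_(obs, obs)] + 1e-6 * np.eye(2)    # jitter for stability
mean = K[:, obs] @ np.linalg.solve(K_oo, y_obs)  # GP posterior mean label map
print(mean.round(2))
```

Pixels left of the contour are pulled toward the object label while those to the right stay at 0, because the kernel carries no information across the boundary.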
Minet, L; Gehr, R; Hatzopoulou, M
2017-11-01
The development of reliable measures of exposure to traffic-related air pollution is crucial for the evaluation of the health effects of transportation. Land-use regression (LUR) techniques have been widely used for the development of exposure surfaces; however, these surfaces are often highly sensitive to the data collected. With the rise of inexpensive air pollution sensors paired with GPS devices, we witness the emergence of mobile data collection protocols. For the same urban area, can we achieve a 'universal' model irrespective of the number of locations and sampling visits? Can we trade the temporal representation of fixed-point sampling for a larger spatial extent afforded by mobile monitoring? This study highlights the challenges of short-term mobile sampling campaigns in terms of the resulting exposure surfaces. A mobile monitoring campaign was conducted in 2015 in Montreal; nitrogen dioxide (NO2) levels at 1395 road segments were measured under repeated visits. We developed LUR models based on sub-segments, categorized in terms of the number of visits per road segment. We observe that LUR models were highly sensitive to the number of road segments and to the number of visits per road segment. The associated exposure surfaces were also highly dissimilar. Copyright © 2017 Elsevier Ltd. All rights reserved.
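A land-use regression surface is, at its core, a linear model of measured concentrations on GIS-derived predictors. The sketch below simulates NO2 at hypothetical road segments (all predictors and coefficients are invented) and fits the model by least squares; re-fitting on random subsets of segments or visits is a direct way to reproduce the sensitivity the study reports:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical road segments

# Invented land-use predictors for each segment
traffic = rng.uniform(100, 5000, n)    # vehicles/day on the segment
dist_major = rng.uniform(10, 500, n)   # distance to nearest major road (m)
green = rng.uniform(0, 1, n)           # green-space share around the segment

# Simulated NO2 (ppb) generated from known coefficients plus noise
no2 = 10 + 0.004 * traffic - 0.01 * dist_major - 5 * green + rng.normal(0, 2, n)

# LUR fit: concentration regressed on the land-use variables
X = np.column_stack([np.ones(n), traffic, dist_major, green])
coef = np.linalg.lstsq(X, no2, rcond=None)[0]
r2 = 1 - ((no2 - X @ coef) ** 2).sum() / ((no2 - no2.mean()) ** 2).sum()
print(coef.round(4), round(r2, 2))
```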
Rashno, Abdolreza; Nazari, Behzad; Koozekanani, Dara D.; Drayna, Paul M.; Sadri, Saeed; Rabbani, Hossein
2017-01-01
A fully-automated method based on graph shortest path, graph cut and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes for exudative age-related macular degeneration (EAMD) subjects. The proposed method includes three main steps: 1) The inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using proposed methods based on graph shortest path in the NS domain. A flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal and sub-RPE, are located above it. 2) Seed points for fluid (object) and tissue (background) are initialized for graph cut by the proposed automated method. 3) A new cost function is proposed in kernel space, and is minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven and the quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred to as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms the previously proposed methods by 18% and 21% with respect to the dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11% and 12% with respect to the dice coefficient, sensitivity and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves a true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis. PMID:29059257
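The seeded graph-cut step can be made concrete on a toy problem. Below, four "pixels" sit between a source (the fluid seed) and a sink (the tissue seed); unary capacities encode how fluid-like each pixel looks and pairwise capacities encode smoothness. All capacities are invented, and a plain Edmonds-Karp max-flow stands in for the optimised solvers used in practice:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; returns the flow value and the residual matrix."""
    n = len(cap)
    res = [row[:] for row in cap]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:        # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if res[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow, res
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                        # push flow, update residuals
            res[parent[v]][v] -= bottleneck
            res[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Nodes: 0 = source (fluid seed), 1-4 = pixels, 5 = sink (tissue seed).
# Pixels 1-2 look fluid-like, 3-4 tissue-like; the 3s are smoothness terms.
cap = [
    [0, 9, 8, 1, 1, 0],
    [0, 0, 3, 0, 0, 1],
    [0, 3, 0, 3, 0, 1],
    [0, 0, 3, 0, 3, 8],
    [0, 0, 0, 3, 0, 9],
    [0, 0, 0, 0, 0, 0],
]
flow, res = max_flow(cap, 0, 5)

# After the min cut, pixels still reachable from the source take the fluid label
seen, q = {0}, deque([0])
while q:
    u = q.popleft()
    for v in range(6):
        if res[u][v] > 0 and v not in seen:
            seen.add(v)
            q.append(v)
print(flow, sorted(seen - {0}))  # 7 [1, 2]
```

The cut separates pixels 1-2 (fluid) from 3-4 (tissue), paying the smoothness cost once at the boundary, which is exactly the behaviour the kernel-space cost function above is designed to exploit.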
Waltimo-Sirén, Janna; Laatikainen, Tuula; Haukka, Jari; Ekholm, Marja
2016-01-01
Objectives: Dental panoramic tomography is the most frequent examination among 7–12-year-olds, according to the Radiation Safety and Nuclear Authority of Finland. At those ages, dental panoramic tomographs (DPTs) are mostly obtained for orthodontic reasons. Children's dose reduction by trimming the field size to the area of interest is important because of their high radiosensitivity. Yet, the majority of DPTs in this age group are still taken by using an adult programme and never by using a segmented programme. The purpose of the present study was to raise the awareness of dental staff with respect to children's radiation safety, to increase the application of segmented and child DPT programmes by further educating the whole dental team and to evaluate the outcome of the educational intervention. Methods: A five-step intervention programme, focusing on DPT field limitation possibilities, was carried out in community-based dental care as a part of mandatory continuing education in radiation protection. Application of segmented and child DPT programmes was thereafter prospectively followed up during a 1-year period and compared with our similar data from 2010 using a logistic regression analysis. Results: Application of the child programme increased by 9% and the segmented programme by 2%, reaching statistical significance (odds ratio 1.68; 95% confidence interval 1.23–2.30; p-value < 0.001). The number of repeated exposures remained at an acceptable level. The segmented DPTs were most frequently taken from the maxillary lateral incisor–canine area. Conclusions: The educational intervention resulted in improvement of radiological practice with respect to the radiation safety of children during dental panoramic tomography. Segmented and child DPT programmes can be applied successfully in dental practice for children. PMID:27142159
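The reported odds ratio and confidence interval can be reproduced from a 2x2 table; in a univariable logistic regression of programme use on period, the period coefficient equals the log of this odds ratio. The counts below are invented for illustration and do not reproduce the study's 1.68:

```python
import math

# Hypothetical DPT counts: child/segmented programme used vs adult programme,
# before and after the educational intervention (invented numbers)
used_after, other_after = 330, 670
used_before, other_before = 230, 770

odds_after = used_after / other_after
odds_before = used_before / other_before
or_ = odds_after / odds_before                 # odds ratio for "post vs pre"

# Woolf 95% confidence interval on the log-odds-ratio scale
se = math.sqrt(1/used_after + 1/other_after + 1/used_before + 1/other_before)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.65 1.35 2.01
```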
Pakbaznejad Esmaeili, Elmira; Waltimo-Sirén, Janna; Laatikainen, Tuula; Haukka, Jari; Ekholm, Marja
2016-05-23
Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna
2018-01-01
Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
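A quadratic-regression growth chart of this kind can be sketched in a few lines. Everything below is simulated (cohort size aside, no value comes from the study); the 3rd-percentile curve is drawn under a normal approximation of the residuals:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 199  # same cohort size as the study, but all values below are simulated

ga = rng.uniform(18, 37, n)  # gestational age (weeks)
# Invented quadratic growth trend for brain volume (cm^3) plus noise
vol = 0.45 * ga**2 - 8 * ga + 50 + rng.normal(0, 8, n)

# Quadratic regression, as used for the normal growth chart
a, b, c = np.polyfit(ga, vol, deg=2)
predict = np.poly1d([a, b, c])

# Percentile curves from the residual spread (normal approximation);
# z = -1.881 corresponds to the 3rd percentile used to flag IUGR
resid_sd = (vol - predict(ga)).std()
p3_at_30w = predict(30) - 1.881 * resid_sd
print(round(a, 2), round(p3_at_30w, 1))
```

A fetus whose measured volume at 30 weeks falls below `p3_at_30w` would sit below the 3rd-percentile curve, mirroring how the nine IUGR fetuses were flagged.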
Kim, Jongin; Park, Hyeong-jun
2016-01-01
The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels. This study on the classification of imagined speech may contribute to the development of silent speech BCI systems. PMID:28097128
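The per-trial pipeline described above (thirty segments, four statistics each, majority vote over segment outputs) can be sketched as follows. The data are random noise and the classifier itself is left out, since only the feature extraction and the voting rule are being illustrated:

```python
import numpy as np

def segment_features(trial, n_segments=30):
    """Split one single-trial recording into segments and extract
    mean, variance, standard deviation, and skewness per segment."""
    feats = []
    for seg in np.array_split(np.asarray(trial, dtype=float), n_segments):
        m, s = seg.mean(), seg.std()
        skew = ((seg - m) ** 3).mean() / s**3 if s > 0 else 0.0
        feats.append([m, seg.var(), s, skew])
    return np.array(feats)              # shape: (n_segments, 4)

def vote(segment_outputs):
    """Trial label = most frequent label among the segment outputs."""
    values, counts = np.unique(segment_outputs, return_counts=True)
    return values[np.argmax(counts)]

rng = np.random.default_rng(3)
trial = rng.normal(0, 1, 3000)          # simulated single-trial EEG channel
features = segment_features(trial)
label = vote([0, 1, 1, 1, 0])           # toy segment outputs from a classifier
print(features.shape, label)            # (30, 4) 1
```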
Nogami, Yoshie; Ishizu, Tomoko; Atsumi, Akiko; Yamamoto, Masayoshi; Kawamura, Ryo; Seo, Yoshihiro; Aonuma, Kazutaka
2013-03-01
Recently developed vector flow mapping (VFM) enables evaluation of local flow dynamics without angle dependency. This study used VFM to evaluate quantitatively an index of intraventricular haemodynamic kinetic energy in patients with left ventricular (LV) diastolic dysfunction and to compare them with normal subjects. We studied 25 patients with estimated high left atrial (LA) pressure (pseudonormal: PN group) and 36 normal subjects (control group). The left ventricle was divided into basal, mid, and apical segments. Intraventricular haemodynamic energy was evaluated in the dimension of speed and was defined as the kinetic energy index. We calculated this index and created time-energy index curves. The time interval from the electrocardiogram (ECG) R wave to the peak index was measured, and time differences of the peak index between the basal and other segments were defined as ΔT-mid and ΔT-apex. In both groups, the early diastolic peak kinetic energy index in the mid and apical segments was significantly lower than that in the basal segment. Time to peak index did not differ between apical, mid, and basal segments in the control group but was significantly longer in the apex than in the basal segment in the PN group. ΔT-mid and ΔT-apex were significantly larger in the PN group than in the control group. Multiple regression analysis showed sphericity index and E/E' to be significant independent variables determining ΔT-apex. Retarded apical kinetic energy fluid dynamics were detected using VFM and were closely associated with LV spherical remodelling in patients with high LA pressure.
The effect of obesity and gender on body segment parameters in older adults
Chambers, April J.; Sukits, Alison L.; McCrory, Jean L.; Cham, Rakié
2010-01-01
Background Anthropometry is a necessary aspect of aging-related research, especially in biomechanics and injury prevention. Little information is available on inertial parameters in the geriatric population that account for gender and obesity effects. The goal of this study was to report body segment parameters in adults aged 65 years and older, and to investigate the impact of aging, gender and obesity. Methods Eighty-three healthy old (65–75 yrs) and elderly (>75 yrs) adults were recruited to represent a range of body types. Participants underwent a whole body dual energy x-ray absorptiometry scan. Analysis was limited to segment mass, length, longitudinal center of mass position, and frontal plane radius of gyration. A mixed-linear regression model was performed using gender, obesity, age group and two-way and three-way interactions (α=0.05). Findings Mass distribution varied with obesity and gender. Males had greater trunk and upper extremity mass while females had a higher lower extremity mass. In general, obese elderly adults had significantly greater trunk segment mass with less thigh and shank segment mass than all others. Gender and obesity effects were found in center of mass and radius of gyration. Non-obese individuals possessed a more distal thigh and shank center of mass than obese. Interestingly, females had more distal trunk center of mass than males. Interpretation Age, obesity and gender have a significant impact on segment mass, center of mass and radius of gyration in old and elderly adults. This study underlines the need to consider age, obesity and gender when utilizing anthropometric data sets. PMID:20005028
MicroCT angiography detects vascular formation and regression in skin wound healing.
Urao, Norifumi; Okonkwo, Uzoagu A; Fang, Milie M; Zhuang, Zhen W; Koh, Timothy J; DiPietro, Luisa A
2016-07-01
Properly regulated angiogenesis and arteriogenesis are essential for effective wound healing. Tissue injury induces robust new vessel formation and subsequent vessel maturation, which involves vessel regression and remodeling. Although formation of functional vasculature is essential for healing, alterations in vascular structure over the time course of skin wound healing are not well understood. Here, using high-resolution ex vivo X-ray micro-computed tomography (microCT), we describe the vascular network during healing of skin excisional wounds with highly detailed three-dimensional (3D) reconstructed images and associated quantitative analysis. We found that relative vessel volume, surface area and branching number are significantly decreased in wounds from day 7 to days 14 and 21. Segmentation and skeletonization analysis of selected branches from high-resolution images as small as 2.5μm voxel size show that branching orders are decreased in the wound vessels during healing. In histological analysis, we found that the contrast agent fills mainly arterioles, but not small capillaries nor large veins. In summary, high-resolution microCT revealed dynamic alterations of vessel structures during wound healing. This technique may be useful as a key tool in the study of the formation and regression of wound vessels. Copyright © 2016 Elsevier Inc. All rights reserved.
Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of the m-dimensional space, and it is required to restore the value of the function at points outside D. The paper proposes a new extrapolation method built on the interpolation scheme of metric analysis, consisting of two stages. In the first stage, metric analysis is used to interpolate the function at points of D lying on the straight-line segment that connects the centre of D with the point M at which the value of the function is to be restored. In the second stage, an autoregression model combined with metric analysis predicts the function values along that straight-line segment beyond D up to the point M. A numerical example demonstrates the efficiency of the method.
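The second stage, prediction along the straight-line segment beyond D, rests on an autoregressive model fitted to the interpolated values. A minimal sketch of that stage alone (the metric-analysis interpolation of the first stage is replaced here by an already-computed sequence of values along the segment):

```python
import numpy as np

def ar_extrapolate(values, order=2, steps=3):
    """Fit an AR(order) model by least squares and predict `steps` ahead."""
    v = np.asarray(values, dtype=float)
    # Lagged design: v[t] ~ c + a1*v[t-1] + ... + ap*v[t-p]
    rows = [v[i - order:i][::-1] for i in range(order, len(v))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef = np.linalg.lstsq(X, v[order:], rcond=None)[0]
    out = list(v)
    for _ in range(steps):
        lags = np.array(out[-order:][::-1])
        out.append(coef[0] + coef[1:] @ lags)
    return np.array(out[len(v):])

# Interpolated values along the segment from the centre of D toward M
# (a linear trend here, so the continuation should stay on the same line)
inside = np.arange(10, dtype=float)
print(ar_extrapolate(inside).round(6))  # [10. 11. 12.]
```

For a linear trend any exact AR(2) fit reproduces the line, so the forward predictions continue it; real metric-analysis interpolants would of course be noisier and curved.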
Impact of flavor attributes on consumer liking of Swiss cheese.
Liggett, R E; Drake, M A; Delwiche, J F
2008-02-01
Although Swiss cheese is growing in popularity, no research has examined what flavor characteristics consumers desire in Swiss cheese, which was the main objective of this study. To this end, a large group of commercially available Swiss-type cheeses (10 domestic Swiss cheeses, 4 domestic Baby Swiss cheeses, and one imported Swiss Emmenthal) were assessed both by 12 trained panelists for flavor and feeling factors and by 101 consumers for overall liking. In addition, a separate panel of 24 consumers rated the same cheeses for dissimilarity. On the basis of liking ratings, the 101 consumers were segmented by cluster analysis into 2 groups: non-distinguishers (n = 40) and varying responders (n = 61). Partial least squares regression, a statistical modeling technique that relates 2 data sets (in this case, a set of descriptive analysis data and a set of consumer liking data), was used to determine which flavor attributes assessed by the trained panel were important variables in overall liking of the cheeses for the varying responders. The model explained 93% of the liking variance on 3 normally distributed components and had 49% predictability. Diacetyl, whey, milk fat, and umami were found to be drivers of liking, whereas cabbage, cooked, and vinegar were drivers of disliking. Nutty flavor was not particularly important to liking, and it was present in only 2 of the cheeses. The dissimilarity ratings were combined with the liking ratings of both segments and analyzed by probabilistic multidimensional scaling. The ideals of each segment completely overlapped, with the variance of the varying responders being smaller than the variance of the non-distinguishers. This model indicated that the Baby Swiss cheeses were closer to the consumers' ideals than were the other cheeses.
Taken together, the 2 models suggest that the partial least squares regression failed to capture one or more attributes that contribute to consumer acceptance, although the descriptive analysis of flavor and feeling factors was able to account for 93% of the variance in the liking ratings. These findings indicate the flavor characteristics Swiss cheese producers should optimize, and minimize, to create cheeses that best match consumer desires.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels, daily and hourly, and the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Estimation of Total Length of Femur From Its Fragments in South Indian Population.
Solan, Shweta; Kulkarni, Roopa
2013-10-01
Establishing the identity of a deceased person assumes great medicolegal importance. Stature is one of the criteria used to establish identity, and the lengths of long bones are needed to estimate stature. The aim was to determine the lengths of femoral fragments and compare them with the total length of the femur in a south Indian population, which will help to estimate the stature of an individual using standard regression formulae. A total of 150 adult, fully ossified, dry, processed femora (72 left and 78 right) were examined. Each femur was divided into five segments using predetermined points. The lengths of the five segments and the maximum length of the femur were measured to the nearest millimeter. Values were obtained in cm (mean ± SD), and the mean total length of femora on the left and right sides was measured. The proportion of each segment to the total length was also calculated, which will help with stature estimation using standard regression formulae. The mean total length of femora was 43.54 ± 2.7 cm on the left side and 43.42 ± 2.4 cm on the right side. The measurements of segments 1, 2, 3, 4, and 5 were 8.06 ± 0.71, 8.25 ± 1.24, 10.35 ± 2.21, 13.94 ± 1.93, and 2.77 ± 0.53 cm on the left side, and 8.09 ± 0.70, 8.30 ± 1.34, 10.44 ± 1.91, 13.50 ± 1.54, and 3.09 ± 0.41 cm on the right side. The p value for all segments was significant (<0.001). When the segments of the right and left femora were compared, the p value for segment 5 was found to be <0.001. Comparison between the different segments of the femur showed significance for all segments.
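Standard regression formulae of this kind reduce to a simple linear fit of total length on fragment length; a minimal sketch with made-up measurements (illustrative values, not the study's data):

```python
import numpy as np

# Hypothetical paired measurements (cm): one femoral segment length
# and the corresponding maximum femur length.
segment = np.array([7.5, 7.9, 8.1, 8.3, 8.6, 8.9])
total = np.array([41.2, 42.5, 43.1, 43.8, 44.6, 45.9])

# Standard regression formula: total = a * segment + b, fitted by least squares.
a, b = np.polyfit(segment, total, 1)

def estimate_total(segment_cm):
    """Predict maximum femur length (cm) from a fragment measurement."""
    return a * segment_cm + b
```

The predicted total length would then feed into a second stature-on-femur-length regression.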
Anatomy guided automated SPECT renal seed point estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar; Kumar, Sailendra
2010-04-01
Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point locations of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much across patients. A model was generated based on manual segmentation of the bladder and both kidneys in 10 patient datasets (including sum and max images), and the centroid was estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed between ground-truth centroid coordinates and the values estimated by our approach are acceptable: approximately 1%, 6%, and 2% in the X coordinates and approximately 2%, 5%, and 8% in the Y coordinates of the bladder, left kidney, and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
Shkarubo, Alexey N; Kuleshov, Alexander A; Chernov, Ilia V; Vetrile, Marchel S
2017-06-01
Presentation of clinical cases involving successful anterior stabilization of the C1-C2 segment in patients with invaginated C2 odontoid process and Chiari malformation type I. Clinical case description. Two patients with C2 odontoid processes invagination and Chiari malformation type I were surgically treated using the transoral approach. In both cases, anterior decompression of the upper cervical region was performed, followed by anterior stabilization of the C1-C2 segment. In 1 of the cases, this procedure was performed after posterior decompression, which led to transient regression of neurologic symptoms. In both cases, custom-made cervical plates were used for anterior stabilization of the C1-C2 segment. During the follow-up period of more than 2 years, a persistent regression of both the neurologic symptoms and Chiari malformation was observed. Anterior decompression followed by anterior stabilization of the C1-C2 segment is a novel and promising approach to treating Chiari malformation type I in association with C2 odontoid process invagination. Copyright © 2017 Elsevier Inc. All rights reserved.
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
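The algebraic equivalence invoked above, Poisson regression with a log(time) offset versus survival regression with exponentially distributed times, can be checked numerically: for binary event indicators the two log-likelihoods differ only by a term that does not involve the rate parameter, so they yield identical estimates (a minimal sketch with made-up event data):

```python
import numpy as np

# Event indicators (1 = transition observed) and follow-up times.
delta = np.array([1, 0, 1, 1, 0], dtype=float)
t = np.array([2.0, 5.0, 1.5, 3.0, 4.0])

def poisson_loglik(lam):
    # Poisson likelihood for y = delta with mean mu = lam * t,
    # i.e. a log-linear model with a log(t) offset (delta! = 1 since delta is 0/1).
    mu = lam * t
    return np.sum(delta * np.log(mu) - mu)

def exp_survival_loglik(lam):
    # Exponential survival likelihood: lam^delta * exp(-lam * t).
    return np.sum(delta * np.log(lam) - lam * t)

# The two differ by sum(delta * log(t)), which is constant in lam.
const = np.sum(delta * np.log(t))
```

Because the difference is constant in the rate parameter, maximizing either likelihood gives the same estimate, which is what licenses fitting piecewise constant hazards with Poisson software.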
Tobit analysis of vehicle accident rates on interstate highways.
Anastasopoulos, Panagiotis Ch; Tarko, Andrew P; Mannering, Fred L
2008-03-01
There has been an abundance of research that has used Poisson models and their variants (negative binomial and zero-inflated models) to improve our understanding of the factors that affect accident frequencies on roadway segments. This study explores the application of an alternate method, tobit regression, by viewing vehicle accident rates directly (instead of frequencies) as a continuous variable that is left-censored at zero. Using data from vehicle accidents on Indiana interstates, the estimation results show that many factors relating to pavement condition, roadway geometrics, and traffic characteristics significantly affect vehicle accident rates.
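A tobit model of this kind, a latent linear model for the accident rate that is left-censored at zero, can be fitted by maximum likelihood; a minimal sketch on simulated data (the coefficients are illustrative, not the Indiana estimates):

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y):
    """Negative log-likelihood of a tobit model left-censored at zero.
    params = [beta_0, ..., beta_k, log_sigma]."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    ll = np.where(
        y <= 0,
        stats.norm.logcdf(-xb / sigma),                    # censored: P(latent <= 0)
        stats.norm.logpdf((y - xb) / sigma) - log_sigma,   # observed rate density
    )
    return -ll.sum()

# Simulated accident-rate data: latent rate = 0.5 + 1.0 * x + noise, floored at 0.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = np.maximum(X @ np.array([0.5, 1.0]) + rng.normal(size=500), 0.0)

res = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
beta_hat = res.x[:2]  # should recover roughly [0.5, 1.0]
```

Ordinary least squares on the censored rates would bias these coefficients toward zero, which is the motivation for the tobit formulation.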
Nistal-Nuño, Beatriz
2017-09-01
In Chile, a new law introduced in March 2012 decreased the legal blood alcohol concentration (BAC) limit for driving while impaired from 1 to 0.8 g/l and the legal BAC limit for driving under the influence of alcohol from 0.5 to 0.3 g/l. The goal is to assess the impact of this new law on mortality and morbidity outcomes in Chile. A review of national databases in Chile was conducted from January 2003 to December 2014. Segmented regression analysis of interrupted time series was used to analyze the data. In a series of multivariable linear regression models, the change in intercept and slope in the monthly incidence rate of traffic deaths and injuries associated with alcohol per 100,000 inhabitants was estimated from pre-intervention to post-intervention, while controlling for secular changes. In nested regression models, potential confounding seasonal effects were accounted for. All analyses were performed at a two-sided significance level of 0.05. Immediate level drops in all the monthly rates, relative to the end of the pre-law period, were observed after the law in the majority of models and in all the de-seasonalized models, although statistical significance was reached only in the model for injuries related to alcohol. After the law, the estimated monthly rate dropped abruptly by -0.869 for injuries related to alcohol and by -0.859 adjusting for seasonality (P < 0.001). Regarding the post-law long-term trends, a steeper decreasing trend after the law was evident in the models for deaths related to alcohol, although these differences were not statistically significant. Strong evidence of a reduction in traffic injuries related to alcohol was found following the law in Chile. Although insufficient evidence of a statistically significant effect was found for the beneficial effects seen on deaths and overall injuries, potential clinically important effects cannot be ruled out. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
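The segmented regression model underlying such interrupted time series analyses, a change in intercept (level) and slope at the intervention point, can be sketched with ordinary least squares (a minimal sketch on a simulated monthly series, not the Chilean data):

```python
import numpy as np

def segmented_fit(y, change_point):
    """Fit the standard ITS segmented regression
    y_t = b0 + b1*t + b2*post_t + b3*(t - change_point)*post_t + e_t,
    where b2 is the level change and b3 the slope change at the intervention."""
    t = np.arange(len(y), dtype=float)
    post = (t >= change_point).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - change_point) * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [intercept, pre slope, level change, slope change]

# Simulated monthly rate: flat pre-trend, then an abrupt drop of 2 units
# and a new downward slope of -0.1 per month after month 24.
t = np.arange(48)
y = 10.0 - 2.0 * (t >= 24) - 0.1 * (t - 24) * (t >= 24)
b0, b1, level_change, slope_change = segmented_fit(y, 24)
```

Seasonal adjustment, as in the nested models above, would add harmonic or monthly indicator terms to the same design matrix.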
Tsai, Yu Hsin; Stow, Douglas; Weeks, John
2013-01-01
The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings based on QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Feature Analyst software based on a spatial contextual classifier and ENVI Feature Extraction that uses a true object-based image analysis approach of image segmentation and segment classification were evaluated. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results compared to the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R2 = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover and land use and demographic change. PMID:24415810
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
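The STAPLE algorithm combines candidate segmentations by expectation-maximization, alternating between a probabilistic consensus and per-rater performance estimates; a simplified binary sketch (the full algorithm also estimates a spatially varying prior, which is held fixed here):

```python
import numpy as np

def staple_binary(D, n_iter=50):
    """Simplified STAPLE for binary segmentations via EM.
    D: (n_raters, n_voxels) array of 0/1 decisions.
    Returns consensus probabilities W and per-rater sensitivity p / specificity q."""
    D = np.asarray(D, dtype=float)
    n_raters, _ = D.shape
    p = np.full(n_raters, 0.9)   # initial sensitivities
    q = np.full(n_raters, 0.9)   # initial specificities
    prior = D.mean()             # fixed global prior on the true label
    for _ in range(n_iter):
        # E-step: P(true label = 1 | all raters' decisions), voxel by voxel.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate each rater's performance against the consensus.
        p = np.clip((D * W).sum(axis=1) / W.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum(), 1e-6, 1 - 1e-6)
    return W, p, q
```

Thresholding W at 0.5 gives the combined segmentation; the estimated p and q are the per-method reliability weights that let the consensus outperform any single input.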
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1984-01-01
The geometric quality of TM film and digital products is evaluated by making selective photomeasurements and by measuring the coordinates of known features on both the TM products and map products. These paired observations are related using a standard linear least-squares regression approach. Using regression equations and coefficients developed from 225 (TM film product) and 20 (TM digital product) control points, the map coordinates of test points are predicted. The residual error vectors were computed and an analysis of variance (ANOVA) was performed on the east and north residuals using nine image segments (blocks) as treatments. Based on the root mean square error of the 223 (TM film product) and 22 (TM digital product) test points, users of TM data can expect the planimetric accuracy of mapped points to be within 91 and 117 meters for the film products, and within 12 and 14 meters for the digital products.
Kim, So-Ra; Kwak, Doo-Ahn; Lee, Woo-Kyun; Son, Yowhan; Bae, Sang-Won; Kim, Choonsig; Yoo, Seongjin
2010-07-01
The objective of this study was to estimate the carbon storage capacity of Pinus densiflora stands using remotely sensed data, combining digital aerial photography with light detection and ranging (LiDAR) data. A digital canopy model (DCM), generated from the LiDAR data, was combined with aerial photography for segmenting the crowns of individual trees. To eliminate over- and under-segmentation errors, the combined image was smoothed using a Gaussian filtering method. The processed image was then segmented into individual trees using a marker-controlled watershed segmentation method. After measuring the crown area of the segmented individual trees, the diameter at breast height (DBH) of each tree was estimated using a regression function developed from the relationship observed between the field-measured DBH and crown area. The aboveground biomass of individual trees could then be calculated from the image-derived DBH using a regression function developed by the Korea Forest Research Institute. Carbon storage for individual trees was estimated by simple multiplication using the carbon conversion index (0.5), as suggested in guidelines from the Intergovernmental Panel on Climate Change. The mean carbon storage per individual tree was estimated and then compared with the field-measured value. This study suggests that the biomass and carbon storage in a large forest area can be effectively estimated using aerial photographs and LiDAR data.
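The estimation chain described above, crown area to DBH to biomass to carbon, is a composition of regression functions and a fixed carbon fraction; a minimal sketch with hypothetical allometric coefficients (the study's fitted functions and the KFRI coefficients are not reproduced):

```python
def dbh_from_crown_area(crown_area_m2, a=5.0, b=0.6):
    """Crown area (m^2) -> DBH (cm) via a power-law regression (illustrative a, b)."""
    return a * crown_area_m2 ** b

def biomass_from_dbh(dbh_cm, c=0.08, d=2.5):
    """DBH (cm) -> aboveground biomass (kg) via a power-law allometry (illustrative c, d)."""
    return c * dbh_cm ** d

def carbon_storage(crown_area_m2, carbon_fraction=0.5):
    """Carbon (kg) per tree: biomass times the IPCC carbon conversion index of 0.5."""
    dbh = dbh_from_crown_area(crown_area_m2)
    return carbon_fraction * biomass_from_dbh(dbh)
```

Summing `carbon_storage` over all segmented crowns yields the stand-level estimate.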
An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.
Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein
2017-12-22
The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
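The FNSW and FOSW baselines differ only in the step between consecutive windows; a minimal sketch:

```python
def sliding_windows(samples, size, overlap=0.0):
    """Split an accelerometer stream into fixed-size windows.
    overlap = 0.0 gives FNSW; 0 < overlap < 1 gives FOSW."""
    step = max(1, int(size * (1.0 - overlap)))
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]
```

Because the window boundaries are fixed in advance, a fall's pre-impact, impact, and post-impact samples can land in different windows, which is the loss of stage alignment the EvenT-ML approach is designed to avoid.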
NASA Astrophysics Data System (ADS)
Tustison, Nicholas J.; Contrella, Benjamin; Altes, Talissa A.; Avants, Brian B.; de Lange, Eduard E.; Mugler, John P.
2013-03-01
The utility of pulmonary functional imaging techniques, such as hyperpolarized 3He MRI, has encouraged their inclusion in research studies for longitudinal assessment of disease progression and the study of treatment effects. We present methodology for performing voxelwise statistical analysis of ventilation maps derived from hyperpolarized 3He MRI, which incorporates multivariate template construction using simultaneous acquisition of 1H and 3He images. Additional processing steps include intensity normalization, bias correction, 4-D longitudinal segmentation, and generation of expected ventilation maps prior to voxelwise regression analysis. The analysis is demonstrated on a cohort of eight individuals with diagnosed cystic fibrosis (CF) imaged five times at two-week intervals under a prescribed treatment schedule.
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120,000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the eight assembled databases, using the common evaluation metrics of sensitivity, specificity, accuracy, and the F1 measure. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprising 102,306 heart sounds. Average F1 scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirms the algorithm's effectiveness.
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
NASA Astrophysics Data System (ADS)
Pan, F.; Huang, X.; Chen, X.
2015-12-01
The radiative kernel method has been validated and widely used in the study of climate feedbacks. This study uses spectrally resolved longwave radiative kernels to examine the short-term water vapor feedbacks associated with ENSO cycles. Using a 500-year GFDL CM3 and a 100-year NCAR CCSM4 pre-industrial control simulation, we constructed two sets of longwave spectral radiative kernels. We then composited El Niño, La Niña, and ENSO-neutral states and estimated the water vapor feedbacks associated with the El Niño and La Niña phases of the ENSO cycles in both simulations. A similar analysis was applied to 35 years (1979-2014) of ECMWF ERA-Interim reanalysis data, which is treated as the observational result here. When the modeled and observed broadband feedbacks are compared, they show similar geographic patterns but noticeable discrepancies in the contrast between the tropics and extra-tropics. In particular, in the El Niño phase, the feedback estimated from the reanalysis is much greater than those from the model simulations. Considering the observational data span, we carried out a sensitivity test to explore the variability of feedbacks derived from 35 years of data. To do so, we calculated the water vapor feedback within every 35-year segment of the GFDL CM3 control run by two methods: one is to composite the El Niño or La Niña phases as described above, and the other is to regress the TOA flux perturbation caused by water vapor change (δR_H2O) against the global-mean surface temperature anomaly. We find that the short-term feedback strengths derived from the composite method can change considerably from one segment to another, while the feedbacks from the regression method are less sensitive to the choice of segment and their strengths are also much smaller than those from the composite analysis. This study suggests that caution is warranted when inferring long-term feedbacks from a few decades of observations.
When the spectral details of the global-mean feedbacks are examined, more inconsistencies are revealed in many spectral bands, especially the H2O continuum absorption bands and window regions. These discrepancies can be attributed to differences in the observed and modeled water vapor profiles in response to tropical SST.
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions, in terms of power index and correlation length, were used for the investigation. The non-causal 2D autoregressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis, and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of Ar/Aa from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
Chizewski, Michael G; Chiu, Loren Z F
2012-05-01
Joint angle is the relative rotation between two segments where one is a reference and assumed to be non-moving. However, rotation of the reference segment will influence the system's spatial orientation and joint angle. The purpose of this investigation was to determine the contribution of leg and calcaneal rotations to ankle rotation in a weight-bearing task. Forty-eight individuals performed partial squats recorded using a 3D motion capture system. Markers on the calcaneus and leg were used to model leg and calcaneal segment, and ankle joint rotations. Multiple linear regression was used to determine the contribution of leg and calcaneal segment rotations to ankle joint dorsiflexion. Regression models for left (R(2)=0.97) and right (R(2)=0.97) ankle dorsiflexion were significant. Sagittal plane leg rotation had a positive influence (left: β=1.411; right: β=1.418) while sagittal plane calcaneal rotation had a negative influence (left: β=-0.573; right: β=-0.650) on ankle dorsiflexion. Sagittal plane rotations of the leg and calcaneus were positively correlated (left: r=0.84, P<0.001; right: r=0.80, P<0.001). During a partial squat, the calcaneus rotates forward. Simultaneous forward calcaneal rotation with ankle dorsiflexion reduces total ankle dorsiflexion angle. Rear foot posture is reoriented during a partial squat, allowing greater leg rotation in the sagittal plane. Segment rotations may provide greater insight into movement mechanics that cannot be explained via joint rotations alone. Copyright © 2012 Elsevier B.V. All rights reserved.
Oliveira, Paula Duarte de; Wehrmeister, Fernando C; Pérez-Padilla, Rogelio; Gonçalves, Helen; Assunção, Maria Cecília F; Horta, Bernardo Lessa; Gigante, Denise P; Barros, Fernando C; Menezes, Ana Maria Baptista
Overweight/obesity has been reported to worsen pulmonary function (PF). This study aimed to examine the association between PF and several body composition (BC) measures in two population-based cohorts. We performed a cross-sectional analysis of individuals aged 18 and 30 years from two Pelotas Birth Cohorts in southern Brazil. PF was assessed by spirometry. Body measures collected included body mass index, waist circumference, skinfold thickness, percentages of total and segmented (trunk, arms and legs) fat mass (FM), and total fat-free mass (FFM). FM and FFM were measured by air-displacement plethysmography (BOD POD) and by dual-energy x-ray absorptiometry (DXA). Associations were verified through linear regressions stratified by sex, and adjusted for weight, height, skin color, and socioeconomic, behavioral, and perinatal variables. A total of 7347 individuals were included in the analyses (3438 and 3909 at 30 and 18 years, respectively). Most BC measures showed a significant positive association between PF and FFM, and a negative association with FM. For each additional percentage point of FM, measured by BOD POD, the forced vital capacity regression coefficient adjusted for height, weight, and skin color was, in men and women respectively, -33 mL (95% CI -38, -29) and -26 mL (95% CI -30, -22) at 18 years, and -30 mL (95% CI -35, -25) and -19 mL (95% CI -23, -14) at 30 years. All the BOD POD regression coefficients for FFM had the same magnitude as the FM coefficients but with positive sign (p<0.001 for all associations). All measures that distinguish FM from FFM (skinfold-thickness FM estimation, BOD POD, and total and segmental DXA measures of FM and FFM proportions) showed negative trends in the association of FM with PF for both ages and sexes. On the other hand, FFM showed a positive association with PF.
Nagarajan, Mahesh B; Huber, Markus B; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel
2013-10-01
Characterizing breast lesions as benign or malignant is especially difficult for small lesions; they do not exhibit typical characteristics of malignancy and are harder to segment since their margins are harder to visualize. Previous attempts at using dynamic or morphologic criteria to classify small lesions (mean lesion diameter of about 1 cm) have not yielded satisfactory results. The goal of this work was to improve the classification performance for such small, diagnostically challenging lesions while concurrently eliminating the need for precise lesion segmentation. To this end, we introduce a method for topological characterization of lesion enhancement patterns over time. Three Minkowski Functionals were extracted from all five post-contrast images of sixty annotated lesions on dynamic breast MRI exams. For each Minkowski Functional, topological features extracted from each post-contrast image of the lesions were combined into a high-dimensional texture feature vector. These feature vectors were classified in a machine learning task with support vector regression. For comparison, conventional Haralick texture features derived from gray-level co-occurrence matrices (GLCM) were also used. A new method for extracting thresholded GLCM features was also introduced and investigated here. The best classification performance was observed with the Minkowski Functionals area and perimeter, the thresholded GLCM features f8 and f9, and the conventional GLCM features f4 and f6. However, both Minkowski Functionals and thresholded GLCM features achieved these results without lesion segmentation, while the performance of conventional GLCM features deteriorated significantly when lesions were not segmented (p < 0.05). This suggests that such advanced spatio-temporal characterization can improve the classification performance achieved for such small lesions, while simultaneously eliminating the need for precise segmentation.
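In 2-D, the three Minkowski Functionals of a binary (thresholded) image are its area, perimeter, and Euler characteristic; a minimal numpy sketch computing them from the pixel complex (an illustration of the quantities, not the authors' implementation):

```python
import numpy as np

def minkowski_2d(img):
    """Return (area, perimeter, Euler characteristic) of a binary 2-D image.
    Area = pixel count; perimeter = count of exposed 4-neighbour edges;
    Euler number = vertices - edges + faces of the unit-square pixel complex."""
    img = np.pad(np.asarray(img, dtype=bool), 1)  # pad so borders are background
    area = int(img.sum())
    # Perimeter: foreground/background transitions along rows and columns.
    perimeter = int(np.sum(img[:, 1:] != img[:, :-1]) +
                    np.sum(img[1:, :] != img[:-1, :]))
    # Distinct unit edges and vertices belonging to at least one foreground pixel.
    n_edges = int(np.sum(img[1:, :] | img[:-1, :]) +
                  np.sum(img[:, 1:] | img[:, :-1]))
    n_vertices = int(np.sum(img[1:, 1:] | img[1:, :-1] |
                            img[:-1, 1:] | img[:-1, :-1]))
    euler = n_vertices - n_edges + area
    return area, perimeter, euler
```

Evaluating these functionals on each thresholded post-contrast image and concatenating them over time gives a feature vector of the kind classified above.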
Nyati, Lukhanyo H; Norris, Shane A; Cameron, Noel; Pettifor, John M
2006-05-01
Bones in the axial and appendicular skeletons exhibit heterogeneous growth patterns between different ethnic and sex groups. However, the influence of this differential growth on the expression of bone mineral content is not yet established. The aims of the present study were to investigate: 1) whether there are ethnic and sex differences in axial and appendicular dimensions of South African children; and 2) whether regional segment length is a better predictor of bone mass than stature. Anthropometric measurements of stature, weight, sitting height, and limb lengths were taken on 368 black and white, male and female 9-year-old children. DXA (dual-energy x-ray absorptiometry) scans of the distal ulna, distal radius, and hip and lumbar spine were also obtained. Analyses of covariance were performed to assess differences in limb lengths, adjusted for differences in stature. Multiple regression analyses were used to assess significant predictors of site-specific bone mass. Stature-adjusted means of limb lengths show that black boys have longer legs and humeri but shorter trunks than white boys. In addition, black children have longer forearms than white children, and girls have longer thighs than boys. The regression analysis demonstrated that site-specific bone mass was more strongly associated with regional segment length than stature, but this had little effect on the overall pattern of ethnic and sex differences. In conclusion, there is a differential effect of ethnicity and sex on the growth of the axial and appendicular skeletons, and regional segment length is a better predictor of site-specific bone mass than stature. Copyright 2005 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Cheng, Yali; He, Chuanqi; Rao, Gang; Yan, Bing; Lin, Aiming; Hu, Jianmin; Yu, Yangli; Yao, Qi
2018-01-01
The Cenozoic graben systems around the tectonically stable Ordos Block, central China, are considered ideal places for investigating active deformation within continental rifts; an example is the Weihe Graben at the southern margin, with high historical seismicity (e.g., the 1556 M 8.5 Huaxian great earthquake). However, previous investigations have mostly focused on the active structures in the eastern and northern parts of this graben. By contrast, in the southwest, tectonic activity along the northern margin of the Qinling Mountains has not yet been systematically investigated. In this study, based on digital elevation models (DEMs), we carried out geomorphological analysis to evaluate the relative tectonic activity along the whole South Border Fault (SBF). On the basis of field observations, high-resolution DEMs acquired by small unmanned aerial vehicles (sUAVs) using structure-from-motion techniques, and radiocarbon (14C) age dating, we demonstrate that: 1) tectonic activity along the SBF changes along strike, being higher in the eastern sector; 2) seven major segment boundaries have been identified, where the fault changes its strike and has lower tectonic activity; 3) the fault segment between the cities of Huaxian and Huayin, characterized by almost pure normal slip, has been active during the Holocene. We suggest that these findings provide a basis for further investigation of the seismic risk in the densely populated Weihe Graben. Table S2. The values and classification of geomorphic indices obtained in this study. Fig. S1. Morphological features of the stream long profiles (Nos. 1-75) and corresponding SLK values. Fig. S2. Comparison of geomorphological parameters acquired from different DEMs (90-m SRTM and 30-m ASTER GDEM): (a) HI values; (b) HI linear regression; (c) mean slope of drainage basin; (d) mean slope linear regression.
The prognostic value of early repolarization with ST-segment elevation in African Americans.
Perez, Marco V; Uberoi, Abhimanyu; Jain, Nikhil A; Ashley, Euan; Turakhia, Mintu P; Froelicher, Victor
2012-04-01
Increased prevalence of classic early repolarization, defined as ST-segment elevation (STE) in the absence of acute myocardial injury, in African Americans is well established. The prognostic value of this pattern in different ethnicities remains controversial. Our objective was to measure the association between early repolarization and cardiovascular mortality in African Americans. The resting electrocardiograms of 45,829 patients were evaluated at the Palo Alto Veterans Affairs Hospital. Subjects with inpatient status or electrocardiographic evidence of acute myocardial infarction were excluded, leaving 29,281 subjects. ST-segment elevation, defined as an elevation of >0.1 mV at the end of the QRS, was electronically flagged and visually adjudicated by 3 observers blinded to outcomes. The association between ethnicity and early repolarization was measured by using multivariate logistic regression. We analyzed associations between early repolarization and cardiovascular mortality by using Cox proportional hazards regression analysis. Subjects were 13% women and 13.3% African American, with an average age of 55 years, and were followed for an average of 7.6 years, resulting in 1995 cardiovascular deaths. There were 479 subjects with lateral STE and 185 with inferior STE. After adjustment for age, sex, heart rate, and coronary artery disease, African American ethnicity was associated with lateral or inferior STE (odds ratio 3.1; P = .0001). While lateral or inferior STE in non-African Americans was independently associated with cardiovascular death (hazard ratio 1.6; P = .02), it was not associated with cardiovascular death in African Americans (hazard ratio 0.75; P = .50). Although early repolarization is more prevalent in African Americans, it is not predictive of cardiovascular death in this population and may represent a distinct electrophysiologic phenomenon. Copyright © 2012 Heart Rhythm Society. All rights reserved.
A deep learning approach for the analysis of masses in mammograms with minimal user intervention.
Dhungel, Neeraj; Carneiro, Gustavo; Bradley, Andrew P
2017-04-01
We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long-standing problem due to the low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in terms of shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with sensitivity (Se) of 0.98 and specificity (Sp) of 0.7. Copyright © 2017 Elsevier B.V. All rights reserved.
Satilmisoglu, Muhammet Hulusi; Ozyilmaz, Sinem Ozbay; Gul, Mehmet; Ak Yildirim, Hayriye; Kayapinar, Osman; Gokturk, Kadir; Aksu, Huseyin; Erkanli, Korhan; Eksik, Abdurrahman
2017-01-01
Purpose: To determine the predictive values of D-dimer assay, Global Registry of Acute Coronary Events (GRACE) and Thrombolysis in Myocardial Infarction (TIMI) risk scores for adverse outcome in patients with non-ST-segment elevation myocardial infarction (NSTEMI). Patients and methods: A total of 234 patients (mean age: 57.2±11.7 years, 75.2% males) hospitalized with NSTEMI were included. Data on D-dimer assay, GRACE and TIMI risk scores were recorded. Logistic regression analysis was conducted to determine the risk factors predicting increased mortality. Results: Median D-dimer levels were 349.5 (48.0–7,210.0) ng/mL, the average TIMI score was 3.2±1.2 and the GRACE score was 90.4±27.6, with high GRACE scores (>118) in 17.5% of patients. The GRACE score was correlated positively with both the D-dimer assay (r=0.215, P=0.01) and TIMI scores (r=0.504, P=0.000). Multivariate logistic regression analysis revealed that higher creatinine levels (odds ratio =18.465, 95% confidence interval: 1.059–322.084, P=0.046) constituted the only significant predictor of increased mortality risk, with no predictive value of age, D-dimer assay, ejection fraction, glucose, hemoglobin A1c, sodium, albumin or total cholesterol levels for mortality. Conclusion: Serum creatinine levels constituted the sole independent determinant of mortality risk, with no significant value of D-dimer assay, GRACE or TIMI scores for predicting the risk of mortality in NSTEMI patients. PMID:28408834
Lopilly Park, H-Y; Jung, K I; Park, C K
2012-09-01
To investigate serial changes of the Ahmed glaucoma valve (AGV) implant tube in the anterior chamber by anterior segment optical coherence tomography (AS-OCT). Patients who had received AGV implantation without complications (n=48) were included in this study. Each patient received follow-up examinations including AS-OCT at days 1 and 2, week 1, and months 1, 3, 6, and 12. Tube parameters were defined to measure its length and position. The intracameral length of the tube was from the tip of the bevel-edged tube to the sclerolimbal junction. The distance between the extremity of the tube and the anterior iris surface (T-I distance), and the angle between the tube and the posterior endothelial surface of the cornea (T-C angle) were defined. Factors that were related to tube parameters were analysed by multiple regression analysis. The mean change in tube length was -0.20 ± 0.17 mm, indicating that the tube length shortened from the initial inserted length. The mean T-I distance change was 0.11 ± 0.07 mm and the mean T-C angle change was -6.7 ± 5.6°. Uveitic glaucoma and glaucoma following penetrating keratoplasty showed the most changes in tube parameters. By multiple regression analysis, diagnosis of glaucoma including uveitic glaucoma (P=0.049) and glaucoma following penetrating keratoplasty (P=0.008) were related to the change of intracameral tube length. These results suggest that the length and position of the AGV tube changes after surgery. The change was prominent in uveitic glaucoma and glaucoma following penetrating keratoplasty.
Automatic initialization and quality control of large-scale cardiac MRI segmentations.
Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F
2018-01-01
Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Kim, Eun Kyoung; Park, Hae-Young Lopilly; Park, Chan Kee
2017-01-01
To evaluate the changes of retinal nerve fiber layer (RNFL), ganglion cell layer (GCL), inner plexiform layer (IPL), and ganglion cell-inner plexiform layer (GCIPL) thicknesses and compare structure-function relationships of 4 retinal layers using spectral-domain optical coherence tomography (SD-OCT) in the macular region of glaucoma patients. In this cross-sectional study, a total of 85 eyes with pre-perimetric to advanced glaucoma and 26 normal controls were enrolled. The glaucomatous eyes were subdivided into three groups according to the severity of visual field defect: a preperimetric glaucoma group, an early glaucoma group, and a moderate to advanced glaucoma group. RNFL, GCL, IPL, and GCIPL thicknesses were measured at the level of the macula by the Spectralis (Heidelberg Engineering, Heidelberg, Germany) SD-OCT with automated segmentation software. For functional evaluation, corresponding mean sensitivity (MS) values were measured using 24-2 standard automated perimetry (SAP). RNFL, GCL, IPL, and GCIPL thicknesses were significantly different among the 4 groups (P < .001). Macular structure losses were positively correlated with the MS values of the 24-2 SAP for RNFL, GCL, IPL, and GCIPL (R = 0.553, 0.636, 0.648 and 0.646, respectively, P < .001). In regression analysis, IPL and GCIPL thicknesses showed stronger association with the corresponding MS values of 24-2 SAP compared with RNFL and GCL thicknesses (R2 = 0.420, P < .001 for IPL; R2 = 0.417, P < .001 for GCIPL thickness). Segmented IPL thickness was significantly associated with the degree of glaucoma. Segmental analysis of the inner retinal layer including the IPL in the macular region may provide valuable information for evaluating glaucoma.
Atuk, Oğuz; Özmen, M Utku
2017-05-01
The current tobacco taxation scheme in Turkey, a mix of high ad valorem tax and low specific tax, contains incentives for firms and consumers to change pricing and consumption patterns, respectively. The association between tax structure and price and tax revenue stability has not been studied in detail with micro data containing price segment information. In this study, we analyse whether incentives for firms and consumers undermine the effectiveness of tax policy in reducing consumption. We calculate alternative taxation scheme outcomes using differing ad valorem and specific tax rates through simulation analysis. We also estimate the price elasticity of demand using detailed price and volume statistics between segments via regression analysis. A very high ad valorem rate provides strong incentives to firms to reduce prices. Therefore, this sort of tax strategy may induce even more consumption despite its initial aim of discouraging consumption. While higher prices dramatically reduce consumption of economy and medium price segment cigarettes, demand for premium segment cigarettes is found to be highly price-inelastic. The current tax scheme, based on both ad valorem and specific components, introduces various incentives to firms as well as to consumers which reduce the effectiveness of the tax policy. Therefore, on the basis of our theoretical predictions, an appropriate tax scheme should involve a balanced combination of ad valorem and specific rates, away from the extreme (ad valorem- or specific-dominant) cases, to enhance the effectiveness of tax policy for curbing consumption. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
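The elasticity estimation described above can be illustrated with a minimal log-log regression sketch. The price and volume series below are simulated with an assumed elasticity of -1.3, purely for illustration; they are not the study's micro data.

```python
import numpy as np

# Simulated price/volume observations for one hypothetical price segment.
rng = np.random.default_rng(3)
price = rng.uniform(5.0, 12.0, 120)                         # pack price, arbitrary units
volume = np.exp(8.0 - 1.3 * np.log(price)                   # true elasticity: -1.3
                + rng.normal(0.0, 0.05, 120))               # small demand noise

# In a log-log specification, the slope is the price elasticity of demand.
elasticity, log_intercept = np.polyfit(np.log(price), np.log(volume), 1)
print(f"estimated price elasticity: {elasticity:.2f}")
```

A segment with elasticity near zero (as the abstract reports for premium cigarettes) would show a flat log-log relationship, so tax-driven price rises would barely reduce its volume.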
Kok, Annette M; Nguyen, V Lai; Speelman, Lambert; Brands, Peter J; Schurink, Geert-Willem H; van de Vosse, Frans N; Lopata, Richard G P
2015-05-01
Abdominal aortic aneurysms (AAAs) are local dilations that can lead to a fatal hemorrhage when ruptured. Wall stress analysis of AAAs is a novel tool that has proven high potential to improve risk stratification. Currently, wall stress analysis of AAAs is based on computed tomography (CT) and magnetic resonance imaging; however, three-dimensional (3D) ultrasound (US) has great advantages over CT and magnetic resonance imaging in terms of costs, speed, and lack of radiation. In this study, the feasibility of 3D US as input for wall stress analysis is investigated. Second, 3D US-based wall stress analysis was compared with CT-based results. The 3D US and CT data were acquired in 12 patients (diameter, 35-90 mm). US data were segmented manually and compared with automatically acquired CT geometries by calculating the similarity index and Hausdorff distance. Wall stresses were simulated at P = 140 mm Hg and compared between both modalities. The similarity index of US vs CT was 0.75 to 0.91 (n = 12), with a median Hausdorff distance ranging from 4.8 to 13.9 mm, with the higher values found at the proximal and distal sides of the AAA. Wall stresses were in accordance with literature, and a good agreement was found between US- and CT-based median stresses and interquartile stresses, which was confirmed by Bland-Altman and regression analysis (n = 8). Wall stresses based on US were typically higher (+23%), caused by geometric irregularities due to the registration of several 3D volumes and manual segmentation. In future work, an automated US registration and segmentation approach is the essential point of improvement before pursuing large-scale patient studies. This study is a first step toward US-based wall stress analysis, which would be the modality of choice to monitor wall stress development over time because no ionizing radiation and contrast material are involved. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Complete regression of myocardial involvement associated with lymphoma following chemotherapy.
Vinicki, Juan Pablo; Cianciulli, Tomás F; Farace, Gustavo A; Saccheri, María C; Lax, Jorge A; Kazelian, Lucía R; Wachs, Adolfo
2013-09-26
Cardiac involvement as an initial presentation of malignant lymphoma is a rare occurrence. We describe the case of a 26-year-old man who had initially been diagnosed with myocardial infiltration on an echocardiogram, presenting with a testicular mass and unilateral peripheral facial paralysis. On admission, the electrocardiogram (ECG) revealed negative T-waves in all leads and ST-segment elevation in the inferior leads. On two-dimensional echocardiography, there was infiltration of the pericardium with mild effusion, infiltrative thickening of the aortic walls, both atria and the interatrial septum, and mildly depressed systolic function of both ventricles. An axillary biopsy was performed and reported as a T-cell lymphoblastic lymphoma (T-LBL). Following the diagnosis and staging, chemotherapy was started. Twenty-two days after finishing the first cycle of chemotherapy, the ECG showed regression of T-wave changes in all leads and normalization of the ST-segment elevation in the inferior leads. A follow-up two-dimensional echocardiogram confirmed regression of the myocardial infiltration. This case report illustrates a lymphoma presenting with testicular mass, unilateral peripheral facial paralysis and myocardial involvement, and demonstrates that regression of infiltration can be achieved by intensive chemotherapy treatment. To our knowledge, there are no reported cases of T-LBL presenting as a testicular mass and unilateral peripheral facial paralysis, with complete regression of myocardial involvement.
The Relationship Between Surface Curvature and Abdominal Aortic Aneurysm Wall Stress.
de Galarreta, Sergio Ruiz; Cazón, Aitor; Antón, Raúl; Finol, Ender A
2017-08-01
The maximum diameter (MD) criterion is the most important factor when predicting risk of rupture of abdominal aortic aneurysms (AAAs). An elevated wall stress has also been linked to a high risk of aneurysm rupture, yet it is uncommon in clinical practice to compute AAA wall stress. The purpose of this study is to assess whether other characteristics of the AAA geometry are statistically correlated with wall stress. Using in-house segmentation and meshing algorithms, 30 patient-specific AAA models were generated for finite element analysis (FEA). These models were subsequently used to estimate wall stress and maximum diameter and to evaluate the spatial distributions of wall thickness, cross-sectional diameter, mean curvature, and Gaussian curvature. Data analysis consisted of statistical correlations of the aforementioned geometry metrics with wall stress for the 30 AAA inner and outer wall surfaces. In addition, a linear regression analysis was performed with all the AAA wall surfaces to quantify the relationship of the geometric indices with wall stress. These analyses indicated that while all the geometry metrics have statistically significant correlations with wall stress, the local mean curvature (LMC) exhibits the highest average Pearson's correlation coefficient for both inner and outer wall surfaces. The linear regression analysis revealed coefficients of determination for the outer and inner wall surfaces of 0.712 and 0.516, respectively, with LMC having the largest effect on the linear regression equation with wall stress. This work underscores the importance of evaluating AAA mean wall curvature as a potential surrogate for wall stress.
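The correlation-and-regression analysis described above can be sketched as follows. The curvature and stress values here are simulated with an assumed linear relationship; they are illustrative stand-ins for the per-node surface data, not the study's patient-specific FEA results.

```python
import numpy as np

# Simulated per-node values on one hypothetical AAA wall surface.
rng = np.random.default_rng(2)
curvature = rng.normal(0.10, 0.03, 200)                 # local mean curvature, 1/mm
stress = 50.0 + 400.0 * curvature + rng.normal(0.0, 5.0, 200)  # wall stress, kPa

# Pearson correlation between the geometric index and wall stress.
r = np.corrcoef(curvature, stress)[0, 1]

# Linear regression of stress on curvature; R^2 quantifies the fit.
slope, intercept = np.polyfit(curvature, stress, 1)
r_squared = r ** 2
print(f"Pearson r = {r:.2f}, R^2 = {r_squared:.2f}")
```

In the paper's setting the same two statistics are computed for each geometry metric (wall thickness, diameter, mean and Gaussian curvature), and the metric with the highest average r is the candidate surrogate for wall stress.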
Ho Hoang, Khai-Long; Mombaur, Katja
2015-10-15
Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. No such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments, as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Estimation of total Length of Femur From Its Fragments in South Indian Population
Solan, Shweta; Kulkarni, Roopa
2013-01-01
Introduction: Establishing the identity of a deceased person assumes great medicolegal importance. Stature is one of the criteria used to establish the identity of a person, and estimating stature requires the length of long bones. Aims and Objectives: To determine the lengths of femoral fragments and to compare them with the total length of the femur in a south Indian population, which will help to estimate the stature of the individual using standard regression formulae. Material and Methods: A total of 150 adult, fully ossified, dry processed femora (72 left and 78 right) were studied. Each femur was divided into five segments using predetermined points. The lengths of the five segments and the maximum length of the femur were measured to the nearest millimeter. Values were recorded in cm (mean±S.D.), and the mean total length of femora on the left and right sides was measured. The proportion of each segment to the total length was also calculated, which will help in stature estimation using standard regression formulae. Results: The mean total length of femora was 43.54 ± 2.7 cm on the left side and 43.42 ± 2.4 cm on the right side. The measurements of segments 1-5 were 8.06 ± 0.71, 8.25 ± 1.24, 10.35 ± 2.21, 13.94 ± 1.93 and 2.77 ± 0.53 cm on the left side and 8.09 ± 0.70, 8.30 ± 1.34, 10.44 ± 1.91, 13.50 ± 1.54 and 3.09 ± 0.41 cm on the right side. Conclusion: The sample size was 150 (72 left and 78 right), and the 'p' value of all the segments was significant (<0.001). When comparison was made between segments of right and left femora, the 'p' value of segment 5 was <0.001. Comparison between different segments of the femur showed significance in all segments. PMID:24298451
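The fragment-to-total-length step described above amounts to a simple linear regression. The sketch below uses simulated fragment and total lengths (the slope, intercept, and noise level are invented for illustration, not the study's published regression formulae):

```python
import numpy as np

# Simulated calibration sample: segment-1 fragment lengths vs. total femoral
# length, both in cm. The linear relation and its coefficients are assumptions.
rng = np.random.default_rng(1)
fragment = rng.normal(8.1, 0.7, 50)                          # fragment lengths, cm
total = 18.0 + 3.15 * fragment + rng.normal(0.0, 0.8, 50)    # total lengths, cm

# Derive the regression formula total = intercept + slope * fragment.
slope, intercept = np.polyfit(fragment, total, 1)

# Apply it to a new 8.0 cm fragment recovered at a scene.
estimate = intercept + slope * 8.0
print(f"estimated total femoral length: {estimate:.1f} cm")
```

In practice the estimated total length would then be entered into a second, population-specific stature regression formula, as the abstract describes.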
Baeßler, Bettina; Schaarschmidt, Frank; Dick, Anastasia; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C
2015-12-23
The purpose of the present study was to investigate the diagnostic value of T2-mapping in acute myocarditis (ACM) and to define cut-off values for edema detection. Cardiovascular magnetic resonance (CMR) data of 31 patients with ACM were retrospectively analyzed. 30 healthy volunteers (HV) served as a control. In addition to the routine CMR protocol, T2-mapping data were acquired at 1.5 T using a breath-hold gradient-spin-echo T2-mapping sequence in six short-axis slices. T2-maps were segmented according to the 16-segment AHA model, and segmental T2 values as well as the segmental pixel standard deviation (SD) were analyzed. Mean differences of global myocardial T2 or pixel-SD between HV and ACM patients were only small, lying within the normal range of HV. In contrast, variation of segmental T2 values and pixel-SD was much larger in ACM patients compared to HV. In random forests and multiple logistic regression analyses, the combination of the highest segmental T2 value within each patient (maxT2) and the mean absolute deviation (MAD) of log-transformed pixel-SD (madSD) over all 16 segments within each patient proved to be the best discriminators between HV and ACM patients, with an AUC of 0.85 in ROC-analysis. In classification trees, a combined cut-off of 0.22 for madSD and of 68 ms for maxT2 resulted in 83% specificity and 81% sensitivity for detection of ACM. The proposed cut-off values for maxT2 and madSD in the setting of ACM allow edema detection with high sensitivity and specificity and therefore have the potential to overcome the hurdles of T2-mapping for its integration into clinical routine.
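A two-threshold rule like the one above can be sketched in a few lines. The cut-offs (madSD > 0.22, maxT2 > 68 ms) come from the abstract, but the OR combination logic and the toy patient values below are assumptions for illustration; the paper's actual decision rule is a classification tree.

```python
# Flag a scan as suspicious for acute myocarditis if either marker exceeds
# its cut-off (assumed OR rule; the published rule is a classification tree).
def flag_acm(max_t2_ms: float, mad_sd: float) -> bool:
    return max_t2_ms > 68.0 or mad_sd > 0.22

# Invented example cases: (maxT2 in ms, madSD, true label: has ACM).
cases = [(72.0, 0.30, True), (60.0, 0.10, False), (69.5, 0.15, True),
         (65.0, 0.25, True), (62.0, 0.18, False)]

tp = sum(1 for t2, m, acm in cases if acm and flag_acm(t2, m))
tn = sum(1 for t2, m, acm in cases if not acm and not flag_acm(t2, m))
sensitivity = tp / sum(1 for *_, acm in cases if acm)
specificity = tn / sum(1 for *_, acm in cases if not acm)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

On real data the two error rates trade off against each other, which is why the authors report the cut-off pair together with its 81%/83% sensitivity/specificity.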
Sánchez, Diego P; Guillén, José J; Torres, Alberto M; Arense, Julián J; López, Ángel; Sánchez, Fernando I
2015-01-01
In the past few decades, increasing pharmaceutical expenditures in Spain and other western countries led to the adoption of reforms in order to reduce this trend. The aim of our study was to analyze whether reforms of the pharmaceutical reimbursement scheme in Spain have been associated with changes in the volume and trend of pharmaceutical consumption. Retrospective observational study in the Region of Murcia, covering prescription drugs in primary care and outpatient consultations, based on records of medicines prescribed between January 1, 2008 and December 31, 2013. Segmented regression analysis of interrupted time-series of prescription drug consumption was performed. Dispensing of all five therapeutic classes fell immediately after the co-payment changes. The segmented regression model suggested that per-patient drug consumption in pensioners may have decreased by about 6.76% (95% CI: -8.66% to -5.19%) in the twelve months after the reform, compared with the absence of such a policy. Furthermore, the slope of the series of consumption increased from 6.08 (P<.001) to 12.17 (P<.019). The implementation of copayment policies could be associated with a significant decrease in the level of prescribed drug use in the Murcia Region, but this effect seems to have been only temporary in the five therapeutic groups analyzed, since almost simultaneously there was an increase in the growth trend. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.
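The segmented regression model used in studies like this one regresses the outcome on time, a post-intervention indicator (level change), and time-since-intervention (slope change). The sketch below fits that model to a simulated monthly series; the effect sizes and noise level are invented for illustration, not the Murcia data.

```python
import numpy as np

# Simulated monthly consumption: 24 pre- and 24 post-intervention points,
# with an assumed level drop of -8.0 and slope change of -0.6 at the reform.
rng = np.random.default_rng(0)
n_pre, n_post = 24, 24
t = np.arange(n_pre + n_post, dtype=float)          # months since study start
level = (t >= n_pre).astype(float)                  # 1 after the intervention
trend = np.where(t >= n_pre, t - n_pre + 1, 0.0)    # months since intervention
y = 100 + 0.5 * t - 8.0 * level - 0.6 * trend + rng.normal(0.0, 1.5, t.size)

# Segmented regression: y = b0 + b1*t + b2*level + b3*trend
X = np.column_stack([np.ones_like(t), t, level, trend])
(b0, b1, b2, b3), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"pre-slope={b1:.2f}, level change={b2:.2f}, slope change={b3:.2f}")
```

Testing b2 (immediate level change) and b3 (change in trend) against zero is what distinguishes this design from the naive single-slope regression criticized in the head abstract; in practice one would also check residuals for autocorrelation.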
Inferring Aquifer Transmissivity from River Flow Data
NASA Astrophysics Data System (ADS)
Trichakis, Ioannis; Pistocchi, Alberto
2016-04-01
Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to stream flow. Under the assumption that base-flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, identified on the basis of the 1:1,500,000 scale Hydrogeological Map of Europe. To this end, master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. First, the model analyses the stream flow time-series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity of an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post-processing involves the calculation of geometrical parameters of the watershed in a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus calculating many different values of the recession index for each station. After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large-scale (e.g. continental) groundwater modelling. The above procedure allowed us to calculate values of transmissivity for a large share of European aquifers, ranging from Tmin = 4.13E-04 m²/d to Tmax = 8.12E+03 m²/d, with an average value Taverage = 9.65E+01 m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European two-dimensional shallow groundwater flow model.
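The per-segment regression step (best linear fit between time and the logarithm of flow) can be illustrated with a minimal sketch. The recession segment below is a synthetic exponential decay, not RECESS output; the recession index is taken here as days per log cycle of flow decline, which is one common convention.

```python
import numpy as np

# Synthetic recession segment: Q(t) = Q0 * exp(-a*t), with assumed parameters.
t = np.arange(30.0)                      # days of continuous recession
Q0, a = 12.0, 0.05                       # initial flow (m^3/s) and decay rate (1/day)
Q = Q0 * np.exp(-a * t)

# Linear fit on the semi-log plot: log10(Q) vs. time.
slope, intercept = np.polyfit(t, np.log10(Q), 1)

# Recession index K: days for flow to decline by one log cycle.
K = -1.0 / slope
print(f"recession index K = {K:.1f} days/log cycle")
```

In the pan-European workflow, one such fit is produced per qualifying segment, and the many per-segment indices for a station are then combined into that station's master recession curve.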
Alterations of the tunica vasculosa lentis in the rat model of retinopathy of prematurity.
Favazza, Tara L; Tanimoto, Naoyuki; Munro, Robert J; Beck, Susanne C; Garcia Garrido, Marina; Seide, Christina; Sothilingam, Vithiyanjali; Hansen, Ronald M; Fulton, Anne B; Seeliger, Mathias W; Akula, James D
2013-08-01
To study the relationship between retinal and tunica vasculosa lentis (TVL) disease in retinopathy of prematurity (ROP). Although the clinical hallmark of ROP is abnormal retinal blood vessels, the vessels of the anterior segment, including the TVL, are also altered. ROP was induced in Long-Evans pigmented and Sprague Dawley albino rats; room-air-reared (RAR) rats served as controls. Then, fluorescein angiographic images of the TVL and retinal vessels were serially obtained with a scanning laser ophthalmoscope near the height of retinal vascular disease, ~20 days of age, and again at 30 and 64 days of age. Additionally, electroretinograms (ERGs) were obtained prior to the first imaging session. The TVL images were analyzed for percent coverage of the posterior lens. The tortuosity of the retinal arterioles was determined using Retinal Image multiScale Analysis (Gelman et al. in Invest Ophthalmol Vis Sci 46:4734-4738, 2005). In the youngest ROP rats, the TVL was dense, while in RAR rats, it was relatively sparse. By 30 days, the TVL in RAR rats had almost fully regressed, while in ROP rats, it was still pronounced. By the final test age, the TVL had completely regressed in both ROP and RAR rats. In parallel, the tortuous retinal arterioles in ROP rats resolved with increasing age. ERG components indicating postreceptoral dysfunction, the b-wave, and oscillatory potentials were attenuated in ROP rats. These findings underscore the retinal vascular abnormalities and, for the first time, show abnormal anterior segment vasculature in the rat model of ROP. There is delayed regression of the TVL in the rat model of ROP. This demonstrates that ROP is a disease of the whole eye.
New method for calculating a mathematical expression for streamflow recession
Rutledge, Albert T.
1991-01-01
An empirical method has been devised to calculate the master recession curve, which is a mathematical expression for streamflow recession during times of negligible direct runoff. The method is based on the assumption that the storage-delay factor, which is the time per log cycle of streamflow recession, varies linearly with the logarithm of streamflow. The resulting master recession curve can be nonlinear. The method can be executed by a computer program that reads a data file of daily mean streamflow, then allows the user to select several near-linear segments of streamflow recession. The storage-delay factor for each segment is one of the coefficients of the equation that results from linear least-squares regression. Using results for each recession segment, a mathematical expression of the storage-delay factor as a function of the log of streamflow is determined by linear least-squares regression. The master recession curve, which is a second-order polynomial expression for time as a function of log of streamflow, is then derived using the coefficients of this function.
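The two regression steps can be sketched numerically. The storage-delay factors below are hypothetical, not Rutledge's data; the second step integrates the fitted linear storage-delay relation to obtain the second-order polynomial for time as a function of the log of streamflow.

```python
import numpy as np

# Hypothetical near-linear recession segments: mean log10 of flow and the
# storage-delay factor K (time, in days, per log cycle of recession) of each
logQ_seg = np.array([2.0, 1.5, 1.0, 0.5])
K_seg = np.array([40.0, 50.0, 60.0, 70.0])

# Step 1: storage-delay factor as a linear function of log10(flow)
b1, b0 = np.polyfit(logQ_seg, K_seg, 1)      # K(logQ) = b0 + b1 * logQ

# Step 2: the master recession curve follows by integrating dt/dlogQ = -K(logQ),
# giving a second-order polynomial for time as a function of log10(flow)
logQ_start = logQ_seg[0]
def mrc_time(logQ):
    """Time (days) since the start of the master recession curve."""
    poly = lambda x: -(b0 * x + 0.5 * b1 * x**2)
    return poly(logQ) - poly(logQ_start)
```

With these numbers, one log cycle of decline from logQ = 2 to logQ = 1 takes 50 days, the mean of the storage-delay factors over that interval, which is what the integral of the linear K relation should give.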
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
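The core idea, representing a non-stationary series by a set of local linear (AR) models fitted to segments, can be sketched minimally as below. This assumes fixed-length segments rather than the self-organised clustering and neural-gas structure of the actual SOMAR network.

```python
import numpy as np

def fit_ar(segment, p=2):
    """Least-squares fit of a local AR(p) model; returns the p lag coefficients."""
    X = np.column_stack([segment[i:len(segment) - p + i] for i in range(p)])
    y = segment[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0.0, 20.0, 400)) + 0.01 * rng.standard_normal(400)

# A dynamic set of local linear models: one AR(2) model per segment
segments = np.array_split(series, 4)
local_models = [fit_ar(seg) for seg in segments]
```

In the full method the segment boundaries and the assignment of segments to models are learned jointly, so similar regimes share one local model instead of each fixed window getting its own.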
Interrupted time-series analysis of regulations to reduce paracetamol (acetaminophen) poisoning.
Morgan, Oliver W; Griffiths, Clare; Majeed, Azeem
2007-04-01
Paracetamol (acetaminophen) poisoning is the leading cause of acute liver failure in Great Britain and the United States. Successful interventions to reduce harm from paracetamol poisoning are needed. To achieve this, the government of the United Kingdom introduced legislation in 1998 limiting the pack size of paracetamol sold in shops. Several studies have reported recent decreases in fatal poisonings involving paracetamol. We use interrupted time-series analysis to evaluate whether the recent fall in the number of paracetamol deaths differs from trends in fatal poisoning involving aspirin, paracetamol compounds, antidepressants, or nondrug poisoning suicide. We calculated directly age-standardised mortality rates for paracetamol poisoning in England and Wales from 1993 to 2004. We used an ordinary least-squares regression model divided into pre- and postintervention segments at 1999. The model included a term for autocorrelation within the time series. We tested for changes in the level and slope between the pre- and postintervention segments. To assess whether observed changes in the time series were unique to paracetamol, we compared against poisoning deaths involving compound paracetamol (not covered by the regulations), aspirin, antidepressants, and nondrug poisoning suicide deaths. We made this comparison by calculating a ratio of each comparison series with paracetamol and applying a segmented regression model to the ratios. No change in the ratio level or slope indicated no difference compared to the control series. There were about 2,200 deaths involving paracetamol. The age-standardised mortality rate rose from 8.1 per million in 1993 to 8.8 per million in 1997, subsequently falling to about 5.3 per million in 2004. After the regulations were introduced, deaths dropped by 2.69 per million (p = 0.003).
Trends in the age-standardised mortality rate for paracetamol compounds, aspirin, and antidepressants were broadly similar to paracetamol, increasing until 1997 and then declining. Nondrug poisoning suicide also declined during the study period, but was highest in 1993. The segmented regression models showed that the age-standardised mortality rate for compound paracetamol dropped less after the regulations (p = 0.012) but declined more rapidly afterward (p = 0.031). However, age-standardised rates for aspirin and antidepressants fell in a similar way to paracetamol after the regulations. Nondrug poisoning suicide declined at a similar rate to paracetamol after the regulations were introduced. Introduction of regulations to limit availability of paracetamol coincided with a decrease in paracetamol-poisoning mortality. However, fatal poisoning involving aspirin, antidepressants, and to a lesser degree, paracetamol compounds, also showed similar trends. This raises the question whether the decline in paracetamol deaths was due to the regulations or was part of a wider trend in decreasing drug-poisoning mortality. We found little evidence to support the hypothesis that the 1998 regulations limiting pack size resulted in a greater reduction in poisoning deaths involving paracetamol than occurred for other drugs or nondrug poisoning suicide.
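The segmented model described here, a pre-intervention level and trend plus change-in-level and change-in-slope terms at the 1999 break, can be sketched with ordinary least squares. The yearly rates below are illustrative values loosely based on the figures quoted above, not the study data, and the autocorrelation term is omitted for brevity.

```python
import numpy as np

# Illustrative age-standardised mortality rates per million, 1993-2004,
# with the series divided into segments at the 1999 intervention year
years = np.arange(1993, 2005)
rate = np.array([8.1, 8.3, 8.5, 8.7, 8.8, 8.6, 6.0, 5.9, 5.7, 5.6, 5.4, 5.3])

t = (years - years[0]).astype(float)       # time since start of series
post = (years >= 1999).astype(float)       # post-intervention indicator
t_post = post * (years - 1999)             # time since the intervention

# Design matrix: intercept, pre-intervention slope, level change, slope change
X = np.column_stack([np.ones_like(t), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
intercept, pre_slope, level_change, slope_change = beta
```

Testing `level_change` and `slope_change` against zero (with standard errors adjusted for autocorrelation) is what yields the kind of p-values reported in the abstract; on these illustrative numbers the fitted level change is negative, matching the post-1998 drop.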
Preference mapping of dulce de leche commercialized in Brazilian markets.
Gaze, L V; Oliveira, B R; Ferrao, L L; Granato, D; Cavalcanti, R N; Conte Júnior, C A; Cruz, A G; Freitas, M Q
2015-03-01
Dulce de leche samples available in the Brazilian market were submitted to sensory profiling by quantitative descriptive analysis and acceptance test, as well as sensory evaluation using the just-about-right scale and purchase intent. External preference mapping and the ideal sensory characteristics of dulce de leche were determined. The results were also evaluated by principal component analysis, hierarchical cluster analysis, partial least squares regression, artificial neural networks, and logistic regression. Overall, significant product acceptance was related to intermediate scores of the sensory attributes in the descriptive test, and this trend was observed even after consumer segmentation. The results obtained by sensometric techniques showed that optimizing an ideal dulce de leche from the sensory standpoint is a multidimensional process, with necessary adjustments on the appearance, aroma, taste, and texture attributes of the product for better consumer acceptance and purchase. The optimum dulce de leche was characterized by high scores for the attributes sweet taste, caramel taste, brightness, color, and caramel aroma in accordance with the preference mapping findings. In industrial terms, this means changing the parameters used in the thermal treatment and quantitative changes in the ingredients used in formulations.
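Principal component analysis, the first of the sensometric techniques listed, reduces the sample-by-attribute score matrix to a few components on which preference maps are drawn. A minimal sketch on random stand-in scores follows; the sample count, attribute set, and 0-9 scale are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in sensory profile: 10 dulce de leche samples x 5 attributes
# (e.g. sweet taste, caramel taste, brightness, color, caramel aroma), 0-9 scale
scores = rng.uniform(0.0, 9.0, size=(10, 5))

# PCA via singular value decomposition of the mean-centred matrix
centred = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
pc_scores = U * S                          # sample coordinates on the components
explained = S**2 / np.sum(S**2)            # proportion of variance per component
```

In external preference mapping, consumer acceptance scores are then regressed onto the first few columns of `pc_scores` to locate the ideal product in the sensory space.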
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahig, Houda; Simard, Dany; Létourneau, Laurent
Purpose: To determine the incidence of pseudoprogression (PP) after spine stereotactic body radiation therapy based on a detailed and quantitative assessment of magnetic resonance imaging (MRI) morphologic tumor alterations, and to identify predictive factors distinguishing PP from local recurrence (LR). Methods and Materials: A retrospective analysis of 35 patients with 49 spinal segments treated with spine stereotactic body radiation therapy, from 2009 to 2014, was conducted. The median number of follow-up MRI studies was 4 (range, 2-7). The gross tumor volumes (GTVs) within each of the 49 spinal segments were contoured on the pretreatment and each subsequent follow-up T1- and T2-weighted MRI sagittal sequence. T2 signal intensity was reported as the mean intensity of voxels constituting each volume. LR was defined as persistent GTV enlargement on ≥2 serial MRI studies for ≥6 months or on pathologic confirmation. PP was defined as a GTV enlargement followed by stability or regression on subsequent imaging within 6 months. Kaplan-Meier analysis was used for estimation of actuarial local control, disease-free survival, and overall survival. Results: The median follow-up was 23 months (range, 1-39 months). PP was identified in 18% of treated segments (9 of 49) and LR in 29% (14 of 49). Earlier volume enlargement (5 months for PP vs 15 months for LR, P=.005), greater GTV to reference nonirradiated vertebral body T2 intensity ratio (+30% for PP vs −10% for LR, P=.005), and growth confined to 80% of the prescription isodose line (80% IDL) (8 of 9 PP cases vs 1 of 14 LR cases, P=.002) were associated with PP on univariate analysis. Multivariate analysis confirmed an earlier time to volume enlargement and growth within the 80% IDL as significant predictors of PP. LR involved the epidural space in all but 1 lesion, whereas PP was confined to the vertebral body in 7 of 9 cases. Conclusions: PP was observed in 18% of treated spinal segments.
Tumor growth confined to the 80% IDL and earlier time to tumor enlargement were predictive for PP.
An ex post facto evaluation framework for place-based police interventions.
Braga, Anthony A; Hureau, David M; Papachristos, Andrew V
2011-12-01
A small but growing body of research evidence suggests that place-based police interventions generate significant crime control gains. While place-based policing strategies have been adopted by a majority of U.S. police departments, very few agencies make a priori commitments to rigorous evaluations. Recent methodological developments were applied to conduct a rigorous ex post facto evaluation of the Boston Police Department's Safe Street Team (SST) hot spots policing program. A nonrandomized quasi-experimental design was used to evaluate the violent crime control benefits of the SST program at treated street segments and intersections relative to untreated street segments and intersections. Propensity score matching techniques were used to identify comparison places in Boston. Growth curve regression models were used to analyze violent crime trends at treatment places relative to control places. Units of analysis: using computerized mapping and database software, a micro-level place database of violent index crimes at all street segments and intersections in Boston was created. Yearly counts of violent index crimes between 2000 and 2009 at the treatment and comparison street segments and intersections served as the key outcome measure. The SST program was associated with a statistically significant reduction in violent index crimes at the treatment places relative to the comparison places without displacing crime into proximate areas. To overcome the challenges of evaluation in real-world settings, evaluators need to continuously develop innovative approaches that take advantage of new theoretical and methodological approaches.
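The matching step can be illustrated with a toy one-to-one nearest-neighbour match on a single baseline covariate; hypothetical crime counts stand in here for the estimated propensity scores the study matched on.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical yearly violent-crime counts at street segments and intersections
treated = rng.poisson(12, size=20).astype(float)      # SST hot spots
candidates = rng.poisson(6, size=200).astype(float)   # untreated places

# One-to-one nearest-neighbour matching without replacement on the covariate
available = candidates.copy()
matched = []
for b in treated:
    i = int(np.argmin(np.abs(available - b)))
    matched.append(available[i])
    available[i] = np.inf     # remove the matched comparison place from the pool
```

Growth curve regression models are then fitted to the yearly outcome trajectories of the treated places and their matched comparisons; matching on a full propensity score rather than one covariate is what makes the comparison credible in practice.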
Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.
2005-01-01
NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.
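The idea of a segmentation hierarchy can be illustrated in one dimension: start from single-pixel regions and repeatedly merge the most similar adjacent pair, recording each level of detail. The sketch below is a toy stand-in, not the Goddard algorithm, which operates on full images with more sophisticated merge criteria.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy 1-D "image": three flat zones plus noise
signal = np.concatenate([np.full(20, 1.0), np.full(20, 5.0), np.full(20, 9.0)])
signal += 0.05 * rng.standard_normal(signal.size)

# Bottom-up hierarchy: merge the most similar adjacent regions step by step
regions = [[i] for i in range(signal.size)]
levels = []                                   # region count at each hierarchy level
while len(regions) > 3:
    means = np.array([signal[r].mean() for r in regions])
    i = int(np.argmin(np.abs(np.diff(means))))
    regions[i] = regions[i] + regions[i + 1]  # merge the pair (i, i+1)
    del regions[i + 1]
    levels.append(len(regions))
```

Because every merge is recorded, the result is not one segmentation but a nested family of them; change monitoring then traces how a region's characteristics behave across these levels and across acquisition dates.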
Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A
2016-04-01
A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (two), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, weighted average of lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased 3.4° (9%) (range -2° to 7.4°) postoperatively (P < 0.001). Segmental lordosis increased, on average, by 4° from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°) (P < 0.001) in 1182 patients from 24 study cohorts. Simple linear regression revealed a significant relationship between preoperative lumbar lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively.
These results suggest that MIS approaches are able to impact regional and local segmental alignment and that preoperative patient factors can impact the extent of correction gained (preserving vs. restoring alignment). Level of Evidence: 4.
Impact of posterior communicating artery on basilar artery steno-occlusive disease.
Hong, J M; Choi, J Y; Lee, J H; Yong, S W; Bang, O Y; Joo, I S; Huh, K
2009-12-01
Acute brainstem infarction with basilar artery (BA) occlusive disease is the most fatal type of all ischaemic strokes. This report investigates the prognostic impact of the posterior communicating artery (PcoA) and whether its anatomy is a safeguard or not. Consecutive patients who had acute brainstem infarction with at least 50% stenosis of BA upon CT angiography (CTA) were studied. The configuration of PcoA was divided into two groups upon CTA: "textbook" group (invisible PcoA with good P1 and P2 segment) and "fetal-variant of PcoA" group (only visible PcoA with absent P1 segment). Baseline demographics, radiological findings and stroke mechanisms were analysed. A multiple regression analysis was performed to predict clinical outcome at 30 days (modified Rankin disability Scale (mRS
Koda, Masao; Mannoji, Chikato; Murakami, Masazumi; Kinoshita, Tomoaki; Hirayama, Jiro; Miyashita, Tomohiro; Eguchi, Yawara; Yamazaki, Masashi; Suzuki, Takane; Aramomi, Masaaki; Ota, Mitsutoshi; Maki, Satoshi; Takahashi, Kazuhisa; Furuya, Takeo
2016-01-01
Study Design Retrospective case-control study. Purpose To determine whether kissing spine is a risk factor for recurrence of sciatica after lumbar posterior decompression using a spinous process floating approach. Overview of Literature Kissing spine is defined by apposition and sclerotic change of the facing spinous processes as shown in X-ray images, and is often accompanied by marked disc degeneration and decrement of disc height. If kissing spine significantly contributes to weight bearing and the stability of the lumbar spine, trauma to the spinous process might induce a breakdown of lumbar spine stability after posterior decompression surgery in cases of kissing spine. Methods The present study included 161 patients who had undergone posterior decompression surgery for lumbar canal stenosis using a spinous process floating approach. We defined recurrence of sciatica as sciatica that had resolved after the initial surgery and then recurred. Kissing spine was defined as sclerotic change and the apposition of the spinous processes on a plain radiograph. Preoperative foraminal stenosis was determined by the decrease of perineural fat intensity detected by parasagittal T1-weighted magnetic resonance imaging. Preoperative percentage slip, segmental range of motion, and segmental scoliosis were analyzed in preoperative radiographs. Univariate analysis followed by stepwise logistic regression analysis determined factors independently associated with recurrence of sciatica. Results Stepwise logistic regression revealed kissing spine (p=0.024; odds ratio, 3.80) and foraminal stenosis (p<0.01; odds ratio, 17.89) as independent risk factors for the recurrence of sciatica after posterior lumbar spinal decompression with spinous process floating procedures for lumbar spinal canal stenosis. Conclusions When a patient shows kissing spine and concomitant subclinical foraminal stenosis at the affected level, we should sufficiently discuss the selection of an appropriate surgical procedure.
PMID:27994785
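The final modelling step, a logistic regression whose exponentiated coefficients are the reported odds ratios, can be sketched from scratch. The data below are simulated with assumed effect sizes (the true patient data are not available here), and plain gradient ascent stands in for the usual iteratively reweighted least squares.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 161
# Simulated binary risk factors and outcome (recurrence of sciatica)
kissing = rng.integers(0, 2, n).astype(float)
foraminal = rng.integers(0, 2, n).astype(float)
logit = -2.0 + np.log(3.8) * kissing + np.log(17.9) * foraminal  # assumed effects
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Logistic regression fitted by gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), kissing, foraminal])
beta = np.zeros(3)
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / n

odds_ratios = np.exp(beta[1:])   # odds ratios: kissing spine, foraminal stenosis
```

An odds ratio above 1 flags the factor as increasing recurrence risk; a stepwise procedure, as in the study, would additionally drop candidate predictors that do not improve the model.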
Caravaggi, Paolo; Leardini, Alberto; Giacomozzi, Claudia
2016-10-03
Plantar load can be considered as a measure of the foot's ability to transmit forces at the foot/ground, or foot/footwear, interface during ambulatory activities via the lower limb kinematic chain. While morphological and functional measures have been shown to be correlated with plantar load, no exhaustive data are currently available on the possible relationships between range of motion of foot joints and plantar load regional parameters. Joints' kinematics from a validated multi-segmental foot model were recorded together with plantar pressure parameters in 21 normal-arched healthy subjects during three barefoot walking trials. Plantar pressure maps were divided into six anatomically-based regions of interest associated to corresponding foot segments. A stepwise multiple regression analysis was performed to determine the relationships between pressure-based parameters, joints' range of motion and normalized walking speed (speed/subject height). Sagittal- and frontal-plane joint motion were those most correlated to plantar load. Foot joints' range of motion and normalized walking speed explained between 6% and 43% of the model variance (adjusted R²) for pressure-based parameters. In general, those joints presenting lower mobility during stance were associated with lower vertical force at the forefoot and with larger mean and peak pressure at the hindfoot and forefoot. Normalized walking speed was always positively correlated to mean and peak pressure at the hindfoot and forefoot. While a large variance in plantar pressure data is still not accounted for by the present models, this study provides statistical corroboration of the close relationship between joint mobility and plantar pressure during stance in the normal healthy foot.
Tippett, Elizabeth C; Chen, Brian K
2015-12-01
Attorneys sponsor television advertisements that include repeated warnings about adverse drug events to solicit consumers for lawsuits against drug manufacturers. The relationship between such advertising, safety actions by the US Food and Drug Administration (FDA), and healthcare use is unknown. To investigate the relationship between attorney advertising, FDA actions, and prescription drug claims. The study examined total users per month and prescription rates for seven drugs with substantial attorney advertising volume and FDA or other safety interventions during 2009. Segmented regression analysis was used to detect pre-intervention trends, post-intervention level changes, and changes in post-intervention trends relative to the pre-intervention trends in the use of these seven drugs, using advertising volume, media hits, and the number of Medicare enrollees as covariates. Data for these variables were obtained from the Center for Medicare and Medicaid Services, Kantar Media, and LexisNexis. Several types of safety actions were associated with reductions in drug users and/or prescription rates, particularly for fentanyl, varenicline, and paroxetine. In most cases, attorney advertising volume rose in conjunction with major safety actions. Attorney advertising volume was positively correlated with prescription rates in five of seven drugs, likely because advertising volume began rising before safety actions, when prescription rates were still increasing. On the other hand, attorney advertising had mixed associations with the number of users per month. Regulatory and safety actions likely reduced the number of users and/or prescription rates for some drugs. Attorneys may have strategically chosen to begin advertising adverse drug events prior to major safety actions, but we found little evidence that attorney advertising reduced drug use. Further research is needed to better understand how consumers and physicians respond to attorney advertising.
Zandian, Hamed; Takian, Amirhossein; Rashidian, Arash; Bayati, Mohsen; Zahirian Moghadam, Telma; Rezaei, Satar; Olyaeemanesh, Alireza
2018-03-01
One of the main objectives of the Targeted Subsidies Law (TSL) in Iran was to improve equity in healthcare financing. This study aimed at measuring the effects of the TSL, which was implemented in Iran in 2010, on equity in healthcare financing. Segmented regression analysis was applied to assess the effects of TSL implementation on the Gini and Kakwani indices of outcome variables in Iranian households. Data for the years 1977-2014 were retrieved from formal databases. Changes in the levels and trends of the outcome variables before and after TSL implementation were assessed using Stata version 13. In the 33 years before the implementation of the TSL, the Gini index decreased from 0.401 to 0.381. The Gini index and its intercept significantly decreased to 0.362 (p<0.001) 5 years after the implementation of the TSL. There was no statistically significant change in the gross domestic product or inflation rate after TSL implementation. The Kakwani index significantly increased from -0.020 to 0.007 (p<0.001) before the implementation of the TSL, while we observed no statistically significant change (p=0.81) in the Kakwani index after TSL implementation. The TSL reform, which was introduced as part of an economic development plan in Iran in 2010, led to a significant reduction in households' income inequality. However, the TSL did not significantly affect equity in healthcare financing. Hence, while measuring the long-term impact of TSL is paramount, healthcare decision-makers need to consider the efficacy of the TSL in order to develop plans for achieving the desired equity in healthcare financing.
Random regression analyses using B-spline functions to model growth of Nellore cattle.
Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G
2012-02-01
The objective of this study was to estimate (co)variance components using random regression on B-spline functions to weight records obtained from birth to adulthood. A total of 82,064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as random covariates. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses of nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight at young ages should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions.
The growth curve of Nellore cattle offers limited scope for modification when the aim is to select for rapid growth at young ages while keeping adult weight constant.
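The B-spline basis functions underlying such random-regression models can be evaluated with the Cox-de Boor recursion. The sketch below builds a clamped quadratic basis with three segments, mirroring the structure the study found most parsimonious, and checks the partition-of-unity property; it is a generic illustration, not the genetic-evaluation software.

```python
# Cox-de Boor recursion for B-spline basis function B_{i,k}(t) on a knot
# vector. Degree-0 functions are indicators of half-open knot intervals;
# higher degrees blend two lower-degree neighbours.
def bspline_basis(i, k, t, knots):
    if k == 0:
        return float(knots[i] <= t < knots[i + 1])
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

k = 2                                    # quadratic segments
knots = [0, 0, 0, 1, 2, 3, 3, 3]         # clamped knot vector, 3 segments
basis_at = lambda t: [bspline_basis(i, k, t, knots)
                      for i in range(len(knots) - k - 1)]
print(round(sum(basis_at(1.5)), 6))      # partition of unity -> 1.0
```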
Complete regression of myocardial involvement associated with lymphoma following chemotherapy
Vinicki, Juan Pablo; Cianciulli, Tomás F; Farace, Gustavo A; Saccheri, María C; Lax, Jorge A; Kazelian, Lucía R; Wachs, Adolfo
2013-01-01
Cardiac involvement as the initial presentation of malignant lymphoma is a rare occurrence. We describe the case of a 26-year-old man who presented with a testicular mass and unilateral peripheral facial paralysis and was initially found to have myocardial infiltration on echocardiography. On admission, electrocardiography (ECG) revealed negative T-waves in all leads and ST-segment elevation in the inferior leads. On two-dimensional echocardiography, there was infiltration of the pericardium with mild effusion; infiltrative thickening of the aortic walls, both atria and the interatrial septum; and mildly depressed systolic function of both ventricles. An axillary biopsy was performed and reported as a T-cell lymphoblastic lymphoma (T-LBL). Following diagnosis and staging, chemotherapy was started. Twenty-two days after finishing the first cycle of chemotherapy, the ECG showed regression of the T-wave changes in all leads and normalization of the ST-segment elevation in the inferior leads. A follow-up two-dimensional echocardiogram confirmed regression of the myocardial infiltration. This case report illustrates a lymphoma presenting with a testicular mass, unilateral peripheral facial paralysis and myocardial involvement, and demonstrates that regression of infiltration can be achieved with intensive chemotherapy. To our knowledge, there are no previously reported cases of T-LBL presenting as a testicular mass and unilateral peripheral facial paralysis with complete regression of myocardial involvement. PMID:24109501
Richards, C H; Ventham, N T; Mansouri, D; Wilson, M; Ramsay, G; Mackay, C D; Parnaby, C N; Smith, D; On, J; Speake, D; McFarlane, G; Neo, Y N; Aitken, E; Forrest, C; Knight, K; McKay, A; Nair, H; Mulholland, C; Robertson, J H; Carey, F A; Steele, Rjc
2018-02-01
Colorectal polyp cancers present clinicians with a treatment dilemma. Decisions regarding whether to offer segmental resection or endoscopic surveillance are often taken without reference to good quality evidence. The aim of this study was to develop a treatment algorithm for patients with screen-detected polyp cancers. This national cohort study included all patients with a polyp cancer identified through the Scottish Bowel Screening Programme between 2000 and 2012. Multivariate regression analysis was used to assess the impact of clinical, endoscopic and pathological variables on the rate of adverse events (residual tumour in patients undergoing segmental resection or cancer-related death or disease recurrence in any patient). These data were used to develop a clinically relevant treatment algorithm. 485 patients with polyp cancers were included. 186/485 (38%) underwent segmental resection and residual tumour was identified in 41/186 (22%). The only factor associated with an increased risk of residual tumour in the bowel wall was incomplete excision of the original polyp (OR 5.61, p=0.001), while only lymphovascular invasion was associated with an increased risk of lymph node metastases (OR 5.95, p=0.002). When patients undergoing segmental resection or endoscopic surveillance were considered together, the risk of adverse events was significantly higher in patients with incomplete excision (OR 10.23, p<0.001) or lymphovascular invasion (OR 2.65, p=0.023). A policy of surveillance is adequate for the majority of patients with screen-detected colorectal polyp cancers. Consideration of segmental resection should be reserved for those with incomplete excision or evidence of lymphovascular invasion. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Chen, Renai; Mao, Xinjie; Jiang, Jun; Shen, Meixiao; Lian, Yan; Zhang, Bin; Lu, Fan
2017-05-01
To investigate the relationship between corneal biomechanics and anterior segment parameters in the early stage of overnight orthokeratology. Twenty-three eyes from 23 subjects were involved in the study. Corneal biomechanics, including corneal hysteresis (CH) and corneal resistance factor (CRF), and parameters of the anterior segment, including corneal curvature, central corneal thickness (CCT), and corneal sublayer thickness, were measured at baseline and on days 1 and 7 after wearing orthokeratology lenses. One-way analysis of variance with repeated measures was used to compare the longitudinal changes, and partial least squares linear regression was used to explore the relationship between corneal biomechanics and anterior segment parameters. At baseline, CH and CRF were positively correlated with CCT (r = 0.244, P = .008 for CH; r = 0.249, P < .001 for CRF), central stromal thickness (CST) (r = 0.241, P = .008 for CH; r = 0.244, P = .002 for CRF) and central Bowman layer thickness (CBT) (r = 0.138, P = .039 for CH; r = 0.171, P = .006 for CRF). Both CH and CRF decreased significantly from day 1 after orthokeratology. The corneal curvature and the epithelium thickness also decreased significantly, while the stromal layer thickened significantly from day 1 after orthokeratology. There was no correlation between the changes in corneal biomechanics and anterior segment parameters on days 1 and 7 after orthokeratology. While corneal biomechanics were positively correlated with CCT, CST, and CBT, the changes in CH and CRF were not correlated with the changes in corneal curvature, CCT, and corneal sublayer thickness in the early stage of orthokeratology in our study.
Change with age in regression construction of fat percentage for BMI in school-age children.
Fujii, Katsunori; Mishima, Takaaki; Watanabe, Eiji; Seki, Kazuyoshi
2011-01-01
In this study, curvilinear regression was applied to the relationship between BMI and body fat percentage, and an analysis was done to see whether there are characteristic changes in that curvilinear regression from elementary to middle school. Then, by simultaneously investigating the changes with age in BMI and body fat percentage, the essential differences between BMI and body fat percentage were demonstrated. The subjects were 789 boys and girls (469 boys, 320 girls) aged 7.5 to 14.5 years from all parts of Japan who participated in regular sports activities. Body weight, total body water (TBW), soft lean mass (SLM), body fat percentage, and fat mass were measured with a body composition analyzer (Tanita BC-521 Inner Scan), using segmental and multi-frequency bioelectrical impedance analysis. Height was measured with a digital height measurer. Body mass index (BMI) was calculated as body weight (kg) divided by the square of height (m). The results for the validity of regression polynomials of body fat percentage against BMI showed that, for both boys and girls, first-order polynomials were valid in all school years. With regard to changes with age in BMI and body fat percentage, the results showed a temporary drop at 9 years in the aging distance curve in boys, followed by an increasing trend. Peaks were seen in the velocity curve at 9.7 and 11.9 years, but the MPV was presumed to be at 11.9 years. Among girls, a decreasing trend was seen in the aging distance curve, which was opposite to the changes in the aging distance curve for body fat percentage.
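The BMI definition above, and the first-order fat%-on-BMI polynomials the study validated, can be sketched as follows; the fat-percentage values are made up for illustration.

```python
import numpy as np

# BMI as defined in the study: weight in kilograms divided by height in
# metres squared.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(40.0, 1.40), 1))   # -> 20.4

# First-order polynomial of body fat percentage against BMI, the form the
# study found valid in all school years (illustrative data, not the sample).
bmis = np.array([15.0, 17.0, 19.0, 21.0, 23.0])
fat_pct = np.array([12.0, 15.0, 18.0, 21.0, 24.0])
slope, intercept = np.polyfit(bmis, fat_pct, 1)
print(round(slope, 2), round(intercept, 2))   # -> 1.5 -10.5
```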
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three-dimensions are demonstrated.
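The histogram-of-oriented-gradients descriptor at the core of the method bins gradient orientations weighted by gradient magnitude. The toy sketch below shows only that core step on a synthetic edge image; real implementations (for example, scikit-image's) add cell and block normalisation.

```python
import numpy as np

# Core HOG step: gradients -> orientation histogram weighted by magnitude.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # vertical edge -> horizontal gradient

gy, gx = np.gradient(img)             # derivatives along rows, then columns
mag = np.hypot(gx, gy)
ang = np.degrees(np.arctan2(gy, gx)) % 180   # unsigned orientations, 0-180

# 9 orientation bins over [0, 180), each sample weighted by its magnitude.
hist, _ = np.histogram(ang, bins=9, range=(0, 180), weights=mag)
print(hist.argmax())                  # -> 0: energy concentrates near 0 deg
```

Because the only gradients in the toy image point horizontally, all histogram mass lands in the first orientation bin.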
Digital data used to relate nutrient inputs to water quality in the Chesapeake Bay watershed
Brakebill, John W.; Preston, Stephen D.
1999-01-01
Digital data sets were compiled by the U.S. Geological Survey (USGS) and used as input for a collection of Spatially Referenced Regressions On Watershed attributes for the Chesapeake Bay region. These regressions relate streamwater loads to nutrient sources and the factors that affect the transport of these nutrients throughout the watershed. A digital segmented network based on watershed boundaries serves as the primary foundation for spatially referencing total nitrogen and total phosphorus source and land-surface characteristic data sets within a Geographic Information System. Digital data sets of atmospheric wet deposition of nitrate, point-source discharge locations, land cover, and agricultural sources such as fertilizer and manure were created and compiled from numerous sources and represent nitrogen and phosphorus inputs. Some land-surface characteristics representing factors that affect the transport of nutrients include land use, land cover, average annual precipitation and temperature, slope, and soil permeability. Nutrient input and land-surface characteristic data sets merged with the segmented watershed network provide the spatial detail by watershed segment required by the models. Nutrient stream loads were estimated for total nitrogen, total phosphorus, nitrate/nitrite, ammonium, phosphate, and total suspended solids at as many as 109 sites within the Chesapeake Bay watershed. The total nitrogen and total phosphorus load estimates are the dependent variables for the regressions and were used for model calibration. Other nutrient-load estimates may be used for calibration in future applications of the models.
Dong, Liang; Xu, Zhengwei; Chen, Xiujin; Wang, Dongqi; Li, Dichen; Liu, Tuanjing; Hao, Dingjun
2017-10-01
Many meta-analyses have been performed to study the efficacy of cervical disc arthroplasty (CDA) compared with anterior cervical discectomy and fusion (ACDF); however, there are few data on adjacent segments within these meta-analyses, and investigators have been unable to arrive at the same conclusion in the few meta-analyses that address adjacent segments. With the increased concerns surrounding adjacent segment degeneration (ASDeg) and adjacent segment disease (ASDis) after anterior cervical surgery, it is necessary to perform a comprehensive meta-analysis of adjacent segment parameters. To perform a comprehensive meta-analysis to elaborate adjacent segment motion, degeneration, disease, and reoperation after CDA compared with ACDF. Meta-analysis of randomized controlled trials (RCTs). PubMed, Embase, and the Cochrane Library were searched for RCTs comparing CDA and ACDF before May 2016. The analysis parameters included follow-up time, operative segments, adjacent segment motion, ASDeg, ASDis, and adjacent segment reoperation. The risk of bias scale was used to assess the papers. Subgroup analysis and sensitivity analysis were used to analyze the reasons for high heterogeneity. Twenty-nine RCTs fulfilled the inclusion criteria. Compared with ACDF, the rate of adjacent segment reoperation in the CDA group was significantly lower (p<.01), and subgroup analysis showed that the advantage of CDA in reducing adjacent segment reoperation increases with increasing follow-up time. There was no statistically significant difference in ASDeg between CDA and ACDF within the 24-month follow-up period; however, the rate of ASDeg with CDA was significantly lower than that with ACDF as follow-up time increased (p<.01). There was no statistically significant difference in ASDis between CDA and ACDF (p>.05). Cervical disc arthroplasty provided a lower adjacent segment range of motion (ROM) than did ACDF, but the difference was not statistically significant.
Compared with ACDF, the advantages of CDA were lower ASDeg and less adjacent segment reoperation. However, there was no statistically significant difference in ASDis and adjacent segment ROM. Copyright © 2017 Elsevier Inc. All rights reserved.
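The basic pooling step behind such meta-analytic comparisons is inverse-variance weighting of study log odds ratios; a minimal fixed-effect sketch with made-up numbers, not values from the review.

```python
import math

# Fixed-effect inverse-variance pooling: weight each study's log odds ratio
# by 1/SE^2, then average. The ORs and standard errors are hypothetical.
log_or = [math.log(0.5), math.log(0.6), math.log(0.4)]
se = [0.30, 0.25, 0.40]

w = [1 / s ** 2 for s in se]
pooled = sum(wi * b for wi, b in zip(w, log_or)) / sum(w)
pooled_se = 1 / math.sqrt(sum(w))
print(round(math.exp(pooled), 2))      # pooled OR -> 0.52
```

Random-effects models used in practice additionally inflate the weights' denominators by a between-study variance estimate.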
Marshall, Caroline; Richards, Michael; McBryde, Emma
2013-01-01
Consensus for methicillin-resistant Staphylococcus aureus (MRSA) control has still not been reached. We hypothesised that use of rapid MRSA detection followed by contact precautions and single room isolation would reduce MRSA acquisition. This study was a pre-planned prospective interrupted time series comparing rapid PCR detection and use of long-sleeved gowns and gloves (contact precautions) plus single room isolation or cohorting of MRSA colonised patients with a control group. The study took place in a medical-surgical intensive care unit of a tertiary adult hospital between May 21st, 2007 and September 21st, 2009. The primary outcome was the rate of MRSA acquisition. A segmented regression analysis was performed to determine the trend in MRSA acquisition rates before and after the intervention. The rate of MRSA acquisition was 18.5 per 1000 at-risk patient days in the control phase and 7.9 per 1000 at-risk patient days in the intervention phase, with an adjusted hazard ratio of 0.39 (95% CI 0.24 to 0.62). Segmented regression analysis showed a decline in MRSA acquisition of 7% per month in the intervention phase (95% CI 1.9% to 12.8% reduction), which was a significant change in slope compared with the control phase. Secondary analysis found prior exposure to anaerobically active antibiotics and colonisation pressure were associated with increased acquisition risk. Contact precautions with single room isolation or cohorting were associated with a 60% reduction in MRSA acquisition. While this study was a quasi-experimental design, many measures were taken to strengthen the study, such as accounting for differences in colonisation pressure, hand hygiene compliance and individual risk factors across the groups, and confining the study to one centre to reduce variation in transmission. Use of two research nurses may limit its generalisability to units in which this level of support is available.
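A "7% decline per month" style of result comes from exponentiating a log-linear segmented-regression slope; the sketch below uses a hypothetical coefficient, not the study's fitted value.

```python
import math

# In a log-linear model, log(rate) = ... + b * month, so each month
# multiplies the rate by exp(b); exp(b) - 1 is the fractional change per
# month. The slope below is hypothetical, chosen to give roughly -7%.
b = -0.0726
monthly_change = math.exp(b) - 1
print(f"{monthly_change:.1%}")    # -> -7.0%
```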
3D Multi-segment foot kinematics in children: A developmental study in typically developing boys.
Deschamps, Kevin; Staes, Filip; Peerlinck, Kathelijne; Van Geet, Christel; Hermans, Cedric; Matricali, Giovanni Arnoldo; Lobet, Sebastien
2017-02-01
The relationship between age and 3D rotations objectivized with multisegment foot models has not been quantified until now. The purpose of this study was therefore to investigate the relationship between age and multi-segment foot kinematics in a cross-sectional database. Barefoot multi-segment foot kinematics of thirty two typically developing boys, aged 6-20 years, were captured with the Rizzoli Multi-segment Foot Model. One-dimensional statistical parametric mapping linear regression was used to examine the relationship between age and 3D inter-segment rotations of the dominant leg during the full gait cycle. Age was significantly correlated with sagittal plane kinematics of the midfoot and the calcaneus-metatarsus inter-segment angle (p<0.0125). Age was also correlated with the transverse plane kinematics of the calcaneus-metatarsus angle (p<0.0001). Gait labs should consider age related differences and variability if optimal decision making is pursued. It remains unclear if this is of interest for all foot models, however, the current study highlights that this is of particular relevance for foot models which incorporate a separate midfoot segment. Copyright © 2016 Elsevier B.V. All rights reserved.
A comprehensive segmentation analysis of crude oil market based on time irreversibility
NASA Astrophysics Data System (ADS)
Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi
2016-05-01
In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments that share the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the highly asymmetric segments, we find that these two types of segments appear alternately and largely do not overlap in the daily group, while the common portions are also highly asymmetric segments in the weekly group. In addition, the temporal distribution of the common segments lies fairly close to the times of crises, wars and other events, because the impact of severe events on the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily or weekly group series owing to the large divergence between common segments and their neighbours, while the identification of highly asymmetric segments helps to pick out the segments that were not badly affected by the events and could recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments, or connected highly asymmetric segments, into a single segment, and conjoin the connected segments that are neither common nor highly asymmetric.
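The Jensen-Shannon divergence used as the statistical distance between segments can be sketched in a few lines; the toy histograms below are illustrative, not oil-price data.

```python
import numpy as np

# Jensen-Shannon divergence between two discrete distributions:
# JSD(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m) with m = (p+q)/2. Using log base 2,
# it is symmetric and bounded in [0, 1].
def jsd(p, q, eps=1e-12):
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = jsd([1, 2, 3], [1, 2, 3])      # identical distributions -> 0
disjoint = jsd([1, 0], [0, 1])        # disjoint support -> 1 (in bits)
print(round(same, 6), round(disjoint, 6))   # -> 0.0 1.0
```

Segmentation methods of this kind place a boundary where the divergence between the two halves of a window is maximal and statistically significant.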
Short segment search method for phylogenetic analysis using nested sliding windows
NASA Astrophysics Data System (ADS)
Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.
2017-10-01
To analyze phylogenetics in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs considerable time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is desirable. After applying sliding windows, a short segment better than the envelope protein and NS3 segments was found. This paper discusses a mathematical method for analyzing sequences using nested sliding windows to find a short segment that is representative of the whole genome. The results show that our method can find a short segment that is about 6.57% more representative of the CDS segment, in terms of tree topology, than the envelope or NS3 segment.
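The nested sliding-window idea can be sketched as a coarse scan over the whole sequence followed by a finer scan inside the best coarse hit; the identity score below is a stand-in for the phylogenetic-tree similarity the paper actually optimises, and the sequences are toy data.

```python
# Nested sliding-window search sketch: score every window of `size` against a
# reference, keep the best start position, then rescan with a smaller window
# restricted to the winning coarse region.
def best_window(seq, ref, size, start=0, stop=None):
    stop = len(seq) if stop is None else stop
    def score(i):
        return sum(a == b for a, b in zip(seq[i:i + size], ref))
    return max(range(start, stop - size + 1), key=score)

seq = "AAAACGTACGTAAAA"
motif = "ACGTACGT"
coarse = best_window(seq, motif, len(motif))                        # outer pass
fine = best_window(seq, motif[:4], 4, coarse, coarse + len(motif))  # nested pass
print(coarse, fine)   # -> 3 3
```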
Inequality and adolescent violence: an exploration of community, family, and individual factors.
Bruce, Marino A.
2004-01-01
PURPOSE: The study seeks to examine whether the relationships among community, family, individual factors, and violent behavior are parallel across race- and gender-specific segments of the adolescent population. METHODS: Data from the National Longitudinal Study of Adolescent Health are analyzed to highlight the complex relationships between inequality, community, family, individual behavior, and violence. RESULTS: The results from robust regression analysis provide evidence that social environmental factors can influence adolescent violence in race- and gender-specific ways. CONCLUSIONS: Findings from this study establish the plausibility of multidimensional models that specify a complex relationship between inequality and adolescent violence. PMID:15101669
"The home infusion patient": patient profiles for the home infusion therapy market.
Westbrook, K W; Powers, T
1999-01-01
The authors review the relevant literature regarding home health care patient profiles. An empirical analysis is provided from archival data for a home infusion company servicing patients in urban and rural areas. The results are provided as a 2 x 2 matrix for patients in urban and rural areas seeing either a specialist or primary care physicians. A series of moderated regressions indicate that type of treating physician, patient's gender, geographic residence and level of acuity are cogent in predicting the complexity of prescribed infusion therapies. Managerial implications are provided for the home care marketer in segmenting patient markets for infusion services.
Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby
2016-01-01
Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.
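Agreement between two segmentations of the same structure is usually quantified with an overlap score such as the Dice coefficient; a toy sketch on binary masks (the study compared manual tracing, FSL, and NeuroQuant, which this does not reproduce).

```python
import numpy as np

# Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|),
# 1 for identical masks and 0 for no overlap.
def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((6, 6), bool); manual[1:4, 1:4] = True   # 9 "voxels"
auto   = np.zeros((6, 6), bool); auto[2:5, 2:5] = True     # 9, shifted by 1
print(round(dice(manual, auto), 3))   # overlap 2x2=4 -> 2*4/18 = 0.444
```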
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criteria and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
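The maximum a posteriori criterion at the heart of the method can be sketched for the simplest case of one Gaussian per class; the published approach uses Gauss mixtures fitted by vector quantization plus a hidden Markov spatial prior, both of which this toy omits.

```python
import numpy as np

# MAP classification with one Gaussian per class: pick the class maximising
# log p(x|class) + log p(class) for each pixel. Class means/variances and
# pixel values are illustrative.
def log_gauss(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

means, variances = np.array([0.2, 0.8]), np.array([0.01, 0.01])
prior = np.array([0.5, 0.5])

pixels = np.array([0.15, 0.25, 0.75, 0.9])
scores = log_gauss(pixels[:, None], means, variances) + np.log(prior)
labels = scores.argmax(axis=1)        # MAP class per pixel
print(labels)                         # -> [0 0 1 1]
```

The hidden Markov layer in the full method replaces the fixed prior with one that depends on neighbouring labels, which is what yields the spatial homogeneity the abstract reports.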
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with those of two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have potential utility in clinical use.
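The 3D region-growing sub-step can be sketched as a breadth-first flood fill over 6-connected voxels above an intensity threshold; the toy volume, seed, and threshold below are illustrative, and the 4D curvature analysis the method couples with it is omitted.

```python
import numpy as np
from collections import deque

# 3D region growing: starting from a seed voxel, expand into 6-connected
# neighbours whose intensity meets the threshold.
def region_grow(vol, seed, thresh):
    mask = np.zeros(vol.shape, bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= c < s for c, s in zip(n, vol.shape))
                    and not mask[n] and vol[n] >= thresh):
                mask[n] = True
                queue.append(n)
    return mask

vol = np.zeros((5, 5, 5))
vol[:, 2, 2] = 100.0                       # synthetic bright "vessel" along z
grown = region_grow(vol, (0, 2, 2), 50.0)
print(int(grown.sum()))                    # -> 5: the whole vessel, nothing else
```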
Ohishi, Tsuyoshi; Suzuki, Daisuke; Yamamoto, Kazufumi; Banno, Tomohiro; Shimizu, Yuta; Matsuyama, Yukihiro
2014-01-01
To evaluate medial extrusion of the posterior segment of the medial meniscus in posterior horn tears. This study enrolled 72 patients without medial meniscal tears (group N), 72 patients with medial meniscal tears without posterior horn tears (group PH-), and 44 patients with posterior horn tears of the medial meniscus (group PH+). All meniscal tears were confirmed by arthroscopy. Medial extrusion of the middle segment and the posterior segment was measured on coronal MRIs. Extrusions of both middle and posterior segments in groups PH- and PH+ (middle segment: 2.94±1.51 mm for group PH- and 3.75±1.69 mm for group PH+; posterior segment: 1.85±1.82 mm for group PH- and 4.59±2.74 mm for group PH+) were significantly larger than those in group N (middle segment: 2.04±1.20 mm; posterior segment: 1.21±1.86 mm). Both indicators of extrusion in group PH+ were larger than those in group PH-. In the early OA category, neither the middle nor the posterior segment in group PH- extruded more than in group N; however, only the posterior segment in group PH+ extruded significantly more than in group N. Multiple linear regression analyses revealed that posterior segment extrusion was strongly correlated with posterior horn tears (p<0.001) among groups PH- and PH+. The newly presented indicator for extrusion of the posterior segment of the medial meniscus is associated with posterior horn tears in comparison with the extrusion of the middle segment, especially in the early stages of osteoarthritis. Level II--Diagnostic Study. Copyright © 2013 Elsevier B.V. All rights reserved.
Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.
Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman
2018-05-16
Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of these models, including between trials and between assessors, is mixed, and there are no between-model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models: duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hind foot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within- and between-therapist variability. Standard deviations were used to evaluate marker placement variability, and locally weighted regression smoothing with alpha-adjusted serial T-test analysis, yielding p-value curves over the gait cycle, was used to assess kinematic similarities. All five models had similar marker placement variability; however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. Lower variability was noted in the sagittal and coronal planes than in transverse-plane rotation, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.
IMU-to-Segment Assignment and Orientation Alignment for the Lower Body Using Deep Learning
2018-01-01
Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities, owing to its significant role in, for instance, mobile health systems, sports, and human-computer interaction. In sensor-based activity recognition, one of the major issues for obtaining reliable results is the sensor placement/assignment on the body. For inertial motion capture (joint kinematics estimation) and analysis, the IMU-to-segment (I2S) assignment and alignment are central issues in obtaining biomechanical joint angles. Existing approaches for I2S assignment usually rely on hand-crafted features and shallow classifiers (e.g., support vector machines), with no agreement regarding the most suitable features for the assignment task. Moreover, estimating the complete orientation alignment of an IMU relative to the segment it is attached to using a machine learning approach has not been shown in the literature so far, likely because of the large amount of training data that would have to be recorded to suitably represent possible IMU alignment variations. In this work, we propose online approaches for solving the assignment and alignment tasks for an arbitrary number of IMUs with respect to a biomechanical lower-body model using a deep learning architecture and windows of 128 gyroscope and accelerometer samples. For this, we combine convolutional neural networks (CNNs) for local filter learning with long short-term memory (LSTM) recurrent networks as well as gated recurrent units (GRUs) for learning time-dynamic features. The assignment task is cast as a classification problem, while the alignment task is cast as a regression problem. In this framework, we demonstrate the feasibility of augmenting a limited amount of real IMU training data with simulated alignment variations and IMU data to improve the recognition/estimation accuracies.
With the proposed approaches and final models we achieved 98.57% average accuracy over all segments for the I2S assignment task (100% when excluding left/right switches) and an average median angle error over all segments and axes of 2.91° for the I2S alignment task. PMID:29351262
Improvements in analysis techniques for segmented mirror arrays
NASA Astrophysics Data System (ADS)
Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.
2016-08-01
The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here reviews current capabilities and improvements in the methodology of analyzing mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows analysis of predicted mechanical disturbances of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0 g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R
2015-05-01
We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Sun, Hongliu; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir; Kazerooni, Ella
2018-02-01
We are developing an automated radiopathomics method for diagnosis of lung nodule subtypes. In this study, we investigated the feasibility of using quantitative methods to analyze tumor nuclei and cytoplasm in pathologic whole-slide images for the classification of invasive versus pre-invasive pathologic subtypes. We developed a multiscale blob detection method with watershed transform (MBD-WT) to segment the tumor cells. Pathomic features were extracted to characterize the size, morphology, sharpness, and gray-level variation in each segmented nucleus and the heterogeneity patterns of tumor nuclei and cytoplasm. With permission of the National Lung Screening Trial (NLST) project, a data set containing 90 digital haematoxylin and eosin (HE) whole-slide images from 48 cases was used in this study. The 48 cases contain 77 regions of invasive subtypes and 43 regions of pre-invasive subtypes outlined by a pathologist on the HE images, using the pathological tumor region descriptions provided by NLST as reference. A logistic regression model (LRM) was built using leave-one-case-out resampling and receiver operating characteristic (ROC) analysis for classification of invasive and pre-invasive subtypes. With 11 selected features, the LRM achieved a test area under the ROC curve (AUC) of 0.91 ± 0.03. The results demonstrate that the pathologic invasiveness of lung adenocarcinomas can be categorized with high accuracy using pathomics analysis.
Insurees' preferences in hospital choice-A population-based study.
Schuldt, Johannes; Doktor, Anna; Lichters, Marcel; Vogt, Bodo; Robra, Bernt-Peter
2017-10-01
In Germany, the patient himself makes the choice for or against a health service provider. Hospital comparison websites offer possibilities to become informed before choosing. However, it remains unclear how health care consumers use those websites, and there is little information about how preferences in hospital choice differ interpersonally. We conducted a discrete choice experiment (DCE) on hospital choice with 1500 randomly selected participants (aged 40-70) in three different German cities, selecting four attributes for hospital vignettes. The analysis draws on multilevel mixed-effects logit regressions with the dependent variables "chance to select a hospital" and "choice confidence". Subsequently, we performed a latent class analysis to uncover consumer segments with distinct preferences. 590 of the questionnaires were evaluable. All four attributes of the hospital vignettes have a significant impact on hospital choice. The attribute "complication rate" exerts the highest impact on consumers' decisions and reported choice confidence. The latent class analysis yields one dominant consumer segment that considers the complication rate the most important decision criterion. Using the DCE, we were able to show that the complication rate is an important and trusted criterion in hospital choice for a large group of consumers. Our study supports current governmental efforts in Germany to concentrate the provision of specialized health care services. We suggest further national and cross-national research on the topic. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Ward, W K; Engle, J M; Branigan, D; El Youssef, J; Massoud, R G; Castle, J R
2012-08-01
Because declining glucose levels should be detected quickly in persons with Type 1 diabetes, a lag between blood glucose and subcutaneous sensor glucose can be problematic. It is unclear whether the magnitude of sensor lag is lower during falling glucose than during rising glucose. Initially, we analysed 95 data segments during which glucose changed and during which very frequent reference blood glucose monitoring was performed. However, to minimize confounding effects of noise and calibration error, we excluded data segments in which there was substantial sensor error. After these exclusions, and combination of data from duplicate sensors, there were 72 analysable data segments (36 for rising glucose, 36 for falling). We measured lag in two ways: (1) the time delay at the vertical mid-point of the glucose change (regression delay); and (2) determination of the optimal time shift required to minimize the difference between glucose sensor signals and blood glucose values drawn concurrently. Using the regression delay method, the mean sensor lag for rising vs. falling glucose segments was 8.9 min (95%CI 6.1-11.6) vs. 1.5 min (95%CI -2.6 to 5.5, P<0.005). Using the time shift optimization method, results were similar, with a lag that was higher for rising than for falling segments [8.3 (95%CI 5.8-10.7) vs. 1.5 min (95% CI -2.2 to 5.2), P<0.001]. Commensurate with the lag results, sensor accuracy was greater during falling than during rising glucose segments. In Type 1 diabetes, when noise and calibration error are minimized to reduce effects that confound delay measurement, subcutaneous glucose sensors demonstrate a shorter lag duration and greater accuracy when glucose is falling than when rising. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
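The study's second lag measure, shifting the sensor trace to minimize its difference from the reference values, can be sketched with synthetic data (the sine-wave traces, sampling, and shift range below are illustrative only, not the study's data):

```python
import numpy as np

def optimal_lag(blood, sensor, max_shift):
    """Return the shift (in samples) that minimizes the mean squared
    difference between the sensor trace and the reference trace."""
    best_shift, best_err = 0, np.inf
    n = len(blood)
    for s in range(max_shift + 1):
        err = np.mean((sensor[s:] - blood[:n - s]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

t = np.arange(100)
blood = np.sin(t / 10.0)      # reference blood glucose (arbitrary units)
sensor = np.roll(blood, 5)    # sensor trace delayed by 5 samples
sensor[:5] = blood[0]         # pad the wrapped-around samples
print(optimal_lag(blood, sensor, max_shift=15))  # 5
```

With real data the regression-delay method in the abstract would instead locate the time offset at the vertical mid-point of the glucose excursion; only the shift-optimization variant is sketched here.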
NASA Astrophysics Data System (ADS)
Sankar, R. D.; Murray, M. S.; Wells, P.
2016-12-01
Increased accuracy in estimating coastal change along localized segments of the Canadian Arctic coast is essential in order to identify plausible adaptation initiatives to deal with the effects of climate change. This paper quantifies rates of shoreline movement along an 11 km segment of the Hamlet of Paulatuk (Northwest Territories, Canada) using an innovative modelling technique, Analyzing Moving Boundaries Using R (AMBUR). Approximately two dozen shorelines, obtained from high-resolution Landsat satellite imagery, were analyzed. Shorelines were extracted using the band-ratio method and compiled in ArcMap to determine decadal trends of coastal change. The unique geometry of Paulatuk facilitated an independent analysis of the western and eastern sections of the study area. Long-term (1984-2014) and short-term (1984-2003) erosion and accretion rates were calculated using the Linear Regression and End Point Rate methods, respectively. Results reveal an elevated long-term rate of erosion for the western section of the hamlet (-1.1 m/yr) compared to the eastern portion (-0.92 m/yr). The study indicates a significant alongshore increase in the rates of erosion in both portions of the study area over the short-term period 1984 to 2003. Mean annual erosion rates increased over the short term along the western segment (-1.4 m/yr), while the eastern shoreline retreated at a rate of -1.3 m/yr over the same period. The analysis indicates that a combination of factors may be responsible for the patterns of land loss experienced along Paulatuk, including increased sea-surface temperature coupled with dwindling Arctic ice and elevated storm hydrodynamics. The analysis further reveals that the coastline along the eastern portion of the hamlet, where the majority of the population resides, is vulnerable to a high rate of shoreline erosion.
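The two rate estimators named above differ simply in how many shorelines they use: End Point Rate uses only the first and last positions, while the Linear Regression rate fits all of them. A minimal sketch (the shoreline positions are invented for illustration, chosen only so the long-term rate matches the order of magnitude reported for the western section):

```python
import numpy as np

def end_point_rate(years, positions):
    """End Point Rate: net movement between first and last shoreline."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    """Linear Regression Rate: slope of a least-squares fit through all shorelines."""
    slope, _ = np.polyfit(years, positions, 1)
    return slope

years = np.array([1984, 1994, 2003, 2014])
pos = np.array([0.0, -11.0, -20.9, -33.0])   # metres relative to a baseline
print(round(end_point_rate(years, pos), 2))              # -1.1
print(round(linear_regression_rate(years, pos), 2))      # -1.1
```

Because the invented positions retreat almost linearly, both estimators agree here; with noisy real shorelines the regression rate is less sensitive to errors in the two end-point shorelines.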
Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; ...
2017-12-28
A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor, which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool for locating data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the February 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources; instead, it evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher-resolution data sets, arbitrary sampling, and time-varying sources is discussed, along with a path to evaluate uncertainty in the calculated probabilities.
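The core screening idea, regressing a modelled signature against successive measurement windows and keeping a goodness-of-fit series, can be sketched as follows (the signature and measurement values are synthetic, not CTBT data, and a simple squared correlation stands in for the paper's metrics):

```python
import numpy as np

def signature_metrics(signature, measurements):
    """Slide the modelled source signature along the measurement series and
    return the squared correlation (r^2) for each window as a fit series."""
    w = len(signature)
    scores = []
    for start in range(len(measurements) - w + 1):
        seg = measurements[start:start + w]
        if seg.std() == 0:            # flat window: no fit possible
            scores.append(0.0)
            continue
        r = np.corrcoef(signature, seg)[0, 1]
        scores.append(r * r)
    return np.array(scores)

sig = np.array([0.0, 1.0, 3.0, 2.0, 0.5])        # modelled plume signature
meas = np.concatenate([np.zeros(5) + 0.1,        # background
                       sig * 2.0 + 0.1,          # scaled signature arrives
                       np.zeros(5) + 0.1])       # background again
scores = signature_metrics(sig, meas)
print(int(np.argmax(scores)))   # 5: the window where the signature appears
```

Windows with high scores flag data segments plausibly originating from the suspected source; multiplying several such metrics, as the abstract describes, sharpens the screening.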
Kukla, Piotr; Kosior, Dariusz A; Tomaszewski, Andrzej; Ptaszyńska-Kopczyńska, Katarzyna; Widejko, Katarzyna; Długopolski, Robert; Skrzyński, Andrzej; Błaszczak, Piotr; Fijorek, Kamil; Kurzyna, Marcin
2017-07-01
Electrocardiography (ECG) is still one of the first tests performed at admission, mostly in patients with chest pain or dyspnea. The aim of this study was to assess the correlation between electrocardiographic abnormalities and cardiac biomarkers as well as echocardiographic parameters in patients with acute pulmonary embolism. We performed a retrospective analysis of 614 patients (F/M 334/280; mean age 67.9 ± 16.6 years) with confirmed acute pulmonary embolism (APE) who were enrolled in the ZATPOL-2 Registry between 2012 and 2014. Elevated cardiac biomarkers were observed in 358 patients (74.4%). In this group, atrial fibrillation (p = .008), right axis deviation (p = .004), the S1Q3T3 sign (p < .001), RBBB (p = .006), ST-segment depression in leads V4-V6 (p < .001), ST-segment depression in lead I (p = .01), negative T waves in leads V1-V3 (p < .001), negative T waves in leads V4-V6 (p = .005), negative T waves in leads II, III and aVF (p = .005), ST-segment elevation in lead aVR (p = .002), and ST-segment elevation in lead III (p = .0038) were significantly more frequent than in subjects with normal serum levels of cardiac biomarkers. In multivariate regression analysis, clinical predictors of an "abnormal electrocardiogram" were as follows: increased heart rate (OR 1.09, 95% CI 1.02-1.17, p = .012), elevated troponin concentration (OR 3.33, 95% CI 1.94-5.72, p < .001), and right ventricular overload (OR 2.30, 95% CI 1.17-4.53, p = .016). Electrocardiographic signs of right ventricular strain are strongly related to elevated cardiac biomarkers and echocardiographic signs of right ventricular overload. ECG may be used in preliminary risk stratification of patients with intermediate- or high-risk forms of APE. © 2017 Wiley Periodicals, Inc.
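The odds ratios above come from exponentiated logistic regression coefficients, with the 95% CI obtained as exp(beta ± 1.96·SE). As a sketch, a coefficient and standard error reproduce an OR with its interval; the beta and SE below are back-calculated for illustration from the published troponin OR, not taken from the study's model:

```python
import math

def odds_ratio(beta, se):
    """Convert a logistic regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

# Hypothetical coefficient for elevated troponin (illustrative values only).
or_, lo, hi = odds_ratio(beta=1.203, se=0.276)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 3.33 1.94 5.72
```

The interval is computed on the log-odds scale and exponentiated, which is why it is asymmetric around the point estimate.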
Segmentation and analysis of mouse pituitary cells with graphic user interface (GUI)
NASA Astrophysics Data System (ADS)
González, Erika; Medina, Lucía.; Hautefeuille, Mathieu; Fiordelisio, Tatiana
2018-02-01
In this work we present a method to perform pituitary cell segmentation in image stacks acquired by fluorescence microscopy from pituitary slice preparations. Although many procedures have been developed for cell segmentation, they are generally based on edge detection and require high-resolution images. However, in the biological preparations we worked on, the cells are not well defined; experts identify them by their intracellular calcium activity, visible as fluorescence intensity changes in different regions over time. These intensity changes were converted into time series over regions and, because they exhibit a characteristic behavior, were used in a classification procedure to perform cell segmentation. Two logistic regression classifiers were implemented for the time series classification task, using as features the area under the curve and skewness in the first classifier and skewness and kurtosis in the second. Once both decision boundaries had been found in two different feature spaces by training on 120 time series, they were tested over 12 image stacks through a Python graphical user interface (GUI), generating binary images where white pixels correspond to cells and black ones to background. Results show that the area-skewness classifier reduces the time an expert spends locating cells by up to 75% in some stacks, versus 92% for the kurtosis-skewness classifier, evaluated on the number of regions the method found. Given the promising results, we expect that this method will be improved by adding more relevant features to the classifier.
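A hedged sketch of the second classifier, skewness and kurtosis features feeding logistic regression, on synthetic traces (not the authors' data or code; the spike model of a "cell" trace is an assumption for illustration):

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def features(ts):
    """Skewness and kurtosis of one fluorescence time series."""
    return [skew(ts), kurtosis(ts)]

# Synthetic training data: 'cell' traces carry transient calcium-like peaks
# (right-skewed, heavy-tailed); 'background' traces are symmetric noise.
cells = [rng.normal(0, 1, 200) + 5 * (rng.random(200) < 0.05) for _ in range(60)]
backgrounds = [rng.normal(0, 1, 200) for _ in range(60)]
X = np.array([features(t) for t in cells + backgrounds])
y = np.array([1] * 60 + [0] * 60)

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y) > 0.8)   # the two feature classes separate well
```

Applying the fitted decision boundary to each region's time series yields the binary cell/background labeling that the GUI renders as white and black pixels.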
Recent changes in the trends of teen birth rates, 1981-2006.
Wingo, Phyllis A; Smith, Ruben A; Tevendale, Heather D; Ferré, Cynthia
2011-03-01
To explore trends in teen birth rates by selected demographics. We used birth certificate data and joinpoint regression to examine trends in teen birth rates by age (10-14, 15-17, and 18-19 years) and race during 1981-2006 and by age and Hispanic origin during 1990-2006. Joinpoint analysis describes changing trends over successive segments of time and uses annual percentage change (APC) to express the amount of increase or decrease within each segment. For teens younger than 18 years, the decline in birth rates began in 1994 and ended in 2003 (APC: -8.03% per year for ages 10-14 years; APC: -5.63% per year for ages 15-17 years). The downward trend for 18- and 19-year-old teens began earlier (1991) and ended 1 year later (2004) (APC: -2.37% per year). For each study population, the trend was approximately level during the most recent time segment, except for continuing declines for 18- and 19-year-old white and Asian/Pacific Islander teens. The only increasing trend in the most recent time segment was for 18- and 19-year-old Hispanic teens. During these declines, the age distribution of teens who gave birth shifted to slightly older ages, and the percentage whose current birth was at least their second birth decreased. Teen birth rates were generally level during 2003/2004-2006 after the long-term declines. Rates increased among older Hispanic teens. These results indicate a need for renewed attention to effective teen pregnancy prevention programs in specific populations. Copyright © 2011. Published by Elsevier Inc.
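In joinpoint analysis, the APC within one segment comes from a log-linear fit of rate on year. A minimal sketch with synthetic rates constructed to decline 5.63% per year (the starting rate and years are invented for illustration):

```python
import numpy as np

def annual_percent_change(years, rates):
    """APC within one joinpoint segment: fit log(rate) = a + b*year,
    then APC = (exp(b) - 1) * 100."""
    b, _ = np.polyfit(years, np.log(rates), 1)
    return (np.exp(b) - 1.0) * 100.0

# Synthetic segment declining 5.63% per year from a rate of 38 per 1,000.
years = np.arange(1994, 2004)
rates = 38.0 * (1 - 0.0563) ** (years - 1994)
print(round(annual_percent_change(years, rates), 2))  # -5.63
```

Full joinpoint software additionally searches for the years at which the slope changes; here the segment boundaries are taken as given.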
Thermal aspects of vehicle comfort.
Holmér, I; Nilsson, H; Bohm, M; Norén, O
1995-07-01
The combined thermal effects of convection, radiation and conduction in a vehicle compartment require special measuring equipment that accounts for spatial and temporal variations in the driver space. The most sophisticated equipment measures local heat fluxes at defined spots or areas of a man-shaped manikin. Manikin segment heat fluxes have been measured in a variety of vehicle climatic conditions (heat, cold, solar radiation etc.) and compared with the thermal sensation votes and physiological responses of subjects exposed to the same conditions. High correlation was found between segment fluxes and the mean thermal vote (MTV) of subjects for the same body segments. By calibrating the manikin under homogeneous, still-air conditions, heat fluxes could be converted (and normalised) to an equivalent homogeneous temperature (EHT). Regression of MTV values on EHT values was used as the basis for deriving a comfort profile specifying acceptable temperature ranges for 19 different body segments. The method has been used for assessment of the thermal climate in trucks and crane cabins in winter and summer conditions. The possibility of spatially resolving thermal influences (e.g. by solar radiation or convection currents) proved very useful in the analysis of system performance. Ventilation of the driver's seat is a technical solution for reducing insulation of the thigh, seat and back areas of the body, although constructions may vary in efficiency. In one system, seat ventilation allowed for almost 2 degrees C higher ambient temperatures at unchanged general thermal sensation, in addition to the pronounced local effect. In a recent study the effects of various technical measures related to cabin design and HVAC systems have been investigated. (ABSTRACT TRUNCATED AT 250 WORDS)
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used spatial filtering method in motor imagery-based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for selecting significant features. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies than several competing methods in the literature and is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
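The conventional CSP step described above can be sketched via its generalized eigenvalue formulation, selecting filters at both extreme eigenvalues (a textbook CSP sketch on synthetic two-class trials, not the paper's STFSCSP method):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(class_a, class_b, n_pairs=1):
    """Common spatial patterns via the generalized eigenvalue problem
    C_a w = lambda (C_a + C_b) w; keep filters at both extreme eigenvalues."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(class_a), mean_cov(class_b)
    vals, vecs = eigh(Ca, Ca + Cb)               # eigenvalues ascending
    idx = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return vecs[:, idx].T                        # one filter per row

rng = np.random.default_rng(1)
# Two classes of 4-channel trials, variance concentrated on different channels.
a = [rng.normal(0, [3, 1, 1, 1], (200, 4)).T for _ in range(20)]
b = [rng.normal(0, [1, 1, 1, 3], (200, 4)).T for _ in range(20)]
W = csp_filters(a, b)
print(W.shape)  # (2, 4)
```

Projecting trials through the extreme-eigenvalue filters maximizes variance for one class while minimizing it for the other, which is what makes log-variance CSP features discriminative.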
McLaren, Christine E.; Chen, Wen-Pin; Nie, Ke; Su, Min-Ying
2009-01-01
Rationale and Objectives: Dynamic contrast-enhanced MRI (DCE-MRI) is a clinical imaging modality for the detection and diagnosis of breast lesions. Analytical methods were compared for diagnostic feature selection and lesion classification performance in differentiating between malignant and benign lesions. Materials and Methods: The study included 43 malignant and 28 benign histologically proven lesions. Eight morphological parameters, ten gray-level co-occurrence matrix (GLCM) texture features, and fourteen Laws' texture features were obtained using automated lesion segmentation and quantitative feature extraction. Artificial neural network (ANN) and logistic regression analysis were compared for selection of the best predictors of malignant lesions among the normalized features. Results: Using ANN, the final four selected features were compactness, energy, homogeneity, and Law_LS, with an area under the receiver operating characteristic curve (AUC) of 0.82 and accuracy of 0.76. The diagnostic performance of these four features computed on the basis of logistic regression yielded AUC = 0.80 (95% CI, 0.688 to 0.905), similar to that of ANN. The analysis also shows that the odds of a malignant lesion decreased by 48% (95% CI, 25% to 92%) for every increase of 1 SD in the Law_LS feature, adjusted for differences in compactness, energy, and homogeneity. Using logistic regression with z-score transformation, a model comprising compactness, NRL entropy, and gray-level sum average was selected; it had the highest overall accuracy, 0.75, among all models, with AUC = 0.77 (95% CI, 0.660 to 0.880). When logistic modeling of transformations using the Box-Cox method was performed, the most parsimonious model, with predictors compactness and Law_LS, had an AUC of 0.79 (95% CI, 0.672 to 0.898). Conclusion: The diagnostic performance of models selected by ANN and logistic regression was similar.
The analytic methods were found to be roughly equivalent in terms of predictive ability when a small number of variables were chosen. The robust ANN methodology utilizes a sophisticated non-linear model, while logistic regression analysis provides insightful information to enhance interpretation of the model features. PMID:19409817
A parametric ribcage geometry model accounting for variations among the adult population.
Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen
2016-09-06
The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, the sternum, and the thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 landmarks for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064), and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties, and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
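The order-independence idea above, i.e. always performing the globally best merge rather than processing pixels in scan order, can be illustrated with a tiny sketch on a 1-D "image". This is a toy analogue, not the MPP implementation described in the abstract.

```python
# Minimal sketch of order-independent region growing: at every step,
# merge the globally most similar pair of adjacent regions (closest
# mean values), instead of merging in raster-scan order.

def best_merge_segmentation(pixels, max_segments):
    regions = [[p] for p in pixels]          # start: one region per pixel
    mean = lambda r: sum(r) / len(r)
    while len(regions) > max_segments:
        # Globally best merge = adjacent pair with the closest means.
        i = min(range(len(regions) - 1),
                key=lambda j: abs(mean(regions[j]) - mean(regions[j + 1])))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

segs = best_merge_segmentation([1, 1, 2, 9, 9, 10, 30, 31], 3)
print([len(s) for s in segs])  # [3, 3, 2]
```

Because the cheapest merge anywhere in the image is always taken first, the final partition does not depend on where processing "starts", which is the property the abstract highlights.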
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.
Color normalization for robust evaluation of microscopy images
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David
2015-09-01
This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie
2017-10-01
By synergistically using object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution, stand indexes (including diameter at breast height, tree height, and crown closure), and aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating multi-scale image segmentation in the OBIA technique with CART, which connected the image objects at various scales, achieving a good producer's accuracy of 89.1%. The indexes estimated by regression tree models constructed from features extracted from the image objects reached moderate or better accuracy, with the crown closure model achieving the best estimation accuracy of 67.9%. The estimation accuracy for diameter at breast height and tree height was relatively low, consistent with the conclusion that estimating these attributes from optical remote sensing cannot achieve satisfactory results. Estimation of AGC reached relatively high accuracy, exceeding 80% in high-value regions.
Niehues, Stefan M; Unger, J K; Malinowski, M; Neymeyer, J; Hamm, B; Stockmann, M
2010-08-20
Volumetric assessment of the liver regularly yields discrepant results between pre- and intraoperatively determined volumes. Nevertheless, the main factor responsible for this discrepancy still remains unclear. The aim of this study was to systematically determine the difference between in vivo CT-volumetry and ex vivo volumetry in a pig animal model. Eleven pigs were studied. Liver density assessment, CT-volumetry, and water displacement volumetry were performed after surgical removal of the complete liver. Known possible sources of error in volume determination, such as resection or segmentation borders, were eliminated in this model. Regression analysis was performed and differences between CT-volumetry and water displacement volumetry determined. Median liver density was 1.07 g/ml. Regression analysis showed a high correlation of r(2) = 0.985 between CT-volumetry and water displacement. CT-volumetry was found to be 13% higher than water displacement volumetry (p<0.0001). In this study, the only relevant factor explaining the difference between in vivo CT-volumetry and ex vivo water displacement volumetry appears to be blood perfusion of the liver. This systematic difference of 13 percent has to be taken into account when dealing with such measurements.
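The regression relationship described above (CT volumes systematically about 13% above water-displacement volumes, with very high correlation) can be sketched with synthetic data. The volumes and noise level below are invented for illustration; only the 1.13 scale factor mirrors the abstract.

```python
import numpy as np

# Synthetic paired volumes for 11 animals: ex vivo water displacement
# as reference, in vivo CT reading ~13% higher plus small noise.
rng = np.random.default_rng(1)
water = rng.uniform(800, 1600, size=11)          # ml, ex vivo
ct = 1.13 * water + rng.normal(0, 10, size=11)   # ml, in vivo CT

# No-intercept least-squares slope and Pearson correlation.
slope = float(np.sum(ct * water) / np.sum(water * water))
r = float(np.corrcoef(ct, water)[0, 1])
print(slope, r)
```

A slope near 1.13 with r close to 1 reproduces the pattern the study attributes to blood perfusion of the in vivo liver.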
Wei, Zhenbo; Wang, Jun; Ye, Linshuang
2011-08-15
A voltammetric electronic tongue (VE-tongue) was developed to discriminate between Chinese rice wines in this research. Three types of Chinese rice wine with different marked ages (1, 3, and 5 years) were classified by the VE-tongue using principal component analysis (PCA) and cluster analysis (CA). The VE-tongue consisted of six working electrodes (gold, silver, platinum, palladium, tungsten, and titanium) in a standard three-electrode configuration. Multi-frequency large amplitude pulse voltammetry (MLAPV), which consisted of four segments of 1 Hz, 10 Hz, 100 Hz, and 1000 Hz, was applied as the potential waveform. The three types of Chinese rice wine could be classified accurately by PCA and CA, and some interesting regularity is shown in the score plots with the help of PCA. Two regression models, partial least squares (PLS) and back-propagation artificial neural network (BP-ANN), were used for wine age prediction. The regression results showed that the marked ages of the three types of Chinese rice wine were successfully predicted using PLS and BP-ANN. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben
2013-08-01
For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.
Rothenfluh, Dominique A; Mueller, Daniel A; Rothenfluh, Esin; Min, Kan
2015-06-01
Several risk factors and causes of adjacent segment disease have been debated; however, no quantitative relationship to spino-pelvic parameters has been established so far. A retrospective case-control study was carried out to investigate spino-pelvic alignment in patients with adjacent segment disease compared to a control group. 45 patients (ASDis) were identified who underwent revision surgery for adjacent segment disease after on average 49 months (7-125); 39 patients were selected as a control group (CTRL), similar in the distribution of matching variables such as age, gender, preoperative degenerative changes, and number of segments fused, with a mean follow-up of 84 months (61-142) (total n = 84). Several radiographic parameters were measured on pre- and postoperative radiographs, including lumbar lordosis (LL), sacral slope, pelvic incidence (PI), and pelvic tilt. Significant differences between the ASDis and CTRL groups on preoperative radiographs were seen for PI (60.9 ± 10.0° vs. 51.7 ± 10.4°, p = 0.001) and LL (48.1 ± 12.5° vs. 53.8 ± 10.8°, p = 0.012). Pelvic incidence was put into relation to lumbar lordosis by calculating the difference between pelvic incidence and lumbar lordosis (∆PILL = PI - LL; ASDis 12.5 ± 16.7° vs. CTRL 3.4 ± 12.1°, p = 0.001). A cutoff value of 9.8° was determined by logistic regression and ROC analysis, and patients were classified into a type A (∆PILL <10°) and a type B (∆PILL ≥10°) alignment according to pelvic incidence-lumbar lordosis mismatch. In type A spino-pelvic alignment, 25.5% of patients underwent revision surgery for adjacent segment disease, whereas 78.3% of patients classified as type B alignment had revision surgery. Classification of patients into type A and B alignments yields a sensitivity for predicting adjacent segment disease of 71%, a specificity of 81%, and an odds ratio of 10.6.
In degenerative disease of the lumbar spine a high pelvic incidence with diminished lumbar lordosis seems to predispose to adjacent segment disease. Patients with such pelvic incidence-lumbar lordosis mismatch exhibit a 10-times higher risk for undergoing revision surgery than controls if sagittal malalignment is maintained after lumbar fusion surgery.
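The type A/B classification above reduces to a simple threshold on the pelvic incidence-lumbar lordosis mismatch. A minimal sketch, using the 10° cutoff derived from the reported ROC threshold of 9.8°:

```python
# Type A/B spino-pelvic alignment from the PI-LL mismatch.

def pi_ll_type(pelvic_incidence, lumbar_lordosis, cutoff=10.0):
    mismatch = pelvic_incidence - lumbar_lordosis   # delta PI-LL, degrees
    return "B" if mismatch >= cutoff else "A"

# Group means from the abstract: ASDis 60.9/48.1 -> type B,
# controls 51.7/53.8 -> type A.
print(pi_ll_type(60.9, 48.1), pi_ll_type(51.7, 53.8))  # B A
```

Applying this to the group means recovers the expected labels; individual patients of course scatter around these means.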
Takase, Bonpei; Masaki, Nobuyuki; Hattori, Hidemi; Ishihara, Masayuki; Kurita, Akira
2009-06-01
The electrocardiographic index of QT dispersion (QTd) is related to the occurrence of arrhythmia. In patients with suspected or known coronary artery disease, QTd may be affected by exercise. We investigated whether QTd that is automatically calculated by a newly developed computer system could be used as a marker of exercise-induced myocardial ischemia. The design of this study was prospective and observational. Eighty-three consecutive patients were enrolled in this study. Their QTd was measured at rest and after 3 min of exercise during exercise-stress Thallium-201 scintigraphy and compared with conventional ST-segment changes. The patients were classified into 4 groups (normal group, redistribution group, fixed defect group, redistribution with fixed defect group) based on the result of single photon emission computed tomography. For statistical analysis, one-way ANOVA with post-hoc Scheffé tests, receiver-operating characteristic (ROC) analysis, and multiple logistic regression analysis were performed. At rest, QTd was significantly greater (p<0.05) in the fixed defect group (52+/-21 ms) and the redistribution with fixed defect group (53+/-20 ms) than in the normal group (32+/-14 ms) and the redistribution group (31+/-16 ms). However, QTd tended to increase after exercise in the redistribution group, while QTd tended to decrease in the normal group, the fixed defect group, and the redistribution with fixed defect group (QTd after exercise: normal group, 28+/-17 ms; redistribution group, 35+/-19 ms; fixed defect group, 43+/-25 ms; redistribution with fixed defect group, 49+/-27 ms). Exercise significantly increased QTcd (RR interval-corrected QT dispersion) in the redistribution group. The best cut-off values of QTd and QTcd obtained from ROC curves for exercise-induced myocardial ischemia were 41.6 ms and 40.4 ms, respectively (QTd: AUC 0.68, 95% CI 0.53-0.83; QTcd: AUC 0.67, 95% CI 0.55-0.80).
Using these cut-off values, QTd, QTcd, and conventional ST-segment change had comparable sensitivities and specificities for detecting exercise-induced myocardial ischemia (sensitivity: 60%, 58%, and 49%, respectively; specificity: 78%, 80%, and 83%, respectively). In addition, multiple logistic regression analysis showed that QTd (OR=2.01, 95% CI 1.15-4.10, p<0.05), QTcd (OR=2.12, 95% CI 1.02-4.30, p<0.05), and ST-segment change (OR=1.89, 95% CI 1.03-3.40, p<0.05) were significantly associated with exercise-induced myocardial ischemia. QT dispersion and/or QTcd after exercise could be a useful marker for exercise-induced myocardial ischemia in routine clinical practice.
Choi, Yeon-Ju; Son, Wonsoo; Park, Ki-Su
2016-01-01
Objective This study used the intradural procedural time to assess the overall technical difficulty involved in surgically clipping an unruptured middle cerebral artery (MCA) aneurysm via a pterional or superciliary approach. The clinical and radiological variables affecting the intradural procedural time were investigated, and the intradural procedural time was compared between a superciliary keyhole approach and a pterional approach. Methods During a 5.5-year period, patients with a single MCA aneurysm were enrolled in this retrospective study. The selection criteria for a superciliary keyhole approach included: 1) maximum diameter of the unruptured MCA aneurysm <15 mm, 2) neck diameter of the MCA aneurysm <10 mm, and 3) aneurysm location involving the sphenoidal or horizontal (M1) segment of the MCA and the MCA bifurcation, excluding aneurysms distal to the MCA genu. Meanwhile, the control comparison group included patients with the same selection criteria as for a superciliary approach, yet who preferred a pterional approach to avoid a postoperative facial wound or due to preoperative skin trouble in the supraorbital area. To determine the variables affecting the intradural procedural time, a multiple regression analysis was performed using such data as the patient age and gender, maximum aneurysm diameter, aneurysm neck diameter, and length of the pre-aneurysm M1 segment. In addition, the intradural procedural times were compared between the superciliary and pterional patient groups, along with the other variables. Results A total of 160 patients underwent a superciliary (n=124) or pterional (n=36) approach for an unruptured MCA aneurysm. In the multiple regression analysis, an increase in the diameter of the aneurysm neck (p<0.001) was identified as a statistically significant factor increasing the intradural procedural time. A Pearson correlation analysis also showed a positive correlation (r=0.340) between the neck diameter and the intradural procedural time.
When comparing the superciliary and pterional groups, no statistically significant between-group difference was found in terms of the intradural procedural time reflecting the technical difficulty (mean±standard deviation: 29.8±13.0 min versus 27.7±9.6 min). Conclusion A superciliary keyhole approach can be a useful alternative to a pterional approach for an unruptured MCA aneurysm with a maximum diameter <15 mm and neck diameter <10 mm, and represents no greater technical challenge. For both surgical approaches, the technical difficulty increases along with the neck diameter of the MCA aneurysm. PMID:27847568
Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.
Sharma, Shefali; Calixte, Rose; Finamore, Peter S
2016-01-01
To determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. A retrospective cohort study to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy (Canadian Task Force II-2). A university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from a patient database that has been maintained since the inception of the robotics program at Winthrop University Hospital and electronic medical records. Based on additional procedures performed, 4 groups of patients were created (A-D). Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve that revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. 
When the data for operative time were fitted to an inverse curve, a learning plateau at 11 cases was observed. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which is reflective of anesthesia and nursing staff, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.
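The inverse-curve analysis mentioned above models operative time as t(n) = a + b/n and asks when the surgeon has reached 90% of the learning plateau, i.e. when the excess time b/n has fallen to 10% of its initial (n = 1) value, which happens at n = 10 cases regardless of a and b. A sketch on synthetic data (the coefficients 120 and 300 and the noise level are assumptions, not the study's values):

```python
import numpy as np

# Synthetic learning-curve data: t(n) = a + b/n plus noise.
rng = np.random.default_rng(2)
cases = np.arange(1, 61)
times = 120 + 300 / cases + rng.normal(0, 3, size=60)

# Least-squares fit of t(n) = a + b/n via regression on 1/n.
A = np.column_stack([np.ones_like(cases, dtype=float), 1.0 / cases])
coef, *_ = np.linalg.lstsq(A, times, rcond=None)
a, b = coef

# Excess time b/n falls to 10% of its n = 1 value at n = b/(0.1*b) = 10.
n90 = int(np.ceil(b / (0.1 * b)))
print(round(a), round(b), n90)
```

The study's reported value of 11 cases differs from this idealized 10 because the fitted curve is compared against the observed data range rather than the asymptote alone; the sketch shows only the mechanics of the fit.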
USDA-ARS?s Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
NASA Astrophysics Data System (ADS)
Jensen, Robert K.; Fletcher, P.; Abraham, C.
1991-04-01
The segment mass proportions and moments of inertia of a sample of twelve females and seven males with mean ages of 67.4 and 69.5 years were estimated using textbook proportions based on cadaver studies. These were then compared with the parameters calculated using a mathematical model, the zone method. The methodology of the model was fully evaluated for accuracy and precision and judged to be adequate. The results of the comparisons show that for some segments female parameters are quite different from male parameters and inadequately predicted by the cadaver proportions. The largest discrepancies were for the thigh and the trunk. The cadaver predictions were generally less than satisfactory, although the common variance for some segments was moderately high. The use of non-linear regression and segment anthropometry was illustrated for the thigh moments of inertia and appears to be appropriate. However, the predictions from cadaver data need to be examined fully. These results are dependent on the changes in mass and density distribution which occur with aging and the changes which occur with cadaver samples prior to and following death.
Automatic tracking of labeled red blood cells in microchannels.
Pinho, Diana; Lima, Rui; Pereira, Ana I; Gayubo, Fernando
2013-09-01
The current study proposes an automatic method for the segmentation and tracking of red blood cells flowing through a 100-μm glass capillary. The original images were obtained by means of a confocal system and then processed in MATLAB using the Image Processing Toolbox. The measurements obtained with the proposed automatic method were compared with the results determined by a manual tracking method. The comparison was performed using both linear regression and Bland-Altman analysis. The results have shown a good agreement between the two methods. Therefore, the proposed automatic method is a powerful way to provide rapid and accurate measurements for in vitro blood experiments in microchannels. Copyright © 2012 John Wiley & Sons, Ltd.
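A Bland-Altman analysis like the one used above compares two measurement methods via the mean of their differences (bias) and the 95% limits of agreement. A minimal sketch with invented paired measurements (not the study's data):

```python
import numpy as np

# Paired measurements from an "automatic" and a "manual" method
# (synthetic values; 95% limits of agreement = bias +/- 1.96 SD).
auto = np.array([10.2, 11.1, 9.8, 10.5, 10.9, 11.4, 9.6, 10.0])
manual = np.array([10.0, 11.0, 10.0, 10.4, 11.0, 11.2, 9.7, 10.1])

diff = auto - manual
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # limits of agreement
within = float(np.mean((diff >= loa[0]) & (diff <= loa[1])))
print(bias, loa, within)
```

Good agreement corresponds to a bias near zero and narrow limits that contain nearly all of the differences.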
Nonlinear estimation of parameters in biphasic Arrhenius plots.
Puterman, M L; Hrboticky, N; Innis, S M
1988-05-01
This paper presents a formal procedure for the statistical analysis of data on the thermotropic behavior of membrane-bound enzymes generated using the Arrhenius equation and compares the analysis to several alternatives. Data is modeled by a bent hyperbola. Nonlinear regression is used to obtain estimates and standard errors of the intersection of line segments, defined as the transition temperature, and slopes, defined as energies of activation of the enzyme reaction. The methodology allows formal tests of the adequacy of a biphasic model rather than either a single straight line or a curvilinear model. Examples on data concerning the thermotropic behavior of pig brain synaptosomal acetylcholinesterase are given. The data support the biphasic temperature dependence of this enzyme. The methodology represents a formal procedure for statistical validation of any biphasic data and allows for calculation of all line parameters with estimates of precision.
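The bent-hyperbola idea above can be approximated by its limiting case, a continuous two-segment line with an unknown breakpoint, fitted by least squares with the breakpoint found by grid search. The data below are synthetic and the grid-search fit is a simplification of the paper's full nonlinear-regression procedure (which also yields standard errors):

```python
import numpy as np

# Synthetic biphasic Arrhenius data: ln(rate) vs 1000/T with a slope
# change (transition) at bp_true. Values are illustrative only.
rng = np.random.default_rng(3)
x = np.linspace(3.2, 3.7, 40)
bp_true, s1, s2, c = 3.45, -8.0, -2.0, 30.0
y = c + s1 * np.minimum(x - bp_true, 0) + s2 * np.maximum(x - bp_true, 0)
y = y + rng.normal(0, 0.02, size=x.size)

def fit_breakpoint(x, y, grid):
    best = None
    for bp in grid:
        # Design matrix for a continuous two-slope ("bent") line.
        A = np.column_stack([np.ones_like(x),
                             np.minimum(x - bp, 0),
                             np.maximum(x - bp, 0)])
        coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(res[0]) if res.size else float(((A @ coef - y) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, bp, coef)
    return best[1], best[2]

bp, (intercept, slope1, slope2) = fit_breakpoint(x, y, np.linspace(3.3, 3.6, 61))
print(bp, slope1, slope2)
```

The estimated breakpoint corresponds to the transition temperature and the two slopes to the activation energies on either side of it; a formal test of biphasic versus single-line behavior would compare this fit's residuals against the one-segment fit, as the paper describes.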
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.
2006-01-01
The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful in analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.
The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System
NASA Technical Reports Server (NTRS)
Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim
2008-01-01
Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
Structural analysis of vibroacoustical processes
NASA Technical Reports Server (NTRS)
Gromov, A. P.; Myasnikov, L. L.; Myasnikova, Y. N.; Finagin, B. A.
1973-01-01
The method of automatic identification of acoustical signals by means of segmentation was used to investigate noises and vibrations in machines and mechanisms for cybernetic diagnostics. The structural analysis consists of presenting a noise or vibroacoustical signal as a sequence of segments, determined by time quantization, in which each segment is characterized by specific spectral characteristics. The structural spectrum is plotted as a histogram of the segments, i.e., as the probability density of a segment's appearance as a function of segment type. It is assumed that the conditions of ergodic processes are maintained.
Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.
2013-01-01
External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR images from six patients, each patient having six T2-weighted MR cervical images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. PMID:22328178
Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie
2018-05-01
In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models, and then generate a "volume-phase" curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than the conventional approaches that calculate metrics by area, and allows for the quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into the individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
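The "volume-phase" metrics described above derive directly from the per-phase volume curve: the ejection fraction by volume is (Vmax - Vmin)/Vmax. A minimal sketch with invented phase volumes (not from the study's 4D-CT data):

```python
# Dynamic LAA metrics from a per-phase volume curve
# (synthetic volumes in ml across one cardiac cycle).

def laa_metrics(volumes):
    vmax, vmin = max(volumes), min(volumes)
    ejection_fraction = (vmax - vmin) / vmax
    return vmax, vmin, round(ejection_fraction, 3)

phase_volumes = [8.0, 9.5, 10.0, 7.5, 5.0, 6.0, 8.5, 9.0]
print(laa_metrics(phase_volumes))  # (10.0, 5.0, 0.5)
```

Filling and emptying fluxes would come from the phase-to-phase volume differences on the same curve; computing by volume rather than by cross-sectional area is the refinement the abstract emphasizes.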
ARCOCT: Automatic detection of lumen border in intravascular OCT images.
Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos
2017-11-01
Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter and centroid) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method was proven very efficient and close to the ground-truth, exhibiting statistically non-significant differences for most of the examined metrics.
ARCOCT allows accurate and fully-automated lumen border detection in OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.
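The contour-refinement step above — local regression using weighted linear least squares and a 2nd-degree polynomial — can be sketched for a 1-D radius profile as follows. The tricube weighting and the `span` window width are assumptions of this sketch (a standard loess-style choice), not details taken from ARCOCT.

```python
import numpy as np

def loess_quadratic(y, span=7):
    """Smooth a 1-D profile (e.g. lumen radius per angle) with local
    weighted linear least squares and a 2nd-degree polynomial.
    span is the sliding window width (illustrative choice)."""
    y = np.asarray(y, dtype=float)
    n, half = len(y), span // 2
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        x = np.arange(lo, hi)
        d = np.abs(x - i) / max(np.abs(x - i).max(), 1)
        w = (1 - d**3) ** 3                       # tricube distance weights
        # np.polyfit minimizes sum((w_i * r_i)^2), so pass sqrt of the weights
        coeffs = np.polyfit(x, y[lo:hi], 2, w=np.sqrt(w))
        out[i] = np.polyval(coeffs, i)
    return out
```

Because each local fit is quadratic, a profile that is already quadratic passes through unchanged, while high-frequency artifacts (small branches, speckle) are averaged out.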
Miyagi, Atsushi
2017-09-01
Detailed exploration of sensory perception as well as preference across gender and age for a certain food is very useful for developing a vendible food commodity related to physiological and psychological motivation for food preference. Sensory tests including color, sweetness, bitterness, fried peanut aroma, textural preference and overall liking of deep-fried peanuts with varying frying time (2, 4, 6, 9, 12 and 15 min) at 150 °C were carried out using 417 healthy Japanese consumers. To determine the influence of gender and age on sensory evaluation, systematic statistical analysis including one-way analysis of variance, polynomial regression analysis and multiple regression analysis was conducted using the collected data. The results indicated that females were more sensitive to bitterness than males. This may affect sensory preference; female subjects favored peanuts prepared with a shorter frying time more than male subjects did. With advancing age, textural preference played a more important role in overall preference. Older subjects liked deeper-fried peanuts, which are more brittle, more than younger subjects did. In the present study, systematic statistical analysis based on collected sensory evaluation data using deep-fried peanuts was conducted and the tendency of sensory perception and preference across gender and age was clarified. These results may be useful for engineering optimal strategies to target specific segments to gain greater acceptance in the market. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.
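The one-way analysis of variance used above (e.g. comparing bitterness sensitivity between genders) reduces to a standard F-statistic computation. The ratings below are invented for illustration; this is a minimal sketch, not the authors' analysis pipeline.

```python
import numpy as np

def one_way_anova(*groups):
    """One-way ANOVA F statistic across groups: ratio of between-group
    to within-group mean squares."""
    all_obs = np.concatenate([np.asarray(g, float) for g in groups])
    grand = all_obs.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_obs) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Hypothetical bitterness ratings (1-7 scale) for two gender groups
male = [3, 2, 4, 3, 2, 3, 4]
female = [5, 4, 6, 5, 4, 5, 6]
f_stat = one_way_anova(male, female)
```

For these invented ratings the F statistic is 21 on (1, 12) degrees of freedom, comfortably significant at the usual levels.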
Awad, Joseph; Owrangi, Amir; Villemaire, Lauren; O'Riordan, Elaine; Parraga, Grace; Fenster, Aaron
2012-02-01
Manual segmentation of lung tumors is observer dependent and time-consuming but an important component of radiology and radiation oncology workflow. The objective of this study was to generate an automated lung tumor measurement tool for segmentation of pulmonary metastatic tumors from x-ray computed tomography (CT) images to improve reproducibility and decrease the time required to segment tumor boundaries. The authors developed an automated lung tumor segmentation algorithm for volumetric image analysis of chest CT images using shape constrained Otsu multithresholding (SCOMT) and sparse field active surface (SFAS) algorithms. The observer was required to select the tumor center and the SCOMT algorithm subsequently created an initial surface that was deformed using level set SFAS to minimize the total energy consisting of mean separation, edge, partial volume, rolling, distribution, background, shape, volume, smoothness, and curvature energies. The proposed segmentation algorithm was compared to manual segmentation whereby 21 tumors were evaluated using one-dimensional (1D) response evaluation criteria in solid tumors (RECIST), two-dimensional (2D) World Health Organization (WHO), and 3D volume measurements. Linear regression goodness-of-fit measures (r² = 0.63, p < 0.0001; r² = 0.87, p < 0.0001; and r² = 0.96, p < 0.0001), and Pearson correlation coefficients (r = 0.79, p < 0.0001; r = 0.93, p < 0.0001; and r = 0.98, p < 0.0001) for 1D, 2D, and 3D measurements, respectively, showed significant correlations between manual and algorithm results. Intra-observer intraclass correlation coefficients (ICC) demonstrated high reproducibility for algorithm (0.989-0.995, 0.996-0.997, and 0.999-0.999) and manual measurements (0.975-0.993, 0.985-0.993, and 0.980-0.992) for 1D, 2D, and 3D measurements, respectively.
The intra-observer coefficient of variation (CV%) was low for algorithm (3.09%-4.67%, 4.85%-5.84%, and 5.65%-5.88%) and manual observers (4.20%-6.61%, 8.14%-9.57%, and 14.57%-21.61%) for 1D, 2D, and 3D measurements, respectively. The authors developed an automated segmentation algorithm requiring only that the operator select the tumor to measure pulmonary metastatic tumors in 1D, 2D, and 3D. Algorithm and manual measurements were significantly correlated. Since the algorithm segmentation involves selection of a single seed point, it resulted in reduced intra-observer variability and decreased time, for making the measurements.
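The agreement analysis above (linear-regression r² and Pearson r between manual and algorithm measurements) reduces to a correlation computation; the measurement values below are hypothetical and only show the mechanics.

```python
import numpy as np

def agreement_stats(manual, algorithm):
    """Pearson r and simple-linear-regression r-squared between paired
    manual and algorithm tumor measurements."""
    m, a = np.asarray(manual, float), np.asarray(algorithm, float)
    r = np.corrcoef(m, a)[0, 1]   # Pearson correlation coefficient
    return r, r**2                # for simple linear regression, r^2 = r squared

# Hypothetical paired measurements (e.g. 1D diameters in cm)
manual = [1.2, 2.5, 3.1, 4.8, 5.0]
algo = [1.1, 2.7, 3.0, 4.9, 5.2]
r, r2 = agreement_stats(manual, algo)
```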
The effects of new pricing and copayment schemes for pharmaceuticals in South Korea.
Lee, Iyn-Hyang; Bloor, Karen; Hewitt, Catherine; Maynard, Alan
2012-01-01
This study examined the effect of new Korean pricing and copayment schemes for pharmaceuticals (1) on per patient drug expenditure, utilisation and unit prices of overall pharmaceuticals; (2) on the utilisation of essential medications and (3) on the utilisation of less costly alternatives to the study medication. Interrupted time series analysis using retrospective observational data. The increasing trend of per patient drug expenditure fell gradually after the introduction of a new copayment scheme. The segmented regression model suggested that per patient drug expenditure might decrease by about 12% 1 year after the copayment increase, compared with the absence of such a policy, with few changes in overall utilisation and unit prices. The level of savings was much smaller when the new price scheme was included, while the effects of a price cut were inconclusive due to the short time period before an additional policy change. Based on the segmented regression models, we estimate that the number of patients filling their antihyperlipidemics prescriptions decreased by 18% in the corresponding period. Those prescribed generic and brand-named antihyperlipidemics declined by around 16 and 19%, respectively, indicating little evidence of generic substitution resulting from the copayment increase. Few changes were found in the use of antihypertensives. The policies under consideration appear to contain costs not by the intended mechanisms, such as substituting generics for brand name products, but by reducing patients' access to costly therapies regardless of clinical necessity. Thus, concerns were raised about potentially compromising overall health and loss of equity in pharmaceutical utilisation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
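The segmented regression models used in this interrupted time series analysis follow the standard parameterization: an intercept, a pre-intervention slope, a level change at the intervention, and a slope change afterwards. A minimal OLS sketch is below; real ITS analyses, including this study's, must also address autocorrelation and concurrent policy changes.

```python
import numpy as np

def its_segmented_regression(y, t0):
    """Fit the basic interrupted time series model
        y = b0 + b1*time + b2*post + b3*time_since_intervention
    where b2 is the level change and b3 the slope change at time t0.
    Returns [b0, b1, b2, b3] from ordinary least squares."""
    n = len(y)
    time = np.arange(n, dtype=float)
    post = (time >= t0).astype(float)            # 1 after the intervention
    t_since = np.where(time >= t0, time - t0, 0.0)
    X = np.column_stack([np.ones(n), time, post, t_since])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta
```

With noiseless data generated from known parameters, the fit recovers them exactly, which makes the interpretation of b2 (immediate shift) and b3 (trend change) concrete.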
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently using two different algorithms, EMGlab and dEMG. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
Bonander, Carl; Gustavsson, Johanna; Nilson, Finn
2016-12-01
Fall-related injuries are a global public health problem, especially in elderly populations. The effect of an intervention aimed at reducing the risk of falls in the homes of community-dwelling elderly persons was evaluated. The intervention mainly involves the performance of complicated tasks and hazards assessment by a trained assessor, and has been adopted gradually over the last decade by 191 of 290 Swedish municipalities. A quasi-experimental design was used where intention-to-treat effect estimates were derived using panel regression analysis and a regression discontinuity (RD) design. The outcome measure was the incidence of fall-related hospitalisations in the treatment population, the age of which varied by municipality (≥65 years, ≥67 years, ≥70 years or ≥75 years). We found no statistically significant reductions in injury incidence in the panel regression (IRR 1.01 (95% CI 0.98 to 1.05)) or RD (IRR 1.00 (95% CI 0.97 to 1.03)) analyses. The results are robust to several different model specifications, including segmented panel regression analysis with linear trend change and community fixed effects parameters. It is unclear whether the absence of an effect is due to a low efficacy of the services provided, or a result of low adherence. Additional studies of the effects on other quality-of-life measures are recommended before conclusions are drawn regarding the cost-effectiveness of the provision of home help service programmes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
WE-E-17A-06: Assessing the Scale of Tumor Heterogeneity by Complete Hierarchical Segmentation On MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gensheimer, M; Trister, A; Ermoian, R
2014-06-15
Purpose: In many cancers, intratumoral heterogeneity exists in vascular and genetic structure. We developed an algorithm which uses clinical imaging to interrogate different scales of heterogeneity. We hypothesize that heterogeneity of perfusion at large distance scales may correlate with propensity for disease recurrence. We applied the algorithm to initial diagnosis MRI of rhabdomyosarcoma patients to predict recurrence. Methods: The Spatial Heterogeneity Analysis by Recursive Partitioning (SHARP) algorithm recursively segments the tumor image. The tumor is repeatedly subdivided, with each dividing line chosen to maximize signal intensity difference between the two subregions. This process continues to the voxel level, producing segments at multiple scales. Heterogeneity is measured by comparing signal intensity histograms between each segmented region and the adjacent region. We measured the scales of contrast enhancement heterogeneity of the primary tumor in 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival (RFS). To compare with existing methods, fractal and Haralick texture features were also calculated. Results: The complete segmentation produced by SHARP allows extraction of diverse features, including the amount of heterogeneity at various distance scales, the area of the tumor with the most heterogeneity at each scale, and for a given point in the tumor, the heterogeneity at different scales. 10/18 rhabdomyosarcoma patients suffered disease recurrence. On contrast-enhanced MRI, larger scale of maximum signal intensity heterogeneity, relative to tumor diameter, predicted for shorter RFS (p=0.05). Fractal dimension, fractal fit, and three Haralick features did not predict RFS (p=0.09-0.90). Conclusion: SHARP produces an automatic segmentation of tumor regions and reports the amount of heterogeneity at various distance scales.
In rhabdomyosarcoma, RFS was shorter when the primary tumor exhibited larger scale of heterogeneity on contrast-enhanced MRI. If validated on a larger dataset, this imaging biomarker could be useful to help personalize treatment.
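The recursive splitting idea behind SHARP can be illustrated in one dimension: repeatedly divide a signal at the point that maximizes the mean-intensity difference between the two resulting sub-regions, collecting segments at every scale. This is a didactic 1-D analogue under simplifying assumptions, not the published 2-D/3-D algorithm.

```python
import numpy as np

def best_split(s, min_len):
    """Index maximizing the mean-intensity gap between the two halves."""
    cands = range(min_len, len(s) - min_len + 1)
    return max(cands, key=lambda i: abs(s[:i].mean() - s[i:].mean()))

def recursive_segments(s, min_len=2):
    """Recursively subdivide a 1-D intensity profile, returning the list
    of segments produced at all scales (coarse to fine)."""
    s = np.asarray(s, dtype=float)
    if len(s) < 2 * min_len:
        return [s]
    i = best_split(s, min_len)
    return [s] + recursive_segments(s[:i], min_len) + recursive_segments(s[i:], min_len)
```

On a step signal the first split lands exactly at the step, and the recursion then subdivides each flat region, mirroring how the full algorithm exposes heterogeneity at successively smaller scales.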
Verani, M S; Taillefer, R; Iskandrian, A E; Mahmarian, J J; He, Z X; Orlandi, C
2000-08-01
Fatty acids are the prime metabolic substrate for myocardial energy production. Hence, fatty acid imaging may be useful in the assessment of myocardial hibernation. The goal of this prospective, multicenter trial was to assess the use of a fatty acid, 123I-iodophenylpentadecanoic acid (IPPA), to identify viable, hibernating myocardium. Patients (n = 119) with abnormal left ventricular wall motion and a left ventricular ejection fraction (LVEF) < 40% who were already scheduled to undergo coronary artery bypass grafting (CABG) underwent IPPA tomography (rest and 30-min redistribution) and blood-pool radionuclide angiography within 3 d of the scheduled operation. Radionuclide angiography was repeated 6-8 wk after CABG. The study endpoint was a ≥10% increase in LVEF after CABG. The number of IPPA-viable abnormally contracting segments necessary to predict a positive LVEF outcome was determined by receiver operating characteristic (ROC) curves and was included in a logistic regression analysis, together with selected clinical variables. Before CABG, abnormal IPPA tomography findings were seen in 113 of 119 patients (95%), of whom 71 (60%) had redistribution in the 30-min images. The LVEF increased modestly after CABG (from 32% +/- 12% to 36% +/- 8%, P < 0.001). A ≥10% increase in LVEF after CABG occurred in 27 of 119 patients (23%). By ROC curves, the best predictor of a ≥10% increase in LVEF was the presence of ≥7 IPPA-viable segments (accuracy, 72%; confidence interval, 64%-80%). Among clinical and scintigraphic variables, the single most important predictor also was the number of IPPA-viable segments (P = 0.008). The number of IPPA-viable segments added significant incremental value to the best clinical predictor model. A substantial increase in LVEF occurs after CABG in only a minority of patients (23%) with depressed preoperative function. The number of IPPA-viable segments is useful in predicting a clinically meaningful increase in LVEF.
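Choosing the cutoff on the number of viable segments, as the ROC analysis above does, amounts to scanning candidate thresholds and scoring each against the outcome. The patient data below are invented for illustration, not the trial's.

```python
def best_cutoff(n_viable, improved):
    """Scan cutoffs on the viable-segment count and return the one that
    maximizes accuracy for predicting the binary outcome (e.g. a >=10%
    LVEF increase). Returns (cutoff, accuracy)."""
    best = None
    for t in range(min(n_viable), max(n_viable) + 1):
        pred = [v >= t for v in n_viable]                       # predict improvement
        acc = sum(p == o for p, o in zip(pred, improved)) / len(improved)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best
```

A full ROC analysis would additionally trace sensitivity against 1 − specificity across all cutoffs; accuracy is used here only to keep the sketch short.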
2006-10-01
…lead to false positive segmental hair analysis results. Due to the increased risk of false positives associated with segmental hair analysis … to 200 mg of hair (to allow confirmation testing). The segments are typically washed to remove external contaminants and the chemicals in the hair … further confirmation. The method overcomes the false positives associated with traditional segmental hair analysis. By measuring the …
Cazalas, G; Sarran, A; Amabile, N; Chaumoitre, K; Marciano-Chagnaud, S; Jacquier, A; Paganelli, F; Panuel, M
2009-09-01
To determine the accuracy of 64 MDCT coronary CTA (CCTA) compared to coronary angiography in low risk patients with stable angina and acute coronary syndrome and determine the number of significant coronary artery stenoses (≥50%) in these patients. Materials and methods. Fifty-five patients underwent CCTA using a 32 MDCT unit with z flying focus allowing the acquisition of 64 slices of 0.6 mm thickness as well as coronary angiography (gold standard). Nine patients were excluded due to prior coronary artery bypass surgery (n=4), insufficient breath hold (n=3), calcium scoring >1000 (n=1) and delay between both examinations over 4 months (n=1). Forty-six patients (27 males and 19 females) were included. CCTA results were compared to coronary angiography per segment and artery with threshold detection of stenoses ≥50%. The degree of correlation between both examinations was assessed using a regression analysis with a Pearson correlation coefficient (p<0.05 considered significant). The overall accuracy of CCTA was 90%; limitations related to the presence of calcifications, motion artifacts or insufficient vessel opacification. The correlation for all analyzed segments was 96.4%. Thirty-eight of 50 significant stenoses seen on coronary angiography were correctly detected on CCTA. Sensitivity, specificity, PPV and NPV for detection of stenoses ≥50% were 76%, 98.3%, 80.3% and 97.7% respectively. Evaluation per segment had a NPV of 96.8% (interventricular and diagonal segments) to 100% (main trunk). Our results for specificity and NPV are similar to reports from the literature. This suggests that CCTA in this clinical setting may replace coronary angiography.
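The per-segment performance measures reported above come from standard confusion-matrix formulas, which can be sketched directly:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts
    (test result, e.g. CCTA, versus gold standard, e.g. angiography)."""
    sens = tp / (tp + fn)   # detected stenoses / all true stenoses
    spec = tn / (tn + fp)   # correctly negative / all true negatives
    ppv = tp / (tp + fp)    # true positives / all positive calls
    npv = tn / (tn + fn)    # true negatives / all negative calls
    return sens, spec, ppv, npv
```

For example, the abstract's 38 of 50 detected stenoses gives a sensitivity of 38/(38+12) = 0.76, matching the reported 76%; the false positive and true negative counts used alongside it in any worked example would be hypothetical.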
Estimating average annual per cent change in trend analysis
Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K
2009-01-01
Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate—that is it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the segmented regression (free) software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
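The proposed AAPC is a length-weighted average of the segment-specific slopes from the joinpoint fit on the log scale, transformed back to a per cent change. A minimal sketch:

```python
import math

def aapc(segment_slopes, segment_lengths):
    """Average annual per cent change from segmented (joinpoint) fits.

    segment_slopes:  slopes b_i of log(rate) vs. year within each segment
                     (each sAPC = 100*(exp(b_i) - 1)).
    segment_lengths: years spanned by each segment (the weights w_i).
    Returns 100 * (exp(sum(w_i * b_i) / sum(w_i)) - 1).
    """
    num = sum(w * b for b, w in zip(segment_slopes, segment_lengths))
    return 100 * (math.exp(num / sum(segment_lengths)) - 1)
```

When every segment shares the same slope, the weighted average collapses to that slope, so the AAPC reduces to both the cAPC and the sAPC, exactly as the abstract states.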
2012-01-01
Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we performed 3,461 surveys of outpatient service users. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policy and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services.
Their knowledge and analysis might support an effort to build an effective population-based medicine approach. PMID:23256543
Brain tumor segmentation based on local independent projection-based classification.
Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin
2014-10-01
Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem. Additionally, the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC. Locality is also considered in determining whether local anchor embedding is more applicable in solving linear projection weights compared with other coding methods. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to other state-of-the-art methods.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes which represent underlying geologic structure. Point data such as fracture phenomena which can be related to fracture planes in 3-dimensional space can be analyzed to define common plane orientation and locations. The vectors, points, and planes are displayed in various formats for interpretation.
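The coplanar analysis step rests on elementary vector geometry: two non-parallel valley-segment vectors define a candidate structural plane whose orientation is given by their cross product. A minimal sketch (the tolerance value is an arbitrary choice of this sketch):

```python
import numpy as np

def plane_from_vectors(v1, v2, tol=1e-6):
    """Unit normal of the plane spanned by two valley-segment vectors,
    assumed to lie in a common structural plane. Returns None when the
    vectors are (nearly) parallel and therefore define no unique plane."""
    n = np.cross(v1, v2)
    norm = np.linalg.norm(n)
    if norm < tol:
        return None          # parallel vectors: plane orientation undefined
    return n / norm          # unit normal encodes the plane's orientation
```

Comparing unit normals across many vector pairs (e.g. by angular distance) is one way such a method could group segments into common plane orientations.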
NASA Astrophysics Data System (ADS)
Whitney, Dwight E.
The influence of learning in the form of past relevant experience was examined in data collected for strategic ballistic missiles developed by the United States. A total of twenty-four new missiles were developed and entered service between 1954 and 1990. Missile development costs were collected and analyzed by regression analysis using the learning curve model with factors for past experience and other relevant cost estimating relationships. The purpose of the study was to determine whether prior development experience was a factor in the development cost of these like systems. Of the twenty-four missiles in the population, development costs for twelve of the missiles were collected from the literature. Since the costs were found to be segmented by military service, a discrete input variable for military service was used as one of the cost estimating relationships. Because there were only two US Navy samples, too few to analyze for segmentation and learning rate, they were excluded from the final analysis. The final analysis was on a sample of ten out of eighteen US Army and US Air Force missiles within the population. The result of the analysis found past experience to be a statistically significant factor in describing the development cost of the US Army and US Air Force missiles. The influence equated to a 0.86 progress ratio, indicating prior development experience had a positive (cost-reducing) influence on their development cost. Based on the result, it was concluded that prior development experience was a factor in the development cost of these systems.
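The learning-curve model with a progress ratio can be fitted by ordinary least squares in log-log space. This sketch uses the usual unit-cost formulation cost = A · experience^b, where a progress ratio of 0.86 corresponds to b = log2(0.86), i.e. a 14% cost reduction per doubling of prior experience; it omits the study's additional cost estimating relationships such as the military-service variable.

```python
import numpy as np

def progress_ratio(experience, cost):
    """Fit cost = A * experience**b by least squares on log-transformed
    data and return the progress ratio 2**b (values below 1 indicate
    that experience reduced cost)."""
    b, log_a = np.polyfit(np.log(experience), np.log(cost), 1)
    return 2.0 ** b
```

On noiseless synthetic data generated with a known exponent, the fit recovers the progress ratio exactly.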
NASA Astrophysics Data System (ADS)
Spaulding, R. S.; Hales, B.; Beck, J. C.; Degrandpre, M. D.
2008-12-01
The four measurable inorganic carbon parameters commonly measured as part of oceanic carbon cycle studies are total dissolved inorganic carbon (DIC), total alkalinity (AT), hydrogen ion concentration (pH) and partial pressure of CO2 (pCO2). AT determination is critical for anthropogenic CO2 inventory calculations and for quantifying CaCO3 saturation. Additionally, measurement of AT in combination with one other carbonate parameter can be used to describe the inorganic carbon equilibria. Current methods for measuring AT require calibrated volumetric flasks and burettes, gravimetry, or precise flow measurements. These methods also require analysis times of ˜15 min and sample volumes of ˜200 mL, and sample introduction is not automated, resulting in labor-intensive measurements and low temporal resolution. The Tracer Monitored Titration (TMT) system was previously developed at the University of Montana for AT measurements. The TMT is not dependent on accurate gravimetric, volumetric or flow rate measurements because it relies on a pH-sensitive indicator (tracer) to track the amount of titrant added to the sample. Sample and a titrant-indicator mixture are mechanically stirred in an optical flow cell and pH is calculated using the indicator equilibrium constant and the spectrophotometrically determined concentrations of the acid and base forms of the indicator. AT is then determined using these data in a non-linear least squares regression of the AT mass and proton balances. The precision and accuracy of the TMT are 2 and 4 micromol per kg, respectively, in 16 min using 110 mL of sample. The TMT is dependent on complete mixing of titrant with the sample and accurate absorbance measurements. We have developed the segmented-flow TMT (SF-TMT) to improve on these aspects and decrease sample analysis time. The SF-TMT uses segmented flow instead of active mixing and a white LED instead of a tungsten-halogen light source.
Air is added to the liquid flow stream, producing segments of liquid separated by air bubbles. Because liquid is not transferred between flow segments, there is rapid flushing which reduces sample volume to <10 mL. Additionally, the slower movement of liquid at the tube walls compared to that at the tube center creates circulation within each liquid segment, mixing the sample and eliminating the need for mechanical stirring. The white LED has higher output at the wavelengths of interest, thus improving the precision of absorbance measurements. These improvements result in a faster, simpler method for measuring AT.
Pretest probability assessment derived from attribute matching
Kline, Jeffrey A; Johnson, Charles L; Pollack, Charles V; Diercks, Deborah B; Hollander, Judd E; Newgard, Craig D; Garvey, J Lee
2005-01-01
Background Pretest probability (PTP) assessment plays a central role in diagnosis. This report describes a novel attribute-matching method for generating a PTP for acute coronary syndrome (ACS) and compares it with a validated logistic regression equation (LRE). Methods Eight clinical variables (attributes) were chosen by classification and regression tree analysis of a prospectively collected reference database of 14,796 emergency department (ED) patients evaluated for possible ACS. For attribute matching, a computer program identifies patients within the database who have the exact profile defined by clinician input of the eight attributes. The novel method was compared with the LRE for ability to produce PTP estimates <2% in a validation set of 8,120 patients evaluated for possible ACS who did not have ST-segment elevation on ECG. 1,061 patients were excluded prior to validation analysis because of ST-segment elevation (713), missing data (77) or being lost to follow-up (271). Results In the validation set, attribute matching produced 267 unique PTP estimates [median PTP value 6%, 1st–3rd quartile 1–10%] compared with the LRE, which produced 96 unique PTP estimates [median 24%, 1st–3rd quartile 10–30%]. The areas under the receiver operating characteristic curves were 0.74 (95% CI 0.65 to 0.82) for the attribute matching curve and 0.68 (95% CI 0.62 to 0.77) for LRE. The attribute matching system categorized 1,670 (24%, 95% CI = 23–25%) patients as having a PTP < 2.0%; 28 developed ACS (1.7%, 95% CI = 1.1–2.4%). The LRE categorized 244 (4%, 95% CI = 3–4%) with PTP < 2.0%; four developed ACS (1.6%, 95% CI = 0.4–4.1%). Conclusion Attribute matching estimated a very low PTP for ACS in a significantly larger proportion of ED patients compared with a validated LRE. PMID:16095534
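Attribute matching as described is essentially a filtered frequency count: find the prior patients whose eight attributes exactly match the query profile, then read off the observed ACS rate among them. The record layout below is an invented toy schema for illustration.

```python
def attribute_match_ptp(database, profile):
    """Pretest probability by attribute matching: the observed outcome
    rate among database patients whose attribute tuple exactly equals
    the query profile. Returns None when no patient matches."""
    matches = [rec for rec in database if rec["attrs"] == profile]
    if not matches:
        return None                       # profile absent from the database
    return sum(rec["acs"] for rec in matches) / len(matches)
```

With eight attributes this yields many distinct profiles, which is consistent with the abstract's observation that attribute matching produced far more unique PTP estimates (267) than the regression equation (96).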
Harrington, Glenys; Watson, Kerrie; Bailey, Michael; Land, Gillian; Borrell, Susan; Houston, Leanne; Kehoe, Rosaleen; Bass, Pauline; Cockroft, Emma; Marshall, Caroline; Mijch, Anne; Spelman, Denis
2007-07-01
To evaluate the impact of serial interventions on the incidence of methicillin-resistant Staphylococcus aureus (MRSA). Longitudinal observational study before and after interventions. The Alfred Hospital is a 350-bed tertiary referral hospital with a 35-bed intensive care unit (ICU). A series of interventions including the introduction of an antimicrobial hand-hygiene gel to the intensive care unit and a hospitalwide MRSA surveillance feedback program that used statistical process control charts but not active surveillance cultures. Serial interventions were introduced between January 2003 and May 2006. The incidence and rates of new patients colonized or infected with MRSA and episodes of MRSA bacteremia in the intensive care unit and hospitalwide were compared between the preintervention and intervention periods. Segmented regression analysis was used to calculate the percentage reduction in new patients with MRSA and in episodes of MRSA bacteremia hospitalwide in the intervention period. The rate of new patients with MRSA in the ICU was 6.7 cases per 100 patient admissions in the intervention period, compared with 9.3 cases per 100 patient admissions in the preintervention period (P=.047). The hospitalwide rate of new patients with MRSA was 1.7 cases per 100 patient admissions in the intervention period, compared with 3.0 cases per 100 patient admissions in the preintervention period (P<.001). By use of segmented regression analysis, the maximum and conservative estimates for percentage reduction in the rate of new patients with MRSA were 79.5% and 42.0%, respectively, and the maximum and conservative estimates for percentage reduction in the rate of episodes of MRSA bacteremia were 87.4% and 39.0%, respectively. A sustained reduction in the number of new patients with MRSA colonization or infection has been demonstrated using minimal resources and a limited number of interventions.
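The segmented regression referred to above is commonly parameterized with a pre-intervention trend, a level-change term, and a slope-change term. A minimal sketch on synthetic monthly rates (the coefficients, noise level, and intervention month are invented for illustration, not the study's data):

```python
import numpy as np

# Interrupted time series model:
#   rate_t = b0 + b1*time + b2*post + b3*time_since_intervention
# b2 estimates the level (intercept) change, b3 the slope change.
rng = np.random.default_rng(0)
time = np.arange(48)                       # 48 months of surveillance
post = (time >= 24).astype(float)          # intervention starts at month 24
time_after = np.where(post == 1, time - 24, 0.0)

# Synthetic rate: flat baseline, then a drop in level and a declining slope
rate = 3.0 + 0.0 * time - 1.0 * post - 0.05 * time_after
rate += rng.normal(0, 0.05, size=time.size)

X = np.column_stack([np.ones_like(time, dtype=float), time, post, time_after])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
b0, b1, b2, b3 = beta
# b2 recovers the level change (about -1.0), b3 the slope change (about -0.05)
```

In practice a full analysis would also model autocorrelation (e.g. with generalized least squares) rather than plain OLS.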
Peiyuan, He; Jingang, Yang; Haiyan, Xu; Xiaojin, Gao; Ying, Xian; Yuan, Wu; Wei, Li; Yang, Wang; Xinran, Tang; Ruohua, Yan; Chen, Jin; Lei, Song; Xuan, Zhang; Rui, Fu; Yunqing, Ye; Qiuting, Dong; Hui, Sun; Xinxin, Yan; Runlin, Gao; Yuejin, Yang
2016-01-01
Only a few randomized trials have analyzed the clinical outcomes of elderly ST-segment elevation myocardial infarction (STEMI) patients (≥ 75 years old), so the best reperfusion strategy has not been well established. An observational study focused on clinical outcomes was performed in this population. Based on the national registry of STEMI patients, the in-hospital outcomes of elderly patients with different reperfusion strategies were compared. The primary endpoint was death. Secondary endpoints included recurrent myocardial infarction, ischemia-driven revascularization, myocardial infarction-related complications, and major bleeding. Multivariable regression analysis was performed to adjust for baseline disparities between the groups. Patients who had primary percutaneous coronary intervention (PCI) or fibrinolysis were relatively younger, came to hospital earlier, and had a lower risk of death compared with patients who had no reperfusion. The guideline-recommended medications were more frequently used in patients with primary PCI during hospitalization and at discharge. The rates of death were 7.7%, 15.0%, and 19.9%, respectively, with primary PCI, fibrinolysis, and no reperfusion (P < 0.001). Patients having primary PCI also had lower rates of heart failure, mechanical complications, and cardiac arrest compared with fibrinolysis and no reperfusion (P < 0.05). The rates of hemorrhagic stroke (0.3%, 0.6%, and 0.1%) and other major bleeding (3.0%, 5.0%, and 3.1%) were similar in the primary PCI, fibrinolysis, and no reperfusion groups (P > 0.05). In the multivariable regression analysis, primary PCI, but not fibrinolysis, predicted lower in-hospital death compared with no reperfusion in patients ≥ 75 years old. Early reperfusion, especially primary PCI, was safe and effective, with an absolute reduction in mortality compared with no reperfusion; however, randomized trials are needed to support this conclusion.
Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose To explore how the threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40–200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm² showed the largest mean distance between healthy liver datapoints and fibrotic liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10⁻³ mm²/s), and Dfast (Dfast < 20.85 × 10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
ERIC Educational Resources Information Center
Lay, Robert S.
The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new chi-square-based procedure (CHAID) is illustrated. A comparison is presented of an earlier binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…
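A single CHAID-style splitting step can be sketched as choosing, among candidate categorical predictors, the one most strongly associated with the outcome by a chi-square test of the contingency table. The toy variables and counts below are invented, and this sketch omits CHAID's category-merging logic; it only illustrates the chi-square split-selection idea:

```python
from scipy.stats import chi2_contingency

def best_split(rows, predictors, outcome):
    """Pick the predictor whose contingency table with the outcome
    has the smallest chi-square p-value (one CHAID-style step)."""
    scores = {}
    for var in predictors:
        levels = sorted({r[var] for r in rows})
        classes = sorted({r[outcome] for r in rows})
        table = [[sum(1 for r in rows if r[var] == lv and r[outcome] == c)
                  for c in classes] for lv in levels]
        _, p, _, _ = chi2_contingency(table)
        scores[var] = p
    return min(scores, key=scores.get), scores

# Toy survey data: region separates enrollment much better than income does
rows = ([{"region": "north", "income": "high", "enrolled": 1}] * 8
        + [{"region": "north", "income": "low", "enrolled": 0}] * 2
        + [{"region": "south", "income": "high", "enrolled": 0}] * 7
        + [{"region": "south", "income": "low", "enrolled": 1}] * 3)
var, scores = best_split(rows, ["region", "income"], "enrolled")
# var == "region"
```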
Eye Movements Reveal the Influence of Event Structure on Reading Behavior.
Swets, Benjamin; Kurby, Christopher A
2016-03-01
When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer for event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading.
Prostate malignancy grading using gland-related shape descriptors
NASA Astrophysics Data System (ADS)
Braumann, Ulf-Dietrich; Scheibe, Patrick; Loeffler, Markus; Kristiansen, Glen; Wernert, Nicolas
2014-03-01
A proof-of-principle study was accomplished assessing the descriptive potential of two simple geometric measures (shape descriptors) applied to sets of segmented glands within images of 125 prostate cancer tissue sections. The respective measures addressing glandular shape were (i) inverse solidity and (ii) inverse compactness. Using a classifier based on logistic regression, Gleason grades 3 and 4/5 could be differentiated with an accuracy of approx. 95%. The results suggest not only good discriminatory properties but also robustness against gland segmentation variations. Misclassifications were in part caused by inadvertent Gleason grade assignments, as a posteriori re-inspection revealed.
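The two descriptors can be sketched for a polygonal gland outline, taking inverse compactness as perimeter²/(4π·area) and inverse solidity as convex-hull area divided by area; these formulas are standard definitions assumed here, since the abstract does not spell them out. A near-circular outline scores close to 1 on both, while an irregular outline scores higher:

```python
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(pts):
    """Shoelace area of a simple polygon given as an (n, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(pts):
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    return np.hypot(d[:, 0], d[:, 1]).sum()

def descriptors(pts):
    area = polygon_area(pts)
    hull = ConvexHull(pts)                 # hull.volume is the hull's area in 2-D
    inv_compactness = perimeter(pts) ** 2 / (4 * np.pi * area)
    inv_solidity = hull.volume / area
    return inv_compactness, inv_solidity

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
r = 1 + 0.4 * np.sin(8 * theta)            # lobed, irregular synthetic outline
star = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
# descriptors(circle) is near (1, 1); descriptors(star) is well above 1 on both
```

Both descriptors are scale-invariant, which is why they can be pooled across glands of different sizes.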
Yu, X-R; Huang, W-Y; Zhang, B-Y; Li, H-Q; Geng, D-Y
2014-06-01
To retrospectively evaluate the criteria for discriminating infiltrative cholangiocarcinoma from benign common bile duct (CBD) stricture using three-dimensional dynamic contrast-enhanced (3D-DCE) magnetic resonance imaging (MRI) combined with magnetic resonance cholangiopancreatography (MRCP) imaging and to determine the predictors for cholangiocarcinoma versus benign CBD stricture. 3D-DCE MRI and MRCP images in 28 patients with infiltrative cholangiocarcinoma and 23 patients with benign causes of CBD stricture were reviewed retrospectively. The final diagnosis was based on surgical or biopsy records. Two radiologists analysed the MRI images for asymmetry, including the wall thickness, length, and enhancement pattern of the narrowed CBD segment, and upstream CBD dilatation. MRI findings that could be used as predictors were identified by univariate analysis and multivariable stepwise logistic regression analysis. Malignant strictures were significantly thicker (4.4 ± 1.2 mm) and longer (16.7 ± 7.7 mm) than the benign strictures (p < 0.05), and upstream CBD dilatation was larger in the infiltrative cholangiocarcinoma cases (20.7 ± 5.7 mm) than in the benign cases (16.5 ± 5.2 mm; p = 0.018). During both the portal venous and equilibrium phases, hyperenhancement was more frequently observed in malignant cases than in benign cases (p < 0.001). The results of the multivariable stepwise logistic regression analysis showed that both hyperenhancement of the involved CBD during the equilibrium phase and the ductal thickness were significant predictors for malignant strictures. When two diagnostic predictive values were used in combination, almost all patients with malignant strictures (n = 26, 92.9%) and benign strictures (n = 21, 91.3%) were correctly identified; the overall accuracy was 92.2% with correct classifications in 47 of the 51 patients. 
Infiltrative cholangiocarcinoma and benign CBD strictures could be effectively differentiated using DCE-MRI and MRCP based on hyperenhancement during the equilibrium phase and wall thickness of the involved segment.
Dogan, Soner; Duivenvoorden, Raphaël; Grobbee, Diederick E; Kastelein, John J P; Shear, Charles L; Evans, Gregory W; Visseren, Frank L; Bots, Michiel L
2010-05-01
Ultrasound protocols to measure carotid intima media thickness (CIMT) differ considerably with regard to the inclusion of the number of carotid segments and angles used. Detailed information on the completeness of CIMT information is often lacking in published reports, and at most, overall percentages are presented. We therefore decided to study the completeness of CIMT measurements and its relation with vascular risk factors using data from two CIMT intervention studies: one among familial hypercholesterolemia (FH) patients, the Rating Atherosclerotic Disease change by Imaging With A New CETP Inhibitor (RADIANCE 1), and one among mixed dyslipidemia (MD) patients, the Rating Atherosclerotic Disease change by Imaging With A New CETP Inhibitor (RADIANCE 2). We used baseline ultrasound scans from the RADIANCE 1 (n=872) and RADIANCE 2 (n=752) studies. CIMT images were recorded for 12 artery-wall combinations (near and far walls of the left and right common carotid artery (CCA), bifurcation (BIF) and internal carotid artery (ICA) segments) at 4 set angles, resulting in 48 possible measurements per patient. The presence or absence of CIMT measurements was assessed per artery-wall combination and per angle. The relation between completeness and patient characteristics was evaluated with logistic regression analysis. In 89% of the FH patients, information on CIMT could be obtained on all twelve carotid segments, and in 7.6%, eleven segments had CIMT information (nearly complete 96.6%). For MD patients this was 74.6% and 17.9%, respectively (nearly complete: 92.5%). Increased body mass index and increased waist circumference were significantly (p=0.01) related to less complete data in FH patients. For MD patients, relations were seen with increased waist circumference (p<0.01). Segment-specific data indicated that in FH patients, completeness was less for the near wall of the left (96%) and right internal carotid artery (94%) as compared to other segments (all >98%). 
In MD patients, completeness was lower for the near wall of both the right and left carotid arteries: 86.0% and 90.8%, respectively, as compared to other segments (all >97%). With the current ultrasound protocols it is possible to obtain a very high level of completeness. Apart from the population studied, body mass index and waist circumference are important in achieving complete CIMT measurements.
NASA Astrophysics Data System (ADS)
Hasanuddin; Setyawan, A.; Yulianto, B.
2018-03-01
Assessment of pavement performance is necessary to improve the quality of road maintenance and rehabilitation management. This research evaluates a road both functionally and structurally and recommends appropriate treatments. Functional evaluation of the pavement is based on the IRI (International Roughness Index) value, derived from NAASRA readings, which is analysed to recommend road handling. Structural evaluation of the pavement is done by analysing deflection values from FWD (Falling Weight Deflectometer) data, resulting in SN (Structural Number) values. The analysis yields SN eff (Structural Number Effective) and SN f (Structural Number Future) values; comparing SN eff to SN f gives the SCI (Structural Condition Index) value, which implies the recommended pavement treatment. The Simpang Tuan-Batas Kota Jambi road segment was studied. The functional analysis split the road into 12 segments, of which segments 1, 3, 5, 7, 9, and 11 required regular maintenance, segments 2, 4, 8, 10, and 12 required periodic maintenance, and segment 6 required rehabilitation. The structural analysis resulted in 8 segments: segments 1 and 2 were recommended for regular maintenance, segments 3, 4, 5, and 7 for functional overlay, and segments 6 and 8 for structural overlay.
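The SCI-based recommendation step can be sketched as the ratio of effective to required structural number mapped to a treatment class. The thresholds below are hypothetical placeholders for illustration, not the study's calibrated values:

```python
def recommend(sn_eff, sn_f, sci_overlay=0.8, sci_structural=0.5):
    """Map SCI = SNeff / SNf to a treatment class (thresholds hypothetical)."""
    sci = sn_eff / sn_f
    if sci >= sci_overlay:
        return sci, "regular maintenance"
    if sci >= sci_structural:
        return sci, "functional overlay"
    return sci, "structural overlay"

sci, action = recommend(sn_eff=3.2, sn_f=4.0)
# SCI = 0.8 here, at the (assumed) boundary for regular maintenance
```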
[Segment analysis of the target market of physiotherapeutic services].
Babaskin, D V
2010-01-01
The objective of the present study was to demonstrate the possibilities for analysing selected segments of the target market for physiotherapeutic services provided by medical and preventive facilities of two major types. The main features of a target segment, such as the provision of therapeutic massage, are illustrated in terms of two characteristics, namely attractiveness to users and the ability of a given medical facility to satisfy their requirements. Based on a portfolio analysis of the available target segments, the most promising ones (winner segments) were selected for further marketing studies. This choice does not exclude the possibility of involving other segments of medical services in marketing activities.
NASA Astrophysics Data System (ADS)
Varghese, Bino; Hwang, Darryl; Mohamed, Passant; Cen, Steven; Deng, Christopher; Chang, Michael; Duddalwar, Vinay
2017-11-01
Purpose: To evaluate the potential use of wavelet analysis in discriminating benign and malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC); 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). The Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD) and variance) derived from 3 levels of image decomposition in 3 directions (horizontal, vertical and diagonal), respectively, were used to quantify tumor texture. Independent t-tests or Wilcoxon rank sum tests, depending on data normality, were used as exploratory univariate analysis. Stepwise logistic regression and receiver operator characteristic (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 out of 6 wavelet-based texture measures (except homogeneity) were higher for malignant tumors compared to benign, when accounting for individual texture direction. Homogeneity was consistently lower in malignant than benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed a significant (p < 0.05) difference between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction) extracted from the excretory and pre-contrast phases, respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared at reliably characterizing and stratifying renal masses.
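A one-level 2-D Haar decomposition and the kinds of sub-band metrics named above (energy, entropy, SD, variance) can be sketched directly in NumPy. The "tumor ROI" below is a synthetic grayscale patch, not CT data, and only a single decomposition level is shown:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: (approx, horizontal, vertical, diagonal)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def texture_metrics(band):
    """Energy, entropy, SD and variance of one detail sub-band."""
    p = np.abs(band) / (np.abs(band).sum() + 1e-12)
    return {
        "energy": float((band ** 2).sum()),
        "entropy": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),
        "sd": float(band.std()),
        "variance": float(band.var()),
    }

rng = np.random.default_rng(1)
roi = rng.normal(100, 15, size=(64, 64))      # heterogeneous synthetic "tumor"
smooth = np.full((64, 64), 100.0)             # homogeneous synthetic tissue

_, lh, hl, hh = haar2d(roi)
metrics = {name: texture_metrics(b) for name, b in
           [("horizontal", lh), ("vertical", hl), ("diagonal", hh)]}
# Detail-band energy is zero for the homogeneous patch and large for the noisy one
```

The study's homogeneity and contrast metrics would be computed analogously on each sub-band, and deeper levels come from re-applying `haar2d` to `ll`.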
NASA Astrophysics Data System (ADS)
Gonzalez-Correa, C. H.; Caicedo-Eraso, J. C.; Varon-Serna, D. R.
2013-04-01
The mechanical function and size of a muscle may be closely linked. Handgrip strength (HGS) has been used as a predictor of functional performance. Anthropometric measurements have been made to estimate arm muscle area (AMA) and physical muscle mass volume of the upper limb (ULMMV). Electrical volume estimation is possible by segmental BIA measurement of fat-free mass (SBIA-FFM), mainly muscle mass. The relationship among these variables is not well established. We aimed to determine whether physical and electrical muscle mass estimations relate to each other and to what extent HGS is related to muscle size measured by both methods in normal or overweight young males. Regression analysis was used to determine the association between these variables. Subjects showed decreased HGS (65.5%), FFM (85.5%), and AMA (74.5%). An acceptable association was found between SBIA-FFM and AMA (r2 = 0.60) and a poorer one between physical and electrical volume (r2 = 0.55). However, a paired Student t-test and a Bland-Altman plot showed that the physical and electrical models were not interchangeable (p < 0.0001). HGS showed a very weak association with anthropometric (r2 = 0.07) and electrical (r2 = 0.192) ULMMV, showing that muscle mass quantity does not imply muscle strength. Other factors influencing HGS, such as physical training or nutrition, require more research.
Bae, Kyungsoo; Jeon, Kyung Nyeo; Lee, Seung Jun; Kim, Ho Cheol; Ha, Ji Young; Park, Sung Eun; Baek, Hye Jin; Choi, Bo Hwa; Cho, Soo Buem; Moon, Jin Il
2016-11-01
The aim of this study was to determine the relationship between lobar severity of emphysema and lung cancer using automated lobe segmentation and emphysema quantification methods. This study included 78 patients (74 males and 4 females; mean age of 72 years) with the following conditions: pathologically proven lung cancer, available chest computed tomographic (CT) scans for lobe segmentation, and quantitative scoring of emphysema. The relationship between emphysema and lung cancer was analyzed using quantitative emphysema scoring of each pulmonary lobe. The most common location of cancer was the left upper lobe (LUL) (n = 28), followed by the right upper lobe (RUL) (n = 27), left lower lobe (LLL) (n = 13), right lower lobe (RLL) (n = 9), and right middle lobe (RML) (n = 1). The emphysema ratio was highest in the LUL, followed by the RUL, LLL, RML, and RLL. Multivariate logistic regression analysis revealed that upper lobes (odds ratio: 1.77; 95% confidence interval: 1.01-3.11, P = 0.048) and lobes with an emphysema ratio ranked 1st or 2nd (odds ratio: 2.48; 95% confidence interval: 1.48-4.15, P < 0.001) were significantly and independently associated with lung cancer development. In emphysema patients, lung cancer has a tendency to develop in lobes with more severe emphysema.
A systematic evaluation of normalization methods in quantitative label-free proteomics.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2018-01-01
To date, mass spectrometry (MS) data remain inherently biased for reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for this bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different normalization strategies are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization also performed systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation.
Analysis of Regional Effects on Market Segment Production
2016-06-01
Master's thesis by James D. Moffitt, June 2016 (Thesis Advisor: Lyn R. Whitaker; Co-Advisor: Jonathan K. Alt). The thesis models accessions in Potential Rating Index Zip Code Market New Evolution (PRIZM NE) market segments. This model will aid USAREC G2 analysts involved in…
Dąbrowski, Wojciech; Żyłka, Radosław; Malinowski, Paweł
2017-02-01
The subject of the research, conducted in an operating dairy wastewater treatment plant (WWTP), was to examine electric energy consumption during sewage sludge treatment. The excess sewage sludge was aerobically stabilized and dewatered with a screw press. Organic matter varied from 48% to 56% in sludge after stabilization and dewatering, proving that the sludge was properly stabilized and could be applied as a fertilizer. Measured factors for electric energy consumption for mechanically dewatered sewage sludge ranged between 0.94 and 1.5 kWh·m⁻³, with an average value of 1.17 kWh·m⁻³. The shares of the devices used for sludge dewatering and aerobic stabilization in the total energy consumption of the plant were also established, at 3% and 25%, respectively. A model of energy consumption during sewage sludge treatment was estimated from the experimental data. Two models were applied: linear regression for the dewatering process and segmented linear regression for aerobic stabilization. The segmented linear regression model was also applied to total energy consumption during sewage sludge treatment in the examined dairy WWTP. The research constitutes an introduction to further studies on defining a mathematical model used to optimize electric energy consumption by dairy WWTPs.
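Segmented (two-piece) linear regression of the kind described above can be sketched as an ordinary least-squares fit with a hinge term max(0, x - c), searching a grid of candidate breakpoints c for the smallest residual sum of squares. The data below are synthetic, not the plant's measurements:

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Two-piece linear fit: y = b0 + b1*x + b2*max(0, x - c).
    Returns (sse, best breakpoint, [b0, b1, b2])."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((X @ beta - y) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best

x = np.linspace(0, 10, 200)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(0.0, x - 6.0)   # true break at x = 6
sse, brk, beta = fit_segmented(x, y, np.linspace(1, 9, 81))
# brk recovers 6.0; beta recovers [1.0, 0.5, 2.0]
```

The hinge parameterization keeps the fitted line continuous at the breakpoint, which is usually what a consumption model requires; dedicated packages estimate c jointly with the slopes instead of by grid search.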
NASA Astrophysics Data System (ADS)
Hewer, Micah J.; Gough, William A.
2016-11-01
Based on a case study of the Toronto Zoo (Canada), multivariate regression analysis, involving both climatic and social variables, was employed to assess the relationship between daily weather and visitation. Zoo visitation was most sensitive to weather variability during the shoulder season, followed by the off-season and, then, the peak season. Temperature was the most influential weather variable in relation to zoo visitation, followed by precipitation and, then, wind speed. The intensity and direction of the social and climatic variables varied between seasons. Temperatures exceeding 26 °C during the shoulder season and 28 °C during the peak season suggested a behavioural threshold associated with zoo visitation, with conditions becoming too warm for certain segments of the zoo visitor market, causing visitor numbers to decline. Even light amounts of precipitation caused average visitor numbers to decline by nearly 50 %. Increasing wind speeds also demonstrated a negative influence on zoo visitation.
Fully automatic cervical vertebrae segmentation framework for X-ray images.
Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg
2018-04-01
The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved.
Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.
Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A
2011-04-01
Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique.
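The pooled-covariance Mahalanobis distance step described above can be sketched as follows, with synthetic two-feature artery/vein samples (e.g. peak enhancement and arrival time) standing in for the real ROI statistics:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic labeled ROI samples: two features per voxel
artery = rng.normal([10.0, 2.0], 0.5, size=(50, 2))   # early, strong enhancement
vein = rng.normal([8.0, 6.0], 0.5, size=(50, 2))      # later, weaker enhancement

def pooled_cov(a, b):
    """Pooled sample covariance of two groups (weighted by degrees of freedom)."""
    na, nb = len(a), len(b)
    return ((na - 1) * np.cov(a.T) + (nb - 1) * np.cov(b.T)) / (na + nb - 2)

def mahalanobis(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

cov = pooled_cov(artery, vein)
mu_a, mu_v = artery.mean(axis=0), vein.mean(axis=0)

voxel = np.array([9.9, 2.2])            # unlabeled voxel's feature vector
d_a = mahalanobis(voxel, mu_a, cov)
d_v = mahalanobis(voxel, mu_v, cov)
label = "artery" if d_a < d_v else "vein"
```

Assigning each voxel to the class with the smaller MD under a shared covariance is equivalent to linear discriminant classification, which is what makes the pooled covariance the natural choice here.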
A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times
NASA Astrophysics Data System (ADS)
Edmund, Jens M.; Kjer, Hans M.; Van Leemput, Koen; Hansen, Rasmus H.; Andersen, Jon AL; Andreasen, Daniel
2014-12-01
Radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, so-called MRI-only RT, would remove the systematic registration error between MR and computed tomography (CT), and provide co-registered MRI for assessment of treatment response and adaptive RT. Electron densities, however, need to be assigned to the MRI images for dose calculation and patient setup based on digitally reconstructed radiographs (DRRs). Here, we investigate the geometric and dosimetric performance for a number of popular voxel-based methods to generate a so-called pseudo CT (pCT). Five patients receiving cranial irradiation, each containing a co-registered MRI and CT scan, were included. An ultra short echo time MRI sequence for bone visualization was used. Six methods were investigated for three popular types of voxel-based approaches; (1) threshold-based segmentation, (2) Bayesian segmentation and (3) statistical regression. Each approach contained two methods. Approach 1 used bulk density assignment of MRI voxels into air, soft tissue and bone based on logical masks and the transverse relaxation time T2 of the bone. Approach 2 used similar bulk density assignments with Bayesian statistics including or excluding additional spatial information. Approach 3 used a statistical regression correlating MRI voxels with their corresponding CT voxels. A similar photon and proton treatment plan was generated for a target positioned between the nasal cavity and the brainstem for all patients. The CT agreement with the pCT of each method was quantified and compared with the other methods geometrically and dosimetrically using both a number of reported metrics and introducing some novel metrics. The best geometrical agreement with CT was obtained with the statistical regression methods which performed significantly better than the threshold and Bayesian segmentation methods (excluding spatial information). 
All methods agreed significantly better with CT than a reference water-only MRI comparison. The mean dosimetric deviation from CT for photons and protons was about 2% and was highest in the dose-gradient region of the brainstem. Both the threshold-based method and the statistical regression methods showed the highest dosimetric agreement. Generation of pCTs using statistical regression seems to be the most promising candidate for MRI-only RT of the brain. Furthermore, the total amount of each tissue type needs to be taken into account for dosimetric purposes, regardless of its exact geometrical position.
Skoff, Tami H; Martin, Stacey W
2016-05-01
There is accumulating literature on waning acellular pertussis vaccine-induced immunity, confirming the results of studies assessing the duration of protection of pertussis vaccines. To evaluate the tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap) vaccine's effect over time among those 11 to 18 years old, while accounting for the transition from whole-cell to acellular pertussis vaccines for the childhood primary series. Extended, retrospective analysis of reported pertussis cases between January 1, 1990, and December 31, 2014, in the United States. The analysis included all nationally reported pertussis cases. US Tdap vaccination program and the transition from whole-cell to acellular pertussis vaccines. Rate ratios of reported pertussis incidence (defined as incidence among 11- to 18-year-old individuals divided by the combined incidence in all other age groups) modeled with segmented regression analysis, and age-specific trends in reported pertussis incidence over time. Between 1990 and 2014, 356 557 pertussis cases were reported in the United States. Of those, 191 914 (53.8%) were female and 240 665 (67.5%) were white. Overall incidence increased from 1.7 per 100 000 to 4.0 per 100 000 between 1990 and 2003, while later years were dominated by epidemic peaks. Incidence was highest among infants younger than 1 year throughout the analysis period. Pertussis rates were comparable among all other age groups until the late 2000s, when an increased burden of pertussis emerged among children 1 to 10 years old, resulting in the second highest age-specific incidence. By 2014, 11- to 18-year-old individuals once again had the second highest incidence.
While slope coefficients from segmented regression analysis showed a positive impact of Tdap immediately following introduction (slope, -0.4959; P < .001), a reversal in trends was observed in 2010 when rates of disease among 11- to 18-year-old individuals increased at a faster rate than all other age groups combined (slope, 0.5727; P < .001). While the impact of Tdap among adolescents looked promising following vaccine introduction, our extended analysis found that trends in adolescent disease were abruptly reversed in 2010, corresponding directly to the aging of acellular pertussis-vaccinated cohorts. Despite the apparent limitations of Tdap, it remains the best prevention against disease in adolescents.
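The interrupted time series model underlying analyses like the one above can be sketched in a few lines. This is an illustrative re-implementation on invented data, not the pertussis series: the model with a level-change and a slope-change term is the standard segmented regression form, but every number below (series, breakpoint, coefficients) is synthetic.

```python
# Minimal segmented regression sketch for an interrupted time series.
# Model: y = b0 + b1*t + b2*post + b3*(t - t0)*post, where `post`
# indicates observations after the interruption at time t0.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def segmented_fit(t, y, t0):
    # Design matrix: intercept, time, level change, slope change.
    X = [[1.0, ti, float(ti > t0), (ti - t0) * (ti > t0)] for ti in t]
    # Ordinary least squares via the normal equations X'X b = X'y.
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(4)] for i in range(4)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(4)]
    return solve(XtX, Xty)

# Synthetic series: baseline slope 0.5, then a level drop of 3 and a
# slope change of -0.4 after t0 = 10.
t = list(range(20))
y = [2 + 0.5 * ti + (ti > 10) * (-3 - 0.4 * (ti - 10)) for ti in t]
b0, b1, b2, b3 = segmented_fit(t, y, 10)
print(round(b2, 2), round(b3, 2))  # estimated level and slope change
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; on real data, b2 and b3 are the quantities tested for significance.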
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R
2012-01-01
This study presents and validates a Time-Frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is very impervious to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield decreased significantly in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg(-1)) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R(2) = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil affected carrot yield before food quality. The major soil factors controlling Cr phytotoxicity were identified, and prediction models were developed, using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds protecting against phytotoxicity while ensuring food safety were then derived at a criterion of 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
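A linear-linear segmented ("broken-stick") model like the one fitted above can be estimated by profiling the breakpoint: fit an ordinary least-squares model for each candidate knot and keep the knot with the smallest residual sum of squares. The sketch below uses invented pH-like numbers, not the study's data, and a simple grid search rather than any particular published fitting routine.

```python
# Broken-stick regression by grid search over the breakpoint.
# Model: y = a + b*x + c*max(0, x - k), continuous at the knot k.

def ols(X, y):
    """Least squares via normal equations and Gaussian elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] +
         [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    b = [0.0] * n
    for r in range(n - 1, -1, -1):
        b[r] = (A[r][n] - sum(A[r][k] * b[k] for k in range(r + 1, n))) / A[r][r]
    return b

def fit_broken_stick(x, y, grid):
    best = None
    for k in grid:  # profile the knot over a grid of candidates
        X = [[1.0, xi, max(0.0, xi - k)] for xi in x]
        b = ols(X, y)
        sse = sum((yi - (b[0] + b[1] * xi + b[2] * max(0.0, xi - k))) ** 2
                  for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, k, b)
    return best

# Synthetic response: flat below pH 8.0, rising linearly above it.
x = [i / 10 for i in range(55, 90)]
y = [0.1 + 0.8 * max(0.0, xi - 8.0) for xi in x]
sse, knot, coef = fit_broken_stick(x, y, [i / 10 for i in range(60, 86)])
print(round(knot, 1))
```

On this noise-free toy data the profiled knot lands exactly on the generating breakpoint.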
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.
2013-12-15
Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|.
When compared to the average of the two readers' manual segmentation, the proposed FCM-Atlas method achieves a correlation of r = 0.92 for FGT% and r = 0.93 for |FGT|, and the automated segmentation is not statistically significantly different (p = 0.46 for FGT% and p = 0.55 for |FGT|). The bilateral correlation between left breasts and right breasts for the FGT% is 0.94, 0.92, and 0.95 for reader 1, reader 2, and the FCM-Atlas, respectively; likewise, for the |FGT|, it is 0.92, 0.92, and 0.93, respectively. For the spatial segmentation agreement, the automated algorithm achieves a DSC of 0.69 ± 0.1 when compared to reader 1 and 0.61 ± 0.1 for reader 2, while the DSC between the two readers' manual segmentation is 0.67 ± 0.15. Additional robustness analysis shows that the segmentation performance of the authors' method is stable both with respect to selecting different cases and to varying the number of cases needed to construct the prior probability atlas. The authors' results also show that the proposed FCM-Atlas method outperforms the commonly used two-cluster FCM-alone method. The authors' method runs at ∼5 min for each 3D bilateral MR scan (56 slices) for computing the FGT% and |FGT|, compared to ∼55 min needed for manual segmentation for the same purpose. Conclusions: The authors' method achieves robust segmentation and can serve as an efficient tool for processing large clinical datasets for quantifying the fibroglandular tissue content in breast MRI. It holds great potential to support clinical applications in the future, including breast cancer risk assessment.
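The initial FCM step described above — soft clustering of voxel intensities before the atlas refinement — can be sketched with the standard fuzzy C-means update equations. This is not the authors' FCM-Atlas code; the two-class setup, fuzzifier m = 2, and the 1-D toy intensities are all assumptions for illustration.

```python
# Fuzzy C-means on 1-D intensities: alternate between the membership
# update u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)) and the fuzzy-mean
# centre update until a fixed iteration budget is spent.

def fcm(xs, c=2, m=2.0, iters=50):
    centers = [min(xs), max(xs)]          # crude initialisation for c = 2
    U = [[0.0] * len(xs) for _ in range(c)]
    for _ in range(iters):
        for k, x in enumerate(xs):        # membership update
            d = [abs(x - v) + 1e-12 for v in centers]
            for i in range(c):
                U[i][k] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                    for j in range(c))
        for i in range(c):                # centre update (weighted means)
            w = [u ** m for u in U[i]]
            centers[i] = sum(wk * x for wk, x in zip(w, xs)) / sum(w)
    return centers, U

# Toy slice: a dark (fatty) and a bright (fibroglandular-like) mode.
intensities = [0.1, 0.15, 0.2, 0.7, 0.8, 0.9]
centers, U = fcm(intensities)
print([round(v, 2) for v in centers])
```

Thresholding the resulting membership map (rather than a hard assignment) is what makes the subsequent atlas-based refinement step possible.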
Carranco, Núria; Farrés-Cebrián, Mireia; Saurina, Javier
2018-01-01
A high-performance liquid chromatography with ultraviolet detection (HPLC-UV) fingerprinting method was applied to the analysis and characterization of olive oils, using a Zorbax Eclipse XDB-C8 reversed-phase column under gradient elution with 0.1% formic acid aqueous solution and methanol as the mobile phase. More than 130 edible oils, including monovarietal extra-virgin olive oils (EVOOs) and other vegetable oils, were analyzed. Principal component analysis showed a noticeable discrimination between olive oils and other vegetable oils using raw HPLC-UV chromatographic profiles as data descriptors. However, selected HPLC-UV chromatographic time-window segments were necessary to achieve discrimination among monovarietal EVOOs. Partial least squares (PLS) regression was employed to tackle authentication of Arbequina EVOO adulterated with Picual EVOO, a refined olive oil, and sunflower oil. Highly satisfactory results were obtained after PLS analysis, with overall errors in the quantitation of adulteration in the Arbequina EVOO (minimum 2.5% adulterant) below 2.9%. PMID:29561820
Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M
2018-04-01
The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline function and multi-trait models aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments and were modeled by Legendre polynomial with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lesser heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. Genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW in the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.
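The Legendre-polynomial covariates used in random regression models like the one above are built by standardising age to [-1, 1] and evaluating the polynomials at that point. The sketch below is illustrative only: the age range, order of fit, and the sqrt((2n+1)/2) normalisation commonly used in animal-breeding applications are assumptions, not details taken from this study.

```python
# Legendre covariates for a random regression design row.

def legendre(order, x):
    """P_0..P_order at x via Bonnet's recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x); order >= 1."""
    P = [1.0, x]
    for n in range(1, order):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P[: order + 1]

def rr_covariates(age, a_min=0, a_max=35, order=3):
    # Map age onto the interval [-1, 1] where the polynomials are defined.
    x = -1 + 2 * (age - a_min) / (a_max - a_min)
    # Normalisation often used in random regression: sqrt((2n+1)/2) * P_n.
    return [((2 * n + 1) / 2) ** 0.5 * p
            for n, p in enumerate(legendre(order, x))]

print([round(v, 3) for v in rr_covariates(35)])  # at the last age, x = 1
```

Since P_n(1) = 1 for every order, the row at the last age is just the normalisation constants, which is a convenient sanity check when setting up the design matrix.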
Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P
2003-01-01
Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino
2018-01-01
Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI, which has better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Colony image acquisition and genetic segmentation algorithm and colony analyses
NASA Astrophysics Data System (ADS)
Wang, W. X.
2012-01-01
Colony analysis is used in many fields, such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. To reduce labour and increase analysis accuracy, many researchers and developers have built image analysis systems. The main problems in these systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed in which the distances between the lights and the dish, between the camera lens and the lights, and between the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows the segmentation problem to be treated as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of these visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied in different applications.
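Treating segmentation as a global optimization with a genetic approach can be sketched as follows. The paper's actual algorithm and fitness function are not given, so this sketch substitutes a common stand-in: a tiny genetic algorithm searching for the grey-level threshold that maximises between-class variance (Otsu's criterion) on a toy histogram.

```python
# Genetic search for a global segmentation threshold.

import random

def between_class_variance(hist, t):
    """Otsu's criterion: separation of the two classes split at t."""
    n = sum(hist)
    w0 = sum(hist[:t]); w1 = n - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    m0 = sum(i * h for i, h in enumerate(hist[:t])) / w0
    m1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / w1
    return (w0 / n) * (w1 / n) * (m0 - m1) ** 2

def ga_threshold(hist, pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    L = len(hist)
    pop = [rng.randrange(1, L) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: -between_class_variance(hist, t))
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            child = (a + b) // 2              # crossover: midpoint
            if rng.random() < 0.3:            # mutation: small jitter
                child = min(L - 1, max(1, child + rng.randrange(-3, 4)))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda t: between_class_variance(hist, t))

# Bimodal toy histogram: background near level 2, colonies near 11.
hist = [0, 5, 30, 20, 5, 0, 0, 0, 2, 6, 25, 30, 10, 2, 0, 0]
print(ga_threshold(hist))
```

For a single global threshold an exhaustive scan would of course suffice; the genetic formulation pays off when the search space (multiple thresholds, region parameters) is too large to enumerate.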
Johnson, Eileanoir B.; Gregory, Sarah; Johnson, Hans J.; Durr, Alexandra; Leavitt, Blair R.; Roos, Raymund A.; Rees, Geraint; Tabrizi, Sarah J.; Scahill, Rachael I.
2017-01-01
The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington’s disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software. PMID:29066997
Johnson, Eileanoir B; Gregory, Sarah; Johnson, Hans J; Durr, Alexandra; Leavitt, Blair R; Roos, Raymund A; Rees, Geraint; Tabrizi, Sarah J; Scahill, Rachael I
2017-01-01
The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington's disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software.
NASA Astrophysics Data System (ADS)
Heydarian, Mohammadreza; Kirby, Miranda; Wheatley, Andrew; Fenster, Aaron; Parraga, Grace
2012-03-01
A semi-automated method for generating hyperpolarized helium-3 (3He) measurements of individual-slice (2D) or whole-lung (3D) gas distribution was developed. 3He MRI functional images were segmented using two-dimensional (2D) and three-dimensional (3D) hierarchical K-means clustering of the 3He MRI signal, and a seeded region-growing algorithm was employed for segmentation of the 1H MRI thoracic cavity volume. 3He MRI pulmonary function measurements were generated following two-dimensional landmark-based non-rigid registration of the 3He and 1H pulmonary images. We applied this method to MRI of healthy subjects and subjects with chronic obstructive pulmonary disease (COPD). The results of hierarchical K-means 2D and 3D segmentation were compared to an expert observer's manual segmentation results using linear regression, Pearson correlations, and the Dice similarity coefficient. 2D hierarchical K-means segmentation of ventilation volume (VV) and ventilation defect volume (VDV) was strongly and significantly correlated with manual measurements (VV: r = 0.98, p < .0001; VDV: r = 0.97, p < .0001), and mean Dice coefficients were greater than 92% for all subjects. 3D hierarchical K-means segmentation of VV and VDV was also strongly and significantly correlated with manual measurements (VV: r = 0.98, p < .0001; VDV: r = 0.64, p < .0001), and the mean Dice coefficients were greater than 91% for all subjects. Both 2D and 3D semi-automated segmentation of 3He MRI gas distribution provide a way to generate novel pulmonary function measurements.
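The two building blocks reported above — K-means clustering of signal intensities and the Dice similarity coefficient used for validation — can be sketched in a few lines. This is an illustrative 1-D, two-class version with invented intensities, not the hierarchical K-means pipeline of the study.

```python
# 1-D K-means on signal intensities, plus the Dice coefficient.

def kmeans1d(xs, k=2, iters=25):
    # Spread the initial centres across the sorted intensity range.
    centers = sorted(xs)[:: max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:  # assign each value to its nearest centre
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def dice(a, b):
    """Dice similarity coefficient of two binary masks (lists of 0/1)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

# Toy signal: low values ~ ventilation defect, high values ~ ventilated.
signal = [0.05, 0.1, 0.08, 0.9, 0.85, 0.95, 0.12, 0.88]
print([round(c, 2) for c in kmeans1d(signal)])
print(dice([1, 1, 0, 0], [1, 0, 0, 1]))  # -> 0.5
```

A "hierarchical" variant, as in the study, would re-run the clustering within a cluster to split it further (e.g. separating defect from hypo-ventilated signal).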
Microstructural Organization of Elastomeric Polyurethanes with Siloxane-Containing Soft Segments
NASA Astrophysics Data System (ADS)
Choi, Taeyi; Weklser, Jadwiga; Padsalgikar, Ajay; Runt, James
2011-03-01
In the present study, we investigate the microstructure of two series of segmented polyurethanes (PUs) containing siloxane-based soft segments and the same hard segments, the latter synthesized from diphenylmethane diisocyanate and butanediol. The first series is synthesized using a hydroxy-terminated polydimethylsiloxane macrodiol and varying hard segment contents. The second series are derived from an oligomeric diol containing both siloxane and aliphatic carbonate species. Hard domain morphologies were characterized using tapping mode atomic force microscopy and quantitative analysis of hard/soft segment demixing was conducted using small-angle X-ray scattering. The phase transitions of all materials were investigated using DSC and dynamic mechanical analysis, and hydrogen bonding by FTIR spectroscopy.
NASA Astrophysics Data System (ADS)
Suzuki, H.; Matsuhiro, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, Masahiro; Moriyama, N.
2014-03-01
Chronic obstructive pulmonary disease is a major public health problem that is predicted to be the third leading cause of death in 2030. Although spirometry is traditionally used to quantify emphysema progression, it is difficult to detect the loss of pulmonary function due to emphysema at an early stage, and to assess the susceptibility to smoking. This study presents a quantification method for smoking-induced emphysema progression based on annual changes in low attenuation volume (LAV) for each lung lobe, acquired from low-dose CT images in lung cancer screening. The method consists of three steps. First, lung lobes are segmented using interlobar fissures extracted by an enhancement filter based on four-dimensional curvature. Second, the LAV of each lung lobe is segmented. Finally, smoking-induced emphysema progression is assessed by statistical analysis of the annual changes, represented by linear regression of the LAV percentage in each lung lobe. This method was applied to 140 participants in lung cancer CT screening over six years. The results showed that the LAV progressions of nonsmokers, past smokers, and current smokers differ in terms of pack-years and smoking cessation duration. This study demonstrates the method's effectiveness for diagnosis and prognosis of early emphysema in lung cancer CT screening.
Inner and outer segment junction (IS/OS line) integrity in ocular Behçet's disease.
Yüksel, Harun; Türkcü, Fatih M; Sahin, Muhammed; Cinar, Yasin; Cingü, Abdullah K; Ozkurt, Zeynep; Sahin, Alparslan; Ari, Seyhmus; Caça, Ihsan
2014-08-01
In this study, we examined the spectral domain optical coherence tomography (OCT) findings of ocular Behçet's disease (OB) in patients with inactive uveitis. Specifically, we analyzed the inner and outer segment junction (IS/OS line) integrity and the effect of disturbed IS/OS line integrity on visual acuity. Patient files and OCT images of OB patients who had been followed up between January and June 2013 at the Dicle University Eye Clinic were evaluated retrospectively. Sixty-six eyes of 39 patients were included in the study. OCT examination of the patients with inactive OB revealed that approximately 25% of the patients had disturbed IS/OS and external limiting membrane (ELM) line integrity, lower visual acuity (VA), and lower macular thickness than the others. Linear regression analysis revealed that macular thickness was not an independent variable for VA. In contrast, the IS/OS line integrity was an independent variable for VA in inactive OB patients. Further prospective studies are needed to evaluate the integrity of the IS/OS line in OB patients.
Guerra, Jorge; Uddin, Jasim; Nilsen, Dawn; McInerney, James; Fadoo, Ammarah; Omofuma, Isirame B.; Hughes, Shatif; Agrawal, Sunil; Allen, Peter; Schambra, Heidi M.
2017-01-01
There currently exist no practical tools to identify functional movements in the upper extremities (UEs). This absence has limited the precise therapeutic dosing of patients recovering from stroke. In this proof-of-principle study, we aimed to develop an accurate approach for classifying UE functional movement primitives, which comprise functional movements. Data were generated from inertial measurement units (IMUs) placed on upper body segments of older healthy individuals and chronic stroke patients. Subjects performed activities commonly trained during rehabilitation after stroke. Data processing involved the use of a sliding window to obtain statistical descriptors, and resulting features were processed by a Hidden Markov Model (HMM). The likelihoods of the states, resulting from the HMM, were segmented by a second sliding window and their averages were calculated. The final predictions were mapped to human functional movement primitives using a Logistic Regression algorithm. Algorithm performance was assessed with a leave-one-out analysis, which determined its sensitivity, specificity, and positive and negative predictive values for all classified primitives. In healthy control and stroke participants, our approach identified functional movement primitives embedded in training activities with, on average, 80% precision. This approach may support functional movement dosing in stroke rehabilitation. PMID:28813877
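The front end of a pipeline like the one above can be sketched as sliding-window statistical descriptors fed to a classifier. This simplified sketch omits the study's HMM stage and feeds the window features directly to a small logistic regression trained by gradient descent; the window sizes, feature choices, and toy "rest"/"movement" signals are all assumptions for illustration.

```python
# Sliding-window features from an IMU-like signal, classified with a
# minimal logistic regression (no HMM stage, unlike the study).

import math

def window_features(signal, width=4, step=2):
    feats = []
    for s in range(0, len(signal) - width + 1, step):
        w = signal[s:s + width]
        mu = sum(w) / width
        sd = (sum((v - mu) ** 2 for v in w) / width) ** 0.5
        feats.append([mu, sd])          # statistical descriptors
    return feats

def train_logreg(X, y, lr=0.5, epochs=500):
    w = [0.0] * len(X[0]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):        # stochastic gradient descent
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            g = 1 / (1 + math.exp(-z)) - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

# Toy data: "rest" windows are nearly flat, "movement" windows oscillate.
rest = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.1]
move = [0.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
X = window_features(rest) + window_features(move)
y = [0] * len(window_features(rest)) + [1] * len(window_features(move))
w, b = train_logreg(X, y)
print(all(predict(w, b, xi) == yi for xi, yi in zip(X, y)))
```

The within-window standard deviation alone separates the two toy classes; in practice the HMM stage smooths the per-window posteriors over time before the final logistic mapping.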
Space shuttle propulsion parameter estimation using optional estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
A regression analysis was performed on tabular aerodynamic data to provide a representative aerodynamic model for coefficient estimation. The model also reduced the storage requirements of the "normal" model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed; they were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information encoded by a Markov random field model was used for image segmentation; it effectively removes noise and yields more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, cluster centres for the different tissues and the background of a medical image are found with the fuzzy c-means (FCM) clustering method. Threshold points for multi-threshold segmentation are then located with a two-dimensional histogram method, and the image is segmented accordingly. Finally, multivariate information is fused using Dempster-Shafer evidence theory to produce the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation is more consistent with human vision and is of value for the accurate analysis and application of tissue information.
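A minimal one-dimensional fuzzy c-means sketch illustrates how cluster centres for tissue and background can be found from grayscale values, as the method above does in its clustering step. The fuzzifier m = 2, the evenly spaced initialization, and the toy pixel values are assumptions, not the paper's implementation.

```python
def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: estimate cluster centres (e.g. tissue
    and background grayscale levels) from pixel values."""
    lo, hi = min(values), max(values)
    # Evenly spaced initial centres across the grayscale range (assumption)
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    for _ in range(iters):
        # Membership u[i][k] of pixel i in cluster k
        u = []
        for x in values:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[k] / dj) ** (2 / (m - 1)) for dj in d)
                      for k in range(c)])
        # Update centres as membership-weighted means
        centers = [sum(u[i][k] ** m * values[i] for i in range(len(values)))
                   / sum(u[i][k] ** m for i in range(len(values)))
                   for k in range(c)]
    return sorted(centers)

# Toy grayscale values: dark background near 20, bright tissue near 200
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199, 201]
print(fuzzy_cmeans_1d(pixels, c=2))
```

The recovered centres (near 20 and 200 for this toy data) would serve as the starting point for choosing thresholds between clusters.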
Weichenthal, Scott; Van Ryswyk, Keith; Goldstein, Alon; Shekarrizfard, Maryam; Hatzopoulou, Marianne
2016-01-01
Exposure models are needed to evaluate the chronic health effects of ambient ultrafine particles (<0.1 μm) (UFPs). We developed a land use regression model for ambient UFPs in Toronto, Canada using mobile monitoring data collected during summer/winter 2010-2011. In total, 405 road segments were included in the analysis. The final model explained 67% of the spatial variation in mean UFPs and included terms for the logarithm of distances to highways, major roads, the central business district, Pearson airport, and bus routes as well as variables for the number of on-street trees, parks, open space, and the length of bus routes within a 100 m buffer. There was no systematic difference between measured and predicted values when the model was evaluated in an external dataset, although the R(2) value decreased (R(2) = 50%). This model will be used to evaluate the chronic health effects of UFPs using population-based cohorts in the Toronto area. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Sinclair, Jonathan; Fewtrell, David; Taylor, Paul John; Bottoms, Lindsay; Atkins, Stephen; Hobbs, Sarah Jane
2014-01-01
Achieving a high ball velocity is important during soccer shooting, as it gives the goalkeeper less time to react, thus improving a player's chance of scoring. This study aimed to identify important technical aspects of kicking linked to the generation of ball velocity using regression analyses. Maximal instep kicks were obtained from 22 academy-level soccer players using a 10-camera motion capture system sampling at 500 Hz. Three-dimensional kinematics of the lower extremity segments were obtained. Regression analysis was used to identify the kinematic parameters associated with the development of ball velocity. A single biomechanical parameter, knee extension velocity of the kicking limb at ball contact (adjusted R(2) = 0.39, p ≤ 0.01), was obtained as a significant predictor of ball velocity. This study suggests that sagittal plane knee extension velocity is the strongest contributor to ball velocity and potentially overall kicking performance. It is conceivable therefore that players may benefit from exposure to coaching and strength techniques geared towards the improvement of knee extension angular velocity as highlighted in this study.
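The reported adjusted R(2) comes from a least-squares fit with one predictor. As a sketch, with hypothetical numbers rather than the study's data, the slope, R², and adjusted R² for a single-predictor model can be computed as:

```python
def simple_ols(x, y):
    """Ordinary least squares for one predictor, with adjusted R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    # Adjusted R^2 penalizes for the number of predictors p (here p = 1)
    p = 1
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return slope, intercept, r2, adj_r2

# Hypothetical data: knee extension velocity (deg/s) vs ball velocity (m/s)
vel = [800, 900, 1000, 1100, 1200]
ball = [22.0, 23.5, 24.0, 26.0, 27.5]
slope, intercept, r2, adj_r2 = simple_ols(vel, ball)
print(round(slope, 4), round(r2, 3))
```

Adjusted R² is always at most R² and shrinks further as predictors are added, which is why it is the preferred figure in regressions like the one above.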
Mostafa, Kamal S M
2011-04-01
Malnutrition among under-five children is a chronic problem in developing countries. This study explores the socio-economic determinants of severe and moderate stunting among under-five children of rural Bangladesh. The study used data from the 2007 Bangladesh Demographic and Health Survey. Cross-sectional and multinomial logistic regression analyses were used to assess the effect of the socio-demographic variables on moderate and severe stunting over normal among the children. Findings revealed that over two-fifths of the children were stunted, of which 26.3% were moderately stunted and 15.1% were severely stunted. The multivariate multinomial logistic regression analysis yielded significantly increased risk of severe stunting (OR=2.53, 95% CI=1.34-4.79) and moderate stunting (OR=2.37, 95% CI=1.47-3.83) over normal among children with a thinner mother. Region, father's education, toilet facilities, child's age, birth order of children and wealth index were also important determinants of children's nutritional status. Development and poverty alleviation programmes should focus on the disadvantaged rural segments of people to improve their nutritional status.
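The odds ratios and confidence intervals reported above come from exponentiating multinomial logistic-regression coefficients. A sketch of that back-calculation; the coefficient and standard error here are reconstructed approximately from the published interval, not taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Reconstructing the reported severe-stunting estimate (OR = 2.53,
# 95% CI 1.34-4.79) from a plausible coefficient and standard error
beta = math.log(2.53)
se = (math.log(4.79) - math.log(1.34)) / (2 * 1.96)
or_, lo, hi = odds_ratio_ci(beta, se)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Because the interval is symmetric on the log scale, the published bounds pin down the standard error up to rounding.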
NASA Astrophysics Data System (ADS)
Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.
2014-12-01
Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.
Race and Unemployment: Labor Market Experiences of Black and White Men, 1968-1988.
ERIC Educational Resources Information Center
Wilson, Franklin D.; And Others
1995-01-01
Estimation of multinomial logistic regression models on a sample of unemployed workers suggested that persistently higher black unemployment is due to differential access to employment opportunities by region, occupational placement, labor market segmentation, and discrimination. The racial gap in unemployment is greatest for college-educated…
ERIC Educational Resources Information Center
Kelly, Ronald R.; Gaustad, Martha G.
2007-01-01
This study of deaf college students examined specific relationships between their mathematics performance and their assessed skills in reading, language, and English morphology. Simple regression analyses showed that deaf college students' language proficiency scores, reading grade level, and morphological knowledge regarding word segmentation and…
Fish habitat regression under water scarcity scenarios in the Douro River basin
NASA Astrophysics Data System (ADS)
Segurado, Pedro; Jauch, Eduardo; Neves, Ramiro; Ferreira, Teresa
2015-04-01
Climate change will predictably alter hydrological patterns and processes at the catchment scale, with impacts on habitat conditions for fish. The main goals of this study are to identify the stream reaches that will undergo more pronounced flow reduction under different climate change scenarios and to assess which fish species will be more affected by the consequent regression of suitable habitats. The interplay between changes in flow and temperature and the presence of transversal artificial obstacles (dams and weirs) is analysed. The results will contribute to river management and impact mitigation actions under climate change. This study was carried out in the Tâmega catchment of the Douro basin. A set of 29 hydrological, climatic, and hydrogeomorphological variables was modelled using a water modelling system (MOHID), based on meteorological data recorded monthly between 2008 and 2014. The same variables were modelled considering future climate change scenarios. The resulting variables were used in empirical habitat models of a set of key species (brown trout Salmo trutta fario, barbel Barbus bocagei, and nase Pseudochondrostoma duriense) using boosted regression trees. The stream segments between tributaries were used as spatial sampling units. Models were developed for the whole Douro basin using 401 fish sampling sites, although the modelled probabilities of species occurrence for each stream segment were predicted only for the Tâmega catchment. These probabilities of occurrence were used to classify stream segments into suitable and unsuitable habitat for each fish species, considering the future climate change scenario. The stream reaches that were predicted to undergo longer flow interruptions were identified and crossed with the resulting predictive maps of habitat suitability to compute the total area of habitat loss per species.
Among the target species, the brown trout was predicted to be the most sensitive to habitat regression due to the interplay of flow reduction, increase of temperature and transversal barriers. This species is therefore a good indicator of climate change impacts in rivers and therefore we recommend using this species as a target of monitoring programs to be implemented in the context of climate change adaptation strategies.
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
Axonal transports of tripeptidyl peptidase II in rat sciatic nerves.
Chikuma, Toshiyuki; Shimizu, Maki; Tsuchiya, Yukihiro; Kato, Takeshi; Hojo, Hiroshi
2007-01-01
Axonal transport of tripeptidyl peptidase II, a putative cholecystokinin inactivating serine peptidase, was examined in the proximal, middle, and distal segments of rat sciatic nerves using a double ligation technique. Enzyme activity significantly increased not only in the proximal segment but also in the distal segment 12-72h after ligation, and the maximal enzyme activity was found in the proximal and distal segments at 72h. Western blot analysis of tripeptidyl peptidase II showed that its immunoreactivities in the proximal and distal segments were 3.1- and 1.7-fold higher than that in the middle segment. The immunohistochemical analysis of the segments also showed an increase in immunoreactive tripeptidyl peptidase II level in the proximal and distal segments in comparison with that in the middle segment, indicating that tripeptidyl peptidase II is transported by anterograde and retrograde axonal flow. The results suggest that tripeptidyl peptidase II may be involved in the metabolism of neuropeptides in nerve terminals or synaptic clefts.
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computer tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied for normal and fat accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
De la Garza-Ramos, Rafael; Nakhla, Jonathan; Gelfand, Yaroslav; Echt, Murray; Scoco, Aleka N; Kinon, Merritt D; Yassari, Reza
2018-03-01
To identify predictive factors for critical care unit-level complications (CCU complications) after long-segment fusion procedures for adult spinal deformity (ASD). The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database [2010-2014] was reviewed. Only adult patients who underwent fusion of 7 or more spinal levels for ASD were included. CCU complications included intraoperative arrest/infarction, ventilation >48 hours, pulmonary embolism, renal failure requiring dialysis, cardiac arrest, myocardial infarction, unplanned intubation, septic shock, stroke, coma, or new neurological deficit. A stepwise multivariate regression was used to identify independent predictors of CCU complications. Among 826 patients, the rate of CCU complications was 6.4%. On multivariate regression analysis, dependent functional status (P=0.004), combined approach (P=0.023), age (P=0.044), diabetes (P=0.048), and surgery for over 8 hours (P=0.080) were significantly associated with complication development. A simple scoring system was developed to predict complications: 0 points for patients aged <50, 1 point for patients between 50-70, 2 points for patients 70 or over, 1 point for diabetes, 2 points for dependent functional status, 1 point for combined approach, and 1 point for surgery over 8 hours. The rate of CCU complications was 0.7%, 3.2%, 9.0%, and 12.6% for patients with 0, 1, 2, and 3+ points, respectively (P<0.001). The findings in this study suggest that older patients, patients with diabetes, patients who depend on others for activities of daily living, and patients who undergo combined approaches or surgery for over 8 hours may be at a significantly increased risk of developing a CCU-level complication after ASD surgery.
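The points system described above translates directly into code. A small sketch follows; treating age exactly 70 as 2 points is an assumption, since the abstract's age bands overlap at 70, and the rate table simply restates the abstract's figures.

```python
def ccu_risk_points(age, diabetes, dependent_status, combined_approach,
                    long_surgery):
    """Points score from the abstract: age 50-69 -> 1 point, age >= 70 -> 2;
    diabetes -> 1; dependent functional status -> 2; combined approach -> 1;
    surgery over 8 hours -> 1."""
    points = 0
    if 50 <= age < 70:
        points += 1
    elif age >= 70:
        points += 2
    points += 1 if diabetes else 0
    points += 2 if dependent_status else 0
    points += 1 if combined_approach else 0
    points += 1 if long_surgery else 0
    return points

# Observed complication rates by score band reported in the abstract
rate_by_band = {0: 0.007, 1: 0.032, 2: 0.090, 3: 0.126}  # 3 means "3+"

score = ccu_risk_points(age=74, diabetes=True, dependent_status=False,
                        combined_approach=False, long_surgery=False)
band = min(score, 3)
print(score, rate_by_band[band])
```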
Funk, Sebastian; Bogich, Tiffany L; Jones, Kate E; Kilpatrick, A Marm; Daszak, Peter
2013-01-01
The proper allocation of public health resources for research and control requires quantification of both a disease's current burden and the trend in its impact. Infectious diseases that have been labeled as "emerging infectious diseases" (EIDs) have received heightened scientific and public attention and resources. However, the label 'emerging' is rarely backed by quantitative analysis and is often used subjectively. This can lead to over-allocation of resources to diseases that are incorrectly labelled "emerging," and insufficient allocation of resources to diseases for which evidence of an increasing or high sustained impact is strong. We suggest a simple quantitative approach, segmented regression, to characterize the trends and emergence of diseases. Segmented regression identifies one or more trends in a time series and determines the most statistically parsimonious split(s) (or joinpoints) in the time series. These joinpoints in the time series indicate time points when a change in trend occurred and may identify periods in which drivers of disease impact change. We illustrate the method by analyzing temporal patterns in incidence data for twelve diseases. This approach provides a way to classify a disease as currently emerging, re-emerging, receding, or stable based on temporal trends, as well as to pinpoint the time when the change in these trends happened. We argue that quantitative approaches to defining emergence based on the trend in impact of a disease can, with appropriate context, be used to prioritize resources for research and control. Implementing this more rigorous definition of an EID will require buy-in and enforcement from scientists, policy makers, peer reviewers and journal editors, but has the potential to improve resource allocation for global health.
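The joinpoint idea, finding the most parsimonious split in a time series, can be sketched as a brute-force scan over candidate breakpoints: fit a separate least-squares line to each side and keep the split with the lowest combined residual sum of squares. This illustrates the concept, not any particular joinpoint software's algorithm; the incidence series is invented.

```python
def fit_line(xs, ys):
    """Least-squares slope/intercept and residual sum of squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_joinpoint(xs, ys, min_seg=3):
    """Scan candidate joinpoints; return the split index minimizing
    the combined SSE of two independently fitted segments."""
    best = None
    for j in range(min_seg, len(xs) - min_seg + 1):
        _, _, sse1 = fit_line(xs[:j], ys[:j])
        _, _, sse2 = fit_line(xs[j:], ys[j:])
        if best is None or sse1 + sse2 < best[1]:
            best = (j, sse1 + sse2)
    return best

# Toy incidence series: roughly flat, then rising (an "emerging" trend)
years = list(range(12))
cases = [10, 11, 10, 11, 10, 11, 14, 17, 20, 23, 26, 29]
j, sse = best_joinpoint(years, cases)
print(j)
```

In practice one would also test whether the two-segment model is statistically preferred over a single trend (e.g. by an information criterion or permutation test) before declaring a disease emerging.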
Berlin, Claudia; Jüni, Peter; Endrich, Olga; Zwahlen, Marcel
2016-01-01
Cardiovascular diseases are the leading cause of death worldwide and in Switzerland. When applied, treatment guidelines for patients with acute ST-segment elevation myocardial infarction (STEMI) improve the clinical outcome and should eliminate treatment differences by sex and age for patients whose clinical situations are identical. In Switzerland, the rate at which STEMI patients receive revascularization may vary by patient and hospital characteristics. To examine all hospitalizations in Switzerland from 2010-2011 to determine if patient or hospital characteristics affected the rate of revascularization (receiving either a percutaneous coronary intervention or a coronary artery bypass grafting) in acute STEMI patients. We used national data sets on hospital stays, and on hospital infrastructure and operating characteristics, for the years 2010 and 2011, to identify all emergency patients admitted with the main diagnosis of acute STEMI. We then calculated the proportion of patients who were treated with revascularization. We used multivariable multilevel Poisson regression to determine if receipt of revascularization varied by patient and hospital characteristics. Of the 9,696 cases we identified, 71.6% received revascularization. Patients were less likely to receive revascularization if they were female, and 80 years or older. In the multivariable multilevel Poisson regression analysis, there was a trend for small-volume hospitals performing fewer revascularizations but this was not statistically significant while being female (Relative Proportion = 0.91, 95% CI: 0.86 to 0.97) and being older than 80 years was still associated with less frequent revascularization. Female and older patients were less likely to receive revascularization. Further research needs to clarify whether this reflects differential application of treatment guidelines or limitations in this kind of routine data.
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatments. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without receiving bevacizumab-based chemotherapy treatment using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were performed respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that using all 3 statistical models, a statistically significant association was detected between the model-generated results and both of the two clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
Economic Analysis. Volume V. Course Segments 65-79.
ERIC Educational Resources Information Center
Sterling Inst., Washington, DC. Educational Technology Center.
The fifth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 65-79 of the course. Included in the volume are discussions of monopoly markets, monopolistic competition, oligopoly markets, and the theory of factor demand and supply. Other segments of the course, the…
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable, and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and to a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
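The B-spline basis functions underlying such a random regression model can be evaluated with the Cox-de Boor recursion. A sketch follows; the knot vector (one interior knot, i.e. two segments) and the evaluation age are illustrative, not the paper's actual configuration.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of order k (degree k-1) at point t, for a given knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = (t - knots[i]) / (knots[i + k - 1] - knots[i]) * \
               bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = (knots[i + k] - t) / (knots[i + k] - knots[i + 1]) * \
                bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Quadratic B-splines (order 3) on ages 0..10 with one interior knot at 5,
# giving two segments; repeated end knots clamp the basis at the boundaries
knots = [0, 0, 0, 5, 10, 10, 10]
n_basis = len(knots) - 3  # number of basis functions for order 3
row = [bspline_basis(i, 3, 2.5, knots) for i in range(n_basis)]
print([round(v, 3) for v in row], round(sum(row), 3))
```

The basis values at any age sum to one (partition of unity), and each function is nonzero over only a few adjacent segments, which is the local-support property that makes B-splines behave better than global polynomials at extreme ages.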
Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.
2015-01-01
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater, between-day and between-researcher reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
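The within-rater reliability statistic quoted above is straightforward to compute. For example, the coefficient of variation for three repeated analyses of one region's lean mass (the values here are hypothetical, not from the study):

```python
import statistics

def coefficient_of_variation(measures):
    """Within-rater coefficient of variation (%) for repeated measures."""
    return 100 * statistics.stdev(measures) / statistics.fmean(measures)

# Three repeated analyses of the same segment's lean mass (kg), hypothetical
repeats = [9.82, 9.90, 9.86]
cv = coefficient_of_variation(repeats)
print(round(cv, 2))
```

A CV under the paper's 2.4% ceiling, as here, indicates that repeated analyses of the same scan agree to within a small fraction of the measured mass.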
An interactive method based on the live wire for segmentation of the breast in mammography images.
Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu
2014-01-01
To improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are incorporated into the definition of the Live Wire cost function. FCM analysis of the image is used for edge enhancement, eliminating the interference of weak edges, and clear segmentation results for breast lumps are obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.
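At its core, Live Wire treats boundary tracing as a shortest-path problem: each pixel is a graph node, a local cost is derived from edge features (here, the Gabor/FCM-enhanced cost described above), and the minimum-cost path between user-selected seed points snaps to the object boundary. A minimal sketch with a hand-made cost map; the 4-connectivity and toy costs are assumptions:

```python
import heapq

def live_wire_path(cost, start, goal):
    """Minimal Live Wire core: Dijkstra shortest path over a pixel grid,
    where cost[r][c] is the local edge cost (low along strong edges)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost map: a cheap "edge" runs down the middle column
cost = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
print(live_wire_path(cost, (0, 1), (2, 1)))
```

The interactive part of Live Wire is then just recomputing this path live as the user moves the cursor (the goal point), with the enhanced cost function steering the path onto the lump boundary.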
Strong, Laurence L.
2012-01-01
A prototype knowledge- and object-based image analysis model was developed to inventory and map least tern and piping plover habitat on the Missouri River, USA. The model has been used to inventory the state of sandbars annually for 4 segments of the Missouri River since 2006 using QuickBird imagery. Interpretation of the state of sandbars is difficult when images for the segment are acquired at different river stages and different states of vegetation phenology and canopy cover. Concurrent QuickBird and RapidEye images were classified using the model and the spatial correspondence of classes in the land cover and sandbar maps were analysed for the spatial extent of the images and at nest locations for both bird species. Omission and commission errors were low for unvegetated land cover classes used for nesting by both bird species and for land cover types with continuous vegetation cover and water. Errors were larger for land cover classes characterized by a mixture of sand and vegetation. Sandbar classification decisions are made using information on land cover class proportions and disagreement between sandbar classes was resolved using fuzzy membership possibilities. Regression analysis of area for a paired sample of 47 sandbars indicated an average positive bias, 1.15 ha, for RapidEye that did not vary with sandbar size. RapidEye has potential to reduce temporal uncertainty about least tern and piping plover habitat but would not be suitable for mapping sandbar erosion, and characterization of sandbar shapes or vegetation patches at fine spatial resolution.
Employee choice of a high-deductible health plan across multiple employers.
Lave, Judith R; Men, Aiju; Day, Brian T; Wang, Wei; Zhang, Yuting
2011-02-01
To determine factors associated with selecting a high-deductible health plan (HDHP) rather than a preferred provider organization (PPO) plan, and to examine switching and market segmentation after initial selection. Claims and benefit information for 2005-2007 from nine employers in western Pennsylvania first offering an HDHP in 2006. We examined plan growth over time, used logistic regression to determine factors associated with choosing an HDHP, and examined the distribution of healthy and sick members across plan types. We linked employees with their dependents to determine family-level variables. We extracted risk scores, covered charges, employee age, and employee gender from claims data. We determined census-level race, education, and income information. Health status, gender, race, and education influenced the type of individual and family policies chosen. In the second year the HDHP was offered, few employees changed plans. Risk segmentation between HDHPs and PPOs existed, but it did not increase. When given a choice, those who are healthier are more likely to select an HDHP, leading to risk segmentation. Risk segmentation did not increase in the second year that HDHPs were offered. © Health Research and Educational Trust.
Drawing the line between constituent structure and coherence relations in visual narratives
Cohn, Neil; Bender, Patrick
2016-01-01
Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of Visual Narrative Grammar posits that hierarchic “grammatical” structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a “segmentation task” where participants drew lines between images in order to divide them into sub-episodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants’ divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. PMID:27709982
Drawing the line between constituent structure and coherence relations in visual narratives.
Cohn, Neil; Bender, Patrick
2017-02-01
Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a "segmentation task" where participants drew lines between images in order to divide them into subepisodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants' divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo
2018-01-01
Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). 
These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard.
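The 1-D k-means discretization step described above can be sketched in a few lines. This is an illustrative stand-in on synthetic values, not the AGES-Reykjavik pipeline, and the rounding convention used to obtain twelve classes from Sturges' formula is our assumption:

```python
import math

def kmeans_1d(xs, k, iters=100):
    """Lloyd's algorithm on a 1-D dataset (k >= 2), initialized at evenly
    spaced order statistics so the result is deterministic."""
    data = sorted(xs)
    centers = [float(data[i * (len(data) - 1) // (k - 1)]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            # assign each value to its nearest center
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        new = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return sorted(centers)

# Sturges' formula: 1 + log2(n); taking the floor yields the twelve
# subpopulations reported for n = 3,162 subjects (rounding is our assumption)
n_subjects = 3162
n_classes = int(1 + math.log2(n_subjects))
```

With well-separated values the recovered centers coincide with the cluster means; on real radiodensitometric data the class boundaries would fall at the midpoints between adjacent centers.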
Segmentation of radiographic images under topological constraints: application to the femur.
Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang
2010-09-01
A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
NASA Astrophysics Data System (ADS)
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2012-02-01
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all time points yields improved segmentation compared to independent analysis of the two time points.
Stature estimation from the lengths of the growing foot-a study on North Indian adolescents.
Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam; DiMaggio, John A
2012-12-01
Stature estimation is considered as one of the basic parameters of the investigation process in unknown and commingled human remains in medico-legal case work. Race, age and sex are the other parameters which help in this process. Stature estimation is of the utmost importance as it completes the biological profile of a person along with the other three parameters of identification. The present research is intended to formulate standards for stature estimation from foot dimensions in adolescent males from North India and study the pattern of foot growth during the growing years. 154 male adolescents from the Northern part of India were included in the study. Besides stature, five anthropometric measurements that included the length of the foot from each toe (T1, T2, T3, T4, and T5 respectively) to pternion were measured on each foot. The data was analyzed statistically using Student's t-test, Pearson's correlation, linear and multiple regression analysis for estimation of stature and growth of foot during ages 13-18 years. Correlation coefficients between stature and all the foot measurements were found to be highly significant and positively correlated. Linear regression models and multiple regression models (with age as a co-variable) were derived for estimation of stature from the different measurements of the foot. Multiple regression models (with age as a co-variable) estimate stature with greater accuracy than the regression models for 13-18 years age group. The study shows the growth pattern of feet in North Indian adolescents and indicates that anthropometric measurements of the foot and its segments are valuable in estimation of stature in growing individuals of that population. Copyright © 2012 Elsevier Ltd. All rights reserved.
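As a hedged illustration of the simple linear regression behind such stature models (the numbers below are made-up toy values, not the study's North Indian measurements):

```python
def linreg(xs, ys):
    """Ordinary least squares for a model of the form stature = a + b * foot_length."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope = covariance / variance; intercept from the means
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept a, slope b

# toy sample: foot length (cm) vs stature (cm), perfectly linear by design
foot = [24.0, 24.5, 25.0, 25.5, 26.0]
stature = [160.0, 163.0, 166.0, 169.0, 172.0]
a, b = linreg(foot, stature)  # a = 16.0, b = 6.0 on this toy data
```

The study's multiple-regression variant additionally includes age as a covariate, i.e. stature = a + b1 * foot_length + b2 * age.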
NASA Astrophysics Data System (ADS)
Febriani, F.; Handayani, L.; Setyani, A.; Anggono, T.; Syuhada; Soedjatmiko, B.
2018-03-01
The dimensionality and regional strike of the Cimandiri Fault, West Java, Indonesia, have been investigated. The Cimandiri Fault consists of six segments: Loji, Cidadap, Nyalindung, Cibeber, Saguling, and Padalarang. The magnetotelluric (MT) investigation was carried out in the Cibeber segment, with 42 observation points distributed along two lines. The magnetotelluric phase tensor was applied to determine the dimensionality and regional strike of the Cibeber segment, Cimandiri Fault, West Java. The dimensionality analysis shows that the skew angle values for the study area lie in the range -5° ≤ β ≤ 5°, indicating that, when generating a subsurface model of the Cibeber segment from the magnetotelluric data, it is safe to assume a 2-D structure. The regional strike analysis shows that the regional strike of the Cibeber segment is about N70-80°E.
Assignment of simian rotavirus SA11 temperature-sensitive mutant groups B and E to genome segments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombold, J.L.; Estes, M.K.; Ramig, R.F.
1985-05-01
Recombinant (reassortant) viruses were selected from crosses between temperature-sensitive (ts) mutants of simian rotavirus SA11 and wild-type human rotavirus Wa. The double-stranded genome RNAs of the reassortants were examined by electrophoresis in Tris-glycine-buffered polyacrylamide gels and by dot hybridization with a cloned DNA probe for genome segment 2. Analysis of replacements of genome segments in the reassortants allowed construction of a map correlating genome segments providing functions interchangeable between SA11 and Wa. The reassortants revealed a functional correspondence in order of increasing electrophoretic mobility of genome segments. Analysis of the parental origin of genome segments in ts+ SA11/Wa reassortants derived from the crosses SA11 tsB(339) X Wa and SA11 tsE(1400) X Wa revealed that the group B lesion of tsB(339) was located on genome segment 3 and the group E lesion of tsE(1400) was on segment 8.
Identification of uncommon objects in containers
Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.
2017-09-12
A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. 
Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
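The segmentation step of such models can be sketched as exact least-squares changepoint detection by dynamic programming. The sketch below is a simplified stand-in on a synthetic signal, without the mixture/EM clustering half of DP-EM; it partitions a 1-D profile into K homogeneous segments:

```python
def segment_dp(y, K):
    """Optimal K-segment partition of y minimizing within-segment squared
    error; returns the interior break positions (indices where a new
    segment starts)."""
    n = len(y)
    # prefix sums give O(1) cost evaluation per segment
    s = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):  # squared error of fitting y[i:j] by its mean
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / (j - i)

    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(K + 1)]   # D[k][j]: best cost of y[:j] in k segments
    back = [[0] * (n + 1) for _ in range(K + 1)]  # backpointers for reconstruction
    D[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = D[k - 1][i] + cost(i, j)
                if c < D[k][j]:
                    D[k][j], back[k][j] = c, i
    cuts, j = [], n
    for k in range(K, 0, -1):
        cuts.append(j)
        j = back[k][j]
    return sorted(cuts)[:-1]

# a flat profile with two copy-number shifts; breaks are recovered at 10 and 20
profile = [0.0] * 10 + [5.0] * 10 + [1.0] * 10
breaks = segment_dp(profile, 3)
```

The paper's mixture model then assigns each recovered segment a biological status (lost/normal/gained); in this sketch the segment means alone play that role.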
Dreams Fulfilled, Dreams Shattered: Determinants of Segmented Assimilation in the Second Generation
ERIC Educational Resources Information Center
Haller, William; Portes, Alejandro; Lynch, Scott M.
2011-01-01
We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early…
Sharma, Shilpa; Mehta, Puja K; Arsanjani, Reza; Sedlak, Tara; Hobel, Zachary; Shufelt, Chrisandra; Jones, Erika; Kligfield, Paul; Mortara, David; Laks, Michael; Diniz, Marcio; Bairey Merz, C Noel
2018-06-19
The utility of exercise-induced ST-segment depression for diagnosing ischemic heart disease (IHD) in women is unclear. Based on evidence that IHD pathophysiology in women involves coronary vascular dysfunction, we hypothesized that coronary vascular dysfunction contributes to exercise electrocardiography (Ex-ECG) ST-depression in the absence of obstructive CAD, so-called "false positive" results. We tested our hypothesis in a pilot study evaluating the relationship between peripheral vascular endothelial function and Ex-ECG. Twenty-nine asymptomatic women without cardiac risk factors underwent maximal Bruce protocol exercise treadmill testing and peripheral endothelial function assessment using peripheral arterial tonometry (Itamar EndoPAT 2000) to measure reactive hyperemia index (RHI). The relationship between RHI and Ex-ECG ST-segment depression was evaluated using logistic regression and differences in subgroups using two-tailed t-tests. Mean age was 54 ± 7 years, body mass index 25 ± 4 kg/m², and RHI 2.51 ± 0.66. Three women (10%) had RHI less than 1.68, consistent with abnormal peripheral endothelial function, while 18 women (62%) met criteria for a positive Ex-ECG based on ST-segment depression in contiguous leads. Women with and without ST-segment depression had similar baseline and exercise vital signs, metabolic equivalents (METS) achieved, and RHI (all p>0.05). RHI did not predict ST-segment depression. Our pilot study demonstrates a high prevalence of exercise-induced ST-segment depression in asymptomatic, middle-aged, overweight women. Peripheral vascular endothelial dysfunction did not predict Ex-ECG ST-segment depression. Further work is needed to investigate the utility of vascular endothelial testing and Ex-ECG for IHD diagnostic and management purposes in women. This article is protected by copyright. All rights reserved.
Lopez Castillo, Maria A; Carlson, Jordan A; Cain, Kelli L; Bonilla, Edith A; Chuang, Emmeline; Elder, John P; Sallis, James F
2015-01-01
The study aims were to determine: (a) how class structure varies by dance type, (b) how moderate-to-vigorous physical activity (MVPA) and sedentary behavior vary by dance class segments, and (c) how class structure relates to total MVPA in dance classes. Participants were 291 boys and girls ages 5 to 18 years old enrolled in 58 dance classes at 21 dance studios in Southern California. MVPA and sedentary behavior were assessed with accelerometry, with data aggregated to 15-s epochs. Percent and minutes of MVPA and sedentary behavior during dance class segments, and percent of class time and minutes spent in each segment, were calculated using Freedson age-specific cut points (3 metabolic equivalents of task [METs] for MVPA; <100 counts/min for sedentary behavior). Differences in MVPA and sedentary behavior were examined using mixed-effects linear regression. The length of each class segment was fairly consistent across dance types, with the exception that in ballet, more time was spent in technique as compared with private jazz/hip-hop classes and Latin-flamenco, and less time was spent in routine/practice as compared with Latin-salsa/ballet folklorico. Segment type accounted for 17% of the variance in the proportion of the segment spent in MVPA. The proportion of the segment in MVPA was higher for routine/practice (44.2%) than for technique (34.7%). The proportion of the segment in sedentary behavior was lowest for routine/practice (22.8%). The structure of dance lessons can impact youths' physical activity. Working with instructors to increase time in routine/practice during dance classes could contribute to physical activity promotion in youth.
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
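A minimal sketch of the bootstrap approach described above, on synthetic noiseless data. The four-column design matrix and the residual-resampling scheme are a common textbook formulation, not necessarily the authors' exact implementation:

```python
import random

def lstsq(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)]
         + [sum(r[i] * v for r, v in zip(X, y))] for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p + 1):
                A[r][c] -= f * A[i][c]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (A[i][p] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, p))) / A[i][i]
    return beta

def level_change_ci(ts, ys, t0, n_boot=200, alpha=0.05, seed=0):
    """Segmented-regression level change with a residual-bootstrap percentile
    CI. Design columns: intercept, time, post-intervention indicator,
    time since intervention."""
    X = [[1.0, t, 1.0 if t >= t0 else 0.0, max(0.0, t - t0)] for t in ts]
    beta = lstsq(X, ys)
    fitted = [sum(b * x for b, x in zip(beta, row)) for row in X]
    resid = [y - f for y, f in zip(ys, fitted)]
    rng = random.Random(seed)
    boots = sorted(
        lstsq(X, [f + rng.choice(resid) for f in fitted])[2]  # re-estimate level change
        for _ in range(n_boot)
    )
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return beta[2], (lo, hi)

# 24 monthly points, intervention at month 12 with a true level drop of 5
ts = list(range(24))
ys = [50.0 + 0.5 * t - (5.0 if t >= 12 else 0.0) for t in ts]
est, (lo, hi) = level_change_ci(ts, ys, 12)
```

On real autocorrelated series the residuals are not exchangeable, so a block bootstrap (or the paper's correction for autocorrelated errors before bootstrapping) would be needed.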
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Tooth segmentation system with intelligent editing for cephalometric analysis
NASA Astrophysics Data System (ADS)
Chen, Shoupu
2015-03-01
Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
Sarcopenia is associated with an increased risk of advanced colorectal neoplasia.
Park, Youn Su; Kim, Ji Won; Kim, Byeong Gwan; Lee, Kook Lae; Lee, Jae Kyung; Kim, Joo Sung; Koh, Seong-Joon
2017-04-01
Although sarcopenia is associated with an increased risk for mortality after the curative resection of colorectal cancer, its influence on the development of advanced colonic neoplasia remains unclear. This study included 1270 subjects aged 40 years or older evaluated with first-time screening colonoscopy at Seoul National University Boramae Health Care Center from January 2010 to February 2015. Skeletal muscle mass was measured with a body composition analyzer (direct segmental multifrequency bioelectrical impedance analysis method). Multiple logistic regression analysis was performed to determine whether sarcopenia is associated with advanced colorectal neoplasia. Of 1270 subjects, 139 (10.9%) were categorized into the sarcopenia group and 1131 (89.1%) into the non-sarcopenia group. In the non-sarcopenia group, 55 subjects (4.9%) had advanced colorectal neoplasia. However, in the sarcopenia group, 19 subjects (13.7%) had advanced colorectal neoplasia, including 1 subject with invasive colorectal cancer (0.7%). In addition, subjects with sarcopenia had a higher prevalence of advanced adenoma (P < 0.001) than those without sarcopenia. According to the multiple logistic regression analysis adjusted for variable confounders, age (odds ratio 1.062, 95% confidence interval 1.032-1.093; P < 0.001), male sex (odds ratio 1.749, 95% confidence interval 1.008-3.036; P = 0.047), and sarcopenia (odds ratio 2.347, 95% confidence interval 1.311-4.202; P = 0.004) were associated with an advanced colorectal neoplasia. Sarcopenia is associated with an increased risk of advanced colorectal neoplasia.
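The adjusted odds ratios above are exponentiated logistic-regression coefficients with Wald confidence intervals. A minimal sketch, where the coefficient and standard error are back-calculated from the reported sarcopenia estimate purely for illustration:

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logit coefficient and its standard error."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# beta = 0.853, se = 0.297 reproduces roughly OR 2.35 (95% CI 1.31-4.20),
# the magnitude of the sarcopenia association reported above (illustrative only)
or_est, (ci_lo, ci_hi) = odds_ratio(0.853, 0.297)
```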
A new approach to assess COPD by identifying lung function break-points
Eriksson, Göran; Jarenbäck, Linnea; Peterson, Stefan; Ankerst, Jaro; Bjermer, Leif; Tufvesson, Ellen
2015-01-01
Purpose COPD is a progressive disease which can take different routes, leading to great heterogeneity. The aim of the post-hoc analysis reported here was to perform continuous analyses of advanced lung function measurements, using linear and nonlinear regressions. Patients and methods Fifty-one COPD patients with mild to very severe disease (Global Initiative for Chronic Obstructive Lung Disease [GOLD] Stages I–IV) and 41 healthy smokers were investigated post-bronchodilation by flow-volume spirometry, body plethysmography, diffusion capacity testing, and impulse oscillometry. The relationship between COPD severity, based on forced expiratory volume in 1 second (FEV1), and different lung function parameters was analyzed by a flexible nonparametric method, linear regression, and segmented linear regression with break-points. Results Most lung function parameters were nonlinear in relation to spirometric severity. Parameters related to volume (residual volume, functional residual capacity, total lung capacity, diffusion capacity [diffusion capacity of the lung for carbon monoxide], diffusion capacity of the lung for carbon monoxide/alveolar volume) and reactance (reactance area and reactance at 5 Hz) were segmented, with break-points at 60%–70% of FEV1. FEV1/forced vital capacity (FVC) and resonance frequency had break-points around 80% of FEV1, while many resistance parameters had break-points below 40%. The slopes in percent predicted differed; resistance at 5 Hz minus resistance at 20 Hz had a linear slope change of −5.3 per unit FEV1, while residual volume had no slope change above, and a change of −3.3 per unit FEV1 below, its break-point of 61%. Conclusion Continuous analyses of different lung function parameters over the spirometric COPD severity range provided valuable information in addition to the categorical analyses.
Parameters related to volume, diffusion capacity, and reactance showed break-points around 65% of FEV1, indicating that air trapping starts to dominate in moderate COPD (FEV1 =50%–80%). This may have an impact on the patient’s management plan and selection of patients and/or outcomes in clinical research. PMID:26508849
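Segmented linear regression with a break-point, as used here (and in the interrupted time series literature summarized above), can be sketched by adding a hinge term max(0, x − c) to an ordinary least-squares model and grid-searching the break-point c for the smallest residual sum of squares. Synthetic data, not the authors' code:

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Fit y = b0 + b1*x + b2*max(0, x - c) by least squares for each
    candidate break-point c; return the c (and coefficients) with the
    smallest residual sum of squares."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    return best[1], best[2]

# Synthetic residual-volume-like pattern: slope -3.3 below a break-point
# at 61% FEV1 and flat above it (shape mimics the abstract; data invented).
rng = np.random.default_rng(0)
x = np.linspace(20, 100, 80)                      # FEV1 % predicted
y = 100 + np.where(x < 61, -3.3 * (x - 61), 0.0) + rng.normal(0, 1, x.size)
c, beta = fit_segmented(x, y, candidates=np.arange(40.0, 81.0))
print(c)  # estimated break-point; should land near the true value of 61
```

The slope below the break-point is `beta[1]` and the slope above it is `beta[1] + beta[2]`, mirroring how the abstract reports separate slopes on either side of the break-point.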
Webster, Joshua D; Michalowski, Aleksandra M; Dwyer, Jennifer E; Corps, Kara N; Wei, Bih-Rong; Juopperi, Tarja; Hoover, Shelley B; Simpson, R Mark
2012-01-01
The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden <3%). Regression-based 95% limits of agreement indicated substantial agreement for method interchangeability. Repeated measures revealed concordance correlation of >0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.
Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model
NASA Astrophysics Data System (ADS)
Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.
2018-04-01
It is difficult to extract grassland accurately by traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over- and under-segmentation seen in a single segmentation mode. The CART model was established based on spectral characteristics and texture features extracted from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the study area, and the proposed method was verified using visual interpretation results as approximate truth values. A comparison with the nearest neighbor supervised classification method was also performed. The experimental results showed that the total classification precision and Kappa coefficient of the proposed method were 95% and 0.90, respectively, versus 80% and 0.56 for the nearest neighbor supervised classification method, indicating higher classification accuracy for the proposed approach. The experiment demonstrated that the proposed method is an effective means of grassland information extraction, which can enhance the boundary of grassland classification and avoid the restriction of grassland distribution scale. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land, and water bodies.
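The CART stage can be sketched with a standard decision-tree classifier and the Kappa statistic used for accuracy assessment; the two-band feature values below are synthetic stand-ins for the spectral and texture features, not the study's data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic training samples: grassland pixels tend toward higher values
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.1, (200, 2)),    # non-grassland
               rng.normal(0.6, 0.1, (200, 2))])   # grassland
y = np.array([0] * 200 + [1] * 200)

# CART = binary decision tree grown on the feature set
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
kappa = cohen_kappa_score(y, tree.predict(X))      # agreement beyond chance
print(round(kappa, 2))
```

In the study, the trained tree would be applied to objects produced by the multi-level segmentation, and precision and Kappa would be computed against the visual-interpretation reference rather than the training labels as done here.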
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, T; Ding, H; Torabzadeh, M
2015-06-15
Purpose: To investigate the feasibility of quantifying the cross-sectional area (CSA) of coronary arteries using integrated density in a physics-based model with a phantom study. Methods: In this technique the total integrated density of the object relative to its local background is measured, making it possible to account for the partial volume effect. The proposed method was compared to manual segmentation using CT scans of a 10 cm diameter Lucite cylinder placed inside a chest phantom. Holes with cross-sectional areas from 1.4 to 12.3 mm² were drilled into the Lucite and filled with iodine solution, producing a contrast-to-noise ratio of approximately 26. Lucite rods 1.6 mm in diameter were used to simulate plaques. The phantom was imaged with and without the Lucite rods placed in the holes to simulate diseased and normal arteries, respectively. Linear regression analysis was used, and the root-mean-square deviations (RMSD) and errors (RMSE) were computed to assess the precision and accuracy of the measurements. In the case of manual segmentation, two readers independently delineated the lumen in order to quantify the inter-reader variability. Results: The precision and accuracy for the normal vessels using the integrated density technique were 0.32 mm² and 0.32 mm², respectively. The corresponding results for the manual segmentation were 0.51 mm² and 0.56 mm². In the case of diseased vessels, the precision and accuracy of the integrated density technique were 0.46 mm² and 0.55 mm², respectively. The corresponding results for the manual segmentation were 0.75 mm² and 0.98 mm². The mean percent difference for the two readers was found to be 8.4%. Conclusion: The CSA based on integrated density had improved precision and accuracy as compared with manual segmentation in a Lucite phantom. The results indicate the potential for using integrated density to improve CSA measurements in CT angiography.
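The abstract does not give the authors' exact formulation, but the integrated-density principle it describes is to sum background-subtracted intensity over a region and divide by the excess intensity of a fully filled pixel, so partially filled edge pixels contribute fractional area instead of being counted whole or discarded. A hedged sketch on a toy phantom:

```python
import numpy as np

def csa_integrated_density(img, background, full_intensity, pixel_area):
    """Estimate cross-sectional area from integrated density: partially
    filled edge pixels add fractional area, which accounts for the partial
    volume effect that hampers threshold-based segmentation."""
    excess = img.astype(float) - background
    return float(excess.sum() / (full_intensity - background) * pixel_area)

# Toy vessel: 3x3 fully filled lumen plus a half-filled edge row
img = np.full((7, 7), 50.0)     # background intensity
img[2:5, 2:5] = 250.0           # fully iodine-filled lumen pixels
img[1, 2:5] = 150.0             # 50%-filled edge pixels
area = csa_integrated_density(img, background=50.0, full_intensity=250.0,
                              pixel_area=0.25)   # mm^2 per pixel
print(area)  # 9 full + 3 half-filled pixels -> 10.5 px * 0.25 = 2.625 mm^2
```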
Remo, Jonathan W.F.; Ickes, Brian; Ryherd, Julia K.; Guida, Ross J.; Therrell, Matthew D.
2018-01-01
The impacts of dams and levees on the long-term (>130 years) discharge record were assessed along a ~1200 km segment of the Mississippi River between St. Louis, Missouri, and Vicksburg, Mississippi. To aid in our evaluation of dam impacts, we used data from the U.S. National Inventory of Dams to calculate the rate of reservoir expansion at five long-term hydrologic monitoring stations along the study segment. We divided the hydrologic record at each station into three periods: (1) a pre-rapid reservoir expansion period; (2) a rapid reservoir expansion period; and (3) a post-rapid reservoir expansion period. We then used three approaches to assess changes in the hydrologic record at each station. Indicators of hydrologic alteration (IHA) and flow duration hydrographs were used to quantify changes in flow conditions between the pre- and post-rapid reservoir expansion periods. Auto-regressive interrupted time series analysis (ARITS) was used to assess trends in maximum annual discharge, mean annual discharge, minimum annual discharge, and the standard deviation of daily discharges within a given water year. A one-dimensional HEC-RAS hydraulic model was used to assess the impact of levees on flood flows. Our results revealed that minimum annual discharges and low-flow IHA parameters showed the most significant changes. Additionally, increasing trends in minimum annual discharge during the rapid reservoir expansion period were found at three of the five hydrologic monitoring stations. These IHA and ARITS results are consistent with previous findings that reservoirs generally have the greatest impacts on low-flow conditions. River segment scale hydraulic modeling revealed that levees can modestly increase peak flood discharges, while basin-scale hydrologic modeling assessments by the U.S. Army Corps of Engineers showed that tributary reservoirs reduced peak discharges by a similar magnitude (2 to 30%). 
This finding suggests that the effects of dams and levees on peak flood discharges are in part offsetting one another along the modeled river segments and likely other substantially leveed segments of the Mississippi River.
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting tumours efficiently and accurately, with less computational time, from benign and malignant tumour images, especially for smaller tumour regions in computed tomography (CT) images. Region-based segmentation of tumour from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumour from CT images using combined grey-level and texture features with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracy is evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure (Dice metric). From the analysis and performance measures such as segmentation accuracy and the Dice metric, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
Segmental Analysis of Chlorprothixene and Desmethylchlorprothixene in Postmortem Hair.
Günther, Kamilla Nyborg; Johansen, Sys Stybe; Wicktor, Petra; Banner, Jytte; Linnet, Kristian
2018-06-26
Analysis of drugs in hair differs from their analysis in other tissues due to the extended detection window, as well as the opportunity that segmental hair analysis offers for the detection of changes in drug intake over time. The antipsychotic drug chlorprothixene is widely used, but few reports exist on chlorprothixene concentrations in hair. In this study, we analyzed hair segments from 20 deceased psychiatric patients who had undergone chronic chlorprothixene treatment, and we report hair concentrations of chlorprothixene and its metabolite desmethylchlorprothixene. Three to six 1-cm long segments were analyzed per individual, corresponding to ~3-6 months of hair growth before death, depending on the length of the hair. We used a previously published and fully validated liquid chromatography-tandem mass spectrometry method for the hair analysis. The 10th-90th percentiles of chlorprothixene and desmethylchlorprothixene concentrations in all hair segments were 0.05-0.84 ng/mg and 0.06-0.89 ng/mg, respectively, with medians of 0.21 and 0.24 ng/mg, and means of 0.38 and 0.43 ng/mg. The estimated daily dosages ranged from 28 mg/day to 417 mg/day. We found a significant positive correlation between the concentration in hair and the estimated daily doses for both chlorprothixene (P = 0.0016, slope = 0.0044 [ng/mg hair]/[mg/day]) and the metabolite desmethylchlorprothixene (P = 0.0074). Concentrations generally decreased throughout the hair shaft from proximal to distal segments, with an average reduction in concentration from segment 1 to segment 3 of 24% for all cases, indicating that most of the individuals had been compliant with their treatment. We have provided some guidance regarding reference levels for chlorprothixene and desmethylchlorprothixene concentrations in hair from patients undergoing long-term chlorprothixene treatment.
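The reported dose-concentration relation is an ordinary least-squares slope over per-case pairs. A minimal sketch; the dose/concentration pairs below are hypothetical values constructed around the published slope of 0.0044 (ng/mg hair)/(mg/day), not the study data:

```python
import numpy as np

# Hypothetical per-case values: estimated daily dose (mg/day) and hair
# concentration (ng/mg), built around the published slope of 0.0044.
dose = np.array([50.0, 100.0, 150.0, 200.0, 300.0, 400.0])
conc = 0.0044 * dose + np.array([0.02, -0.01, 0.03, 0.0, -0.02, 0.01])

slope, intercept = np.polyfit(dose, conc, 1)  # least-squares line
print(round(slope, 4))  # ≈ 0.0044 (ng/mg hair)/(mg/day)
```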
Infolding of fenestrated endovascular stent graft.
Zelt, Jason G E; Jetty, Prasad; Hadziomerovic, Adnan; Nagpal, Sudhir
2017-09-01
We report a case of infolding of a fenestrated stent graft involving the visceral vessel segment after a juxtarenal abdominal aorta aneurysm repair. The patient remains free of any significant endoleak, and the aortic sac has shown regression. The patient remains asymptomatic, with no abdominal pain, with normal renal function, and without ischemic limb complications. We hypothesize that significant graft oversizing (20%-30%) with asymmetric engineering of the diameter-reducing ties may have contributed to the infolding. Because of the patient's asymptomatic nature and general medical comorbidities, further intervention was deemed inappropriate as the aneurysmal sac is regressing despite the infolding.
2003-09-11
KENNEDY SPACE CENTER, FLA. - Seen from below and through a solid rocket booster segment mockup, Jeff Thon, an SRB mechanic with United Space Alliance, tests the feasibility of a vertical solid rocket booster propellant grain inspection technique. The inspection of segments is required as part of safety analysis.
Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.
Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A
2011-01-01
Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.
Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.
ERIC Educational Resources Information Center
Lay, Robert S.; Maguire, John J.
1983-01-01
Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
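A CHAID-style step picks, at each node, the predictor whose cross-tabulation with the outcome yields the most significant chi-square statistic (full CHAID also merges similar predictor categories, which is omitted here for brevity). A sketch on hypothetical admissions data:

```python
import numpy as np
from scipy.stats import chi2_contingency

def best_chi2_split(tables):
    """Return (name, p, chi2) for the predictor whose contingency table
    against the outcome has the smallest chi-square p-value."""
    best = None
    for name, table in tables.items():
        chi2, p, dof, _ = chi2_contingency(np.asarray(table))
        if best is None or p < best[1]:
            best = (name, p, chi2)
    return best

# Hypothetical inquiry pool: rows = predictor levels, cols = [applied, did not]
tables = {
    "campus_visit": [[120, 80], [40, 160]],   # strong association
    "home_region":  [[70, 90], [90, 110]],    # nearly identical proportions
}
name, p, chi2 = best_chi2_split(tables)
print(name)  # campus_visit wins the split
```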
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and degrees of robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics that compare the automatic to the manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied is able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
Applications of magnetic resonance image segmentation in neurology
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu
1999-05-01
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project, several PC-based software packages were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals are integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.
An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.
Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong
2014-08-01
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic image of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
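The Otsu thresholding used as the pipeline's first step chooses the cutoff that maximizes between-class variance of the intensity histogram (the paper applies it locally; a global from-scratch version is sketched below on synthetic bimodal intensities):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the histogram threshold maximizing the
    between-class variance (mu_T*omega - mu)^2 / (omega*(1 - omega))."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    omega = np.cumsum(p)            # probability of the "background" class
    mu = np.cumsum(p * centers)     # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return centers[np.argmax(np.nan_to_num(sigma_b))]

# Synthetic bimodal pixel intensities: dark background, bright nuclei
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(30, 5, 5000), rng.normal(200, 10, 1000)])
t = otsu_threshold(pixels)
print(t)  # should fall between the two intensity modes
```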
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation
1992-01-30
L. Rudin, S. Osher, G. Koepfler, J. M. Morel. Experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown the great practical potential of a real-time system for multi-sensor image analysis through pyramidal segmentation.
Damman, Peter; Holmvang, Lene; Tijssen, Jan G P; Lagerqvist, Bo; Clayton, Tim C; Pocock, Stuart J; Windhausen, Fons; Hirsch, Alexander; Fox, Keith A A; Wallentin, Lars; de Winter, Robbert J
2012-01-01
The aim of this study was to evaluate the independent prognostic value of qualitative and quantitative admission electrocardiographic (ECG) analysis regarding long-term outcomes after non-ST-segment elevation acute coronary syndromes (NSTE-ACS). From the Fragmin and Fast Revascularization During Instability in Coronary Artery Disease (FRISC II), Invasive Versus Conservative Treatment in Unstable Coronary Syndromes (ICTUS), and Randomized Intervention Trial of Unstable Angina 3 (RITA-3) patient-pooled database, 5,420 patients with NSTE-ACS with qualitative ECG data, of whom 2,901 had quantitative data, were included in this analysis. The main outcome was 5-year cardiovascular death or myocardial infarction. Hazard ratios (HRs) were calculated with Cox regression models, and adjustments were made for established outcome predictors. The additional discriminative value was assessed with the category-less net reclassification improvement and integrated discrimination improvement indexes. In the 5,420 patients, the presence of ST-segment depression (≥1 mm; adjusted HR 1.43, 95% confidence interval [CI] 1.25 to 1.63) and left bundle branch block (adjusted HR 1.64, 95% CI 1.18 to 2.28) were independently associated with long-term cardiovascular death or myocardial infarction. Risk increases were short and long term. On quantitative ECG analysis, cumulative ST-segment depression (≥5 mm; adjusted HR 1.34, 95% CI 1.05 to 1.70), the presence of left bundle branch block (adjusted HR 2.15, 95% CI 1.36 to 3.40) or ≥6 leads with inverse T waves (adjusted HR 1.22, 95% CI 0.97 to 1.55) was independently associated with long-term outcomes. No interaction was observed with treatment strategy. No improvements in net reclassification improvement and integrated discrimination improvement were observed after the addition of quantitative characteristics to a model including qualitative characteristics. 
In conclusion, in the FRISC II, ICTUS, and RITA-3 NSTE-ACS patient-pooled data set, admission ECG characteristics provided long-term prognostic value for cardiovascular death or myocardial infarction. Quantitative ECG characteristics provided no incremental discrimination compared to qualitative data. Copyright © 2012 Elsevier Inc. All rights reserved.
Farooq, Muhammad; Sazonov, Edward
2017-11-01
Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for the detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in a two-part experiment, one part under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration, as commonly reported in the research literature), with subsequent classification of the segments by linear support vector machine models. At the participant level (combining data from both the laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing segments were recognized with an average F-score of 96.28%, and the resultant area under the curve was 0.97, which are higher than any previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing, with an average mean absolute error of 3.83% at the participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and in unrestricted free-living.
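The energy-based segmentation step can be sketched as short-time energy thresholded against a multiple of its median, with runs of consecutive above-threshold windows merged into candidate chewing segments. The window length and threshold ratio below are illustrative choices, not the authors' values:

```python
import numpy as np

def energy_segments(signal, fs, win_s=0.1, ratio=5.0):
    """Split a strain-sensor signal into candidate chewing segments:
    windows whose short-time energy exceeds ratio * median energy are
    marked active, and consecutive active windows are merged."""
    n = int(win_s * fs)
    nwin = len(signal) // n
    energy = np.array([np.sum(signal[i*n:(i+1)*n] ** 2) for i in range(nwin)])
    active = energy > ratio * np.median(energy)
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start * n, i * n))
            start = None
    if start is not None:
        segments.append((start * n, nwin * n))
    return segments

# Synthetic signal: quiet baseline with one chewing-like burst at samples 300-499
fs = 100
sig = np.random.default_rng(3).normal(0, 0.1, 10 * fs)
sig[300:500] += np.sin(np.linspace(0, 40 * np.pi, 200))
segs = energy_segments(sig, fs)
print(segs)  # should isolate roughly (300, 500)
```

Each returned segment would then be passed to the classifier stage rather than being assumed to be chewing, mirroring the two-stage design described above.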
Kinon, Merritt D; Nasser, Rani; Nakhla, Jonathan P; Adogwa, Owoicho; Moreno, Jessica R; Harowicz, Michael; Verla, Terence; Yassari, Reza; Bagley, Carlos A
2016-01-01
The surgical treatment of adult scoliosis frequently involves long segment fusions across the lumbosacral joints that redistribute tremendous amounts of force to the remaining mobile spinal segments as well as to the pelvis and hip joints. Whether or not these forces increase the risk of femoral bone pathology remains unknown. The aim of this study is to determine the correlation between long segment spinal fusions to the pelvis and the antecedent development of degenerative hip pathologies, as well as what predictive patient characteristics, if any, correlate with their development. A retrospective chart review of all long segment fusions to the pelvis for adult degenerative deformity operated on by the senior author at the Duke Spine Center from February 2008 to March 2014 was undertaken. Enrolment criteria included all available demographic, surgical, and clinical outcome data as well as pre- and postoperative hip pathology assessment. All patients had prospectively collected outcome measures and a minimum 2-year follow-up. Multivariable logistic regression analysis was performed comparing the incidence of preoperative hip pain and antecedent postoperative hip pain as a function of age, gender, body mass index (BMI), and number of spinal levels fused. In total, 194 patients were enrolled in this study. Of those, 116 patients (60%) reported no hip pain prior to surgery. Eighty-three patients (71.6%) remained hip pain free, whereas 33 patients (28.5%) developed new postoperative hip pain. Age, gender, and BMI were not significant among those who went on to develop hip pain postoperatively (P < 0.0651, 0.3491, and 0.1021, respectively). Of the 78 patients with preoperative hip pain, 20 patients (25.6%) continued to have hip pain postoperatively, whereas 58 patients reported improvement in the hip pain after long segment fusion for correction of their deformity, a 74.4% rate of reduction. 
Age, gender, and BMI were not significant among those who continued to have hip pain postoperatively (P < 0.4386, 0.4637, and 0.2545, respectively). The number of levels fused was not a significant factor in the development of hip pain in either patient population: patients without preoperative pain who developed pain postoperatively (P < 0.1407), as well as patients with preoperative pain who continued to have pain postoperatively (P < 0.0772). This study demonstrates that long segment lumbosacral fusions are not associated with an increase in postoperative hip pathology. Age, gender, BMI, and levels fused do not correlate with the development of postoperative hip pain. The restoration of spinal alignment with long segment fusions may actually decrease the risk of developing femoral bone pathology and have a protective effect on the hip.
Mechanical evaluation of anastomotic tension and patency in arteries.
Zhang, F; Lineaweaver, W C; Buntic, R; Walker, R
1996-02-01
This study quantified arterial anastomotic tension, evaluated subsequent patency rates, and examined the degree of tension reduction with vessel mobilization. The study was divided into two components. In part I, a mechanical analysis was undertaken to evaluate tension, based on the determination of the force required to deflect a cable (vessel) laterally, and its resulting lateral displacement. Six Sprague-Dawley rats with 12 femoral arteries were divided into two subgroups: 1) no mobilization; and 2) axial mobilization by ligation and transection of superficial epigastric and gracilis muscular branches. The tension of femoral arterial anastomoses was calculated in vessels with no segmental defect and with 1.5-, 3-, 4.5-, 6-, and 7.5-mm defects. In part II, patency was evaluated. Fifty-five rats with 110 femoral arteries were divided into two subgroups as defined in part I: 1) no mobilization; and 2) axial mobilization by ligation and transection of superficial epigastric and gracilis muscular branches. Microvascular anastomoses were performed with no segmental defect and with 1-, 2-, 3-, 4-, 5-, 6-, 7-, 8-, 9-, and 10-mm segmental vessel defects. Patency was evaluated 24 hr postoperatively. Part I of the study revealed that anastomotic tension increased gradually with the length of the vessel defect, from 1.9 to 11.34 g in the no-mobilization group and from 1.97 to 8.44 g in the axial-mobilization group. Comparison of the tension linear regression coefficients showed a significant difference between the two groups (p < 0.05). In part II of the study, the maximum length of femoral artery defects still able to maintain 100 percent patency of anastomoses was 4 mm (tension approximately 6 g) in the no-mobilization group and 6 mm in the axial-mobilization group (tension approximately 6.48 g). Microanastomotic tension was related to the size of the vessel defect, with increasing tension leading to thrombosis.
Axial mobilization significantly reduced the tension in vessels with segmental defects and decreased thrombosis rates.
Furushima, Hiroshi; Chinushi, Masaomi; Iijima, Kenichi; Hasegawa, Kanae; Sato, Akinori; Izumi, Daisuke; Watanabe, Hiroshi; Aizawa, Yoshifusa
2012-05-01
The aim of this study was to determine whether or not the coexistence of sustained ST-segment elevation and abnormal Q waves (STe-Q) could be a risk factor for electrical storm (ES) in implanted cardioverter defibrillator (ICD) patients with structural heart diseases. In all, 156 consecutive patients who received ICD therapy for secondary prevention of sudden cardiac death and/or sustained ventricular tachyarrhythmias were included. Electrical storm was defined as ≥3 separate episodes of ventricular tachycardia (VT) and/or ventricular fibrillation (VF) terminated by ICD therapies within 24 h. During a mean follow-up of 1825 ± 1188 days, 42 (26.9%) patients experienced ES, of whom 12 had coronary artery disease, 15 had idiopathic dilated cardiomyopathy, 6 had hypertrophic cardiomyopathy, 4 had arrhythmogenic right ventricular cardiomyopathy, 4 had cardiac sarcoidosis, and 1 had valvular heart disease. Sustained ST-segment elevation and abnormal Q waves in ≥2 leads on the 12-lead electrocardiogram were observed in 33 (21%) patients. On Kaplan-Meier analysis, patients with STe-Q had a markedly higher risk of ES than those without STe-Q (P < 0.0001). The multivariate Cox proportional hazards regression model indicated that STe-Q and left ventricular ejection fraction (LVEF) (<30%) were independent risk factors associated with the recurrence of VT/VF (STe-Q: HR 1.962, 95% CI 1.24-3.12, P = 0.004; LVEF: HR 1.860, 95% CI 1.20-2.89, P = 0.006), and STe-Q was an independent risk factor associated with ES (HR 4.955, 95% CI 2.69-9.13, P < 0.0001). Sustained ST-segment elevation and abnormal Q waves could be a risk factor for not only recurrent VT/VF but also ES in patients with structural heart diseases.
NASA Astrophysics Data System (ADS)
Morrish, S.; Marshall, J. S.
2013-12-01
The Nicoya Peninsula lies within the Costa Rican forearc, where the Cocos plate subducts under the Caribbean plate at ~8.5 cm/yr. Rapid plate convergence produces frequent large earthquakes (~50 yr recurrence interval) and pronounced crustal deformation (0.1-2.0 m/ky uplift). Seven uplifted segments have been identified in previous studies using broad geomorphic surfaces (Hare & Gardner 1984) and late Quaternary marine terraces (Marshall et al. 2010). These surfaces suggest long-term net uplift and segmentation of the peninsula in response to contrasting domains of subducting seafloor (EPR, CNS-1, CNS-2). In this study, newer 10 m contour digital topographic data (CENIGA-Terra Project) will be used to characterize and delineate this segmentation using morphotectonic analysis of drainage basins and correlation of fluvial terrace/geomorphic surface elevations. The peninsula has six primary watersheds which drain into the Pacific Ocean: the Río Andamojo, Río Tabaco, Río Nosara, Río Ora, Río Bongo, and Río Ario, which range in area from 200 km2 to 350 km2. The trunk rivers follow major lineaments that define morphotectonic segment boundaries, and their drainage basins are in turn bisected by these boundaries. Morphometric analysis of the lower (1st and 2nd) order drainage basins will provide insight into segmented tectonic uplift and deformation by comparing values of drainage basin asymmetry, stream length gradient, and hypsometry with respect to margin segmentation and subducting seafloor domain. A general geomorphic analysis will be conducted alongside the morphometric analysis to map previously recognized (Morrish et al. 2010) but poorly characterized late Quaternary fluvial terraces. Stream capture and drainage divide migration are common processes throughout the peninsula in response to the ongoing deformation.
Identification and characterization of basin piracy throughout the peninsula will provide insight into the history of landscape evolution in response to differential uplift. Conducting this morphotectonic analysis of the Nicoya Peninsula will provide further constraints on rates of segment uplift, location of segment boundaries, and advance the understanding of the long term deformation of the region in relation to subduction.
Kim, Soo-Yeon; Lee, Eunjung; Nam, Se Jin; Kim, Eun-Kyung; Moon, Hee Jung; Yoon, Jung Hyun; Han, Kyung Hwa; Kwak, Jin Young
2017-01-01
This retrospective study aimed to evaluate whether ultrasound texture analysis is useful to predict lymph node metastasis in patients with papillary thyroid microcarcinoma (PTMC). This study was approved by the Institutional Review Board, and the need to obtain informed consent was waived. Between May and July 2013, 361 patients (mean age, 43.8 ± 11.3 years; range, 16-72 years) who underwent staging ultrasound (US) and subsequent thyroidectomy for conventional PTMC ≤ 10 mm were included. Each PTMC was manually segmented and its histogram parameters (Mean, Standard deviation, Skewness, Kurtosis, and Entropy) were extracted with Matlab software. The mean values of histogram parameters and clinical and US features were compared according to lymph node metastasis using the independent t-test and Chi-square test. Multivariate logistic regression analysis was performed to identify the independent factors associated with lymph node metastasis. Tumors with lymph node metastasis (n = 117) had significantly higher entropy compared to those without lymph node metastasis (n = 244) (mean ± standard deviation, 6.268 ± 0.407 vs. 6.171 ± 0.405; P = .035). No additional histogram parameters showed differences in mean values according to lymph node metastasis. Entropy was not independently associated with lymph node metastasis on multivariate logistic regression analysis (odds ratio, 0.977 [95% confidence interval (CI), 0.482-1.980]; P = .949). Younger age (odds ratio, 0.962 [95% CI, 0.940-0.984]; P = .001) and lymph node metastasis on US (odds ratio, 7.325 [95% CI, 3.573-15.020]; P < .001) were independently associated with lymph node metastasis. Texture analysis was not useful in predicting lymph node metastasis in patients with PTMC.
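The first-order histogram parameters named in this record (mean, standard deviation, skewness, kurtosis, and entropy of pixel intensities inside the segmented tumor) are standard quantities. The study used Matlab; the following Python sketch is only an equivalent illustration (the function name and bin count are our own choices, not from the paper):

```python
import numpy as np
from scipy import stats

def histogram_features(roi_pixels, bins=256):
    """First-order histogram texture features of the pixel values in an ROI."""
    px = np.asarray(roi_pixels, dtype=float).ravel()
    hist, _ = np.histogram(px, bins=bins)
    p = hist / hist.sum()          # probability of each gray-level bin
    p_nz = p[p > 0]                # avoid log(0) in the entropy term
    return {
        "mean": px.mean(),
        "std": px.std(ddof=1),
        "skewness": stats.skew(px),
        "kurtosis": stats.kurtosis(px),
        "entropy": -np.sum(p_nz * np.log2(p_nz)),  # Shannon entropy, bits
    }
```

Entropy here is the Shannon entropy of the normalized gray-level histogram; it is largest when intensities inside the lesion are spread evenly over many gray levels, which is one way lesion heterogeneity is quantified.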
Blasco, Ana; Bellas, Carmen; Goicolea, Leyre; Muñiz, Ana; Abraira, Víctor; Royuela, Ana; Mingo, Susana; Oteo, Juan Francisco; García-Touchard, Arturo; Goicolea, Francisco Javier
2017-03-01
Thrombus aspiration allows analysis of intracoronary material in patients with ST-segment elevation myocardial infarction. Our objective was to characterize this material by immunohistology and to study its possible association with patient progress. This study analyzed a prospective cohort of 142 patients undergoing primary angioplasty with positive coronary aspiration. Histological examination of aspirated samples included immunohistochemistry stains for the detection of plaque fragments. The statistical analysis comprised histological variables (thrombus age, degree of inflammation, presence of plaque), the patients' clinical and angiographic features, estimation of survival curves, and logistic regression analysis. Among the histological markers, only the presence of plaque (63% of samples) was associated with postinfarction clinical events. Factors associated with 5-year event-free survival were the presence of plaque in the aspirate (82.2% vs 66.0%; P = .033), smoking (82.5% smokers vs 66.7% nonsmokers; P = .036), culprit coronary artery (83.3% circumflex or right coronary artery vs 68.5% anterior descending artery; P = .042), final angiographic flow (80.8% II-III vs 30.0% 0-I; P < .001) and left ventricular ejection fraction ≥ 35% at discharge (83.7% vs 26.7%; P < .001). On multivariable Cox regression analysis with these variables, independent predictors of event-free survival were the presence of plaque (hazard ratio, 0.37; 95%CI, 0.18-0.77; P = .008), and left ventricular ejection fraction (hazard ratio, 0.92; 95%CI, 0.88-0.95; P < .001). The presence of plaque in the coronary aspirate of patients with ST elevation myocardial infarction may be an independent prognostic marker. CD68 immunohistochemical stain is a good method for plaque detection. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuity of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is prohibitively time-consuming, given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Individual bone structure segmentation and labeling from low-dose chest CT
NASA Astrophysics Data System (ADS)
Liu, Shuang; Xie, Yiting; Reeves, Anthony P.
2017-03-01
The segmentation and labeling of the individual bones serve as the first step to the fully automated measurement of skeletal characteristics and the detection of abnormalities such as skeletal deformities, osteoporosis, and vertebral fractures. Moreover, the identified landmarks on the segmented bone structures can potentially provide relatively reliable location references to other non-rigid human organs, such as the breast, heart and lung, thereby facilitating the corresponding image analysis and registration. A fully automated anatomy-directed framework for the segmentation and labeling of the individual bone structures from low-dose chest CT is presented in this paper. The proposed system consists of four main stages: First, both clavicles are segmented and labeled by fitting a piecewise cylindrical envelope. Second, the sternum is segmented under the spatial constraints provided by the segmented clavicles. Third, all ribs are segmented and labeled based on 3D region growing within the volume of interest defined with reference to the spinal canal centerline and lungs. Fourth, the individual thoracic vertebrae are segmented and labeled by image intensity based analysis in the spatial region constrained by the previously segmented bone structures. The system performance was validated with 1270 low-dose chest CT scans through visual evaluation. Satisfactory performance was obtained in 97.1% of cases for clavicle segmentation and labeling, 97.3% for sternum segmentation, 97.2% for rib segmentation, 94.2% for rib labeling, 92.4% for vertebra segmentation, and 89.9% for vertebra labeling.
Sex, price and preferences: accounting for unsafe sexual practices in prostitution markets.
Adriaenssens, Stef; Hendrickx, Jef
2012-06-01
Unsafe sexual practices are persistent in prostitution interactions: one in four contacts can be called unsafe. The determinants of this are still a matter for debate. We account for the roles played by clients' preferences and the hypothetical price premium of unsafe sexual practices with the help of a large dataset of clients' self-reported commercial sexual transactions in Belgium and The Netherlands. Almost 25,000 reports were collected, representing the whole gamut of prostitution market segments. The first set of explanations consists of an analysis of the price-fixing elements of paid sex. With the help of the so-called hedonic pricing method, we test for the existence of a price incentive for unsafe sex. In accordance with the results from studies in some prostitution markets in the developing world, the study replicates a significant wage penalty for condom use of an estimated 7.2 per cent, confirmed in both multilevel and fixed-effects regressions. The second part of the analysis reconstructs the demand-side basis of this wage penalty: the consistent preference of clients of prostitution for unsafe sex. This study is the first to document empirically clients' preference for intercourse without a condom, with the help of a multilevel ordinal regression. © 2011 The Authors. Sociology of Health & Illness © 2011 Foundation for the Sociology of Health & Illness/Blackwell Publishing Ltd.
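The hedonic pricing method this record relies on regresses log price on transaction attributes, so the coefficient on a condom-use dummy estimates the proportional price penalty for protected sex. A sketch on synthetic data only; every variable name, covariate, and number below is invented for illustration and is not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "condom": rng.integers(0, 2, n),          # 1 = condom used (hypothetical dummy)
    "duration": rng.normal(60, 15, n),        # minutes (hypothetical control)
    "segment": rng.integers(0, 5, n),         # market segment (hypothetical)
})
# Build prices with a built-in ~7% penalty for condom use.
df["log_price"] = (4.0 + 0.005 * df["duration"]
                   - 0.072 * df["condom"]
                   + 0.05 * df["segment"]
                   + rng.normal(0, 0.2, n))

# Hedonic regression: the condom coefficient recovers the price penalty.
model = smf.ols("log_price ~ condom + duration + C(segment)", data=df).fit()
print(round(model.params["condom"], 3))       # ≈ -0.072, i.e. ~7% condom penalty
```

Because the outcome is in logs, a coefficient of -0.072 corresponds to roughly the 7.2 per cent wage penalty the abstract reports; the study itself used multilevel and fixed-effects variants of this idea rather than plain OLS.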
Ueda, Tetsuo; Ikeda, Hitoe; Ota, Takeo; Matsuura, Toyoaki; Hara, Yoshiaki
2010-05-01
To evaluate the relationship between cataract density and the deviation from the predicted refraction. Department of Ophthalmology, Nara Medical University, Kashihara, Japan. Axial length (AL) was measured in eyes with mainly nuclear cataract using partial coherence interferometry (IOLMaster). The postoperative AL was measured in pseudophakic mode. The AL difference was calculated by subtracting the postoperative AL from the preoperative AL. Cataract density was measured with the pupil dilated using anterior segment Scheimpflug imaging (EAS-1000). The predicted postoperative refraction was calculated using the SRK/T formula. The subjective refraction 3 months postoperatively was also measured. The mean absolute prediction error (MAE) (mean of absolute difference between predicted postoperative refraction and spherical equivalent of postoperative subjective refraction) was calculated. The relationship between the MAE and cataract density, age, preoperative visual acuity, anterior chamber depth, corneal radius of curvature, and AL difference was evaluated using multiple regression analysis. In the 96 eyes evaluated, the MAE was correlated with cataract density (r = 0.37, P = .001) and the AL difference (r = 0.34, P = .003) but not with the other parameters. The AL difference was correlated with cataract density (r = 0.53, P<.0001). The postoperative refractive outcome was affected by cataract density. This should be taken into consideration in eyes with a higher density cataract. (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R.
2012-01-01
This study presents and validates a time-frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regression-based methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regression-based techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is highly robust to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases. PMID:22448233
Novel dehydrins lacking complete K-segments in Pinaceae. The exception rather than the rule
Perdiguero, Pedro; Collada, Carmen; Soto, Álvaro
2014-01-01
Dehydrins are thought to play an essential role in the plant response, acclimation and tolerance to different abiotic stresses, such as cold and drought. These proteins contain conserved and repeated segments in their amino acid sequence, used for their classification. Thus, dehydrins from angiosperms present different repetitions of the segments Y, S, and K, while gymnosperm dehydrins show A, E, S, and K segments. The only fragment present in all the dehydrins described to date is the K-segment. Different works suggest the K-segment is involved in key protective functions during dehydration stress, mainly stabilizing membranes. In this work, we describe for the first time two Pinus pinaster proteins with truncated K-segments and a third one completely lacking K-segments, but whose sequence homology leads us to consider them still as dehydrins. qRT-PCR expression analysis shows a significant induction of these dehydrins during a severe and prolonged drought stress. By in silico analysis, we confirmed the presence of these dehydrins in other Pinaceae species, breaking the convention regarding the compulsory presence of K-segments in these proteins. The mode of action of these unusual dehydrins remains unknown. PMID:25520734
The Segmentation Problem in the Study of Impromptu Speech.
ERIC Educational Resources Information Center
Loman, Bengt
A fundamental problem in the study of spontaneous speech is how to segment it for analysis. The segments should be relevant for the study of linguistic structures, speech planning, speech production, or communication strategies. Operational rules for segmentation should consider a wide variety of criteria and be hierarchically ordered. This is…
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
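The Dice coefficient the authors use to compare automatic and manually delineated cells is a standard overlap measure, 2|A∩B| / (|A| + |B|). A minimal sketch on binary segmentation masks (the example arrays are illustrative):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are treated as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom

auto   = np.array([[0, 1, 1], [0, 1, 0]])   # automatic segmentation (toy)
manual = np.array([[0, 1, 1], [1, 1, 0]])   # manual delineation (toy)
print(dice_coefficient(auto, manual))       # 2*3/(3+4) ≈ 0.857
```

A value of 1 means perfect overlap and 0 means none; the paper's average of 0.88 over all distinguishable cells indicates close agreement with the manual delineations.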
Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin
2015-03-01
We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Technical Reports Server (NTRS)
Bebis, George
2013-01-01
Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
Proportional crosstalk correction for the segmented clover at iThemba LABS
NASA Astrophysics Data System (ADS)
Bucher, T. D.; Noncolela, S. P.; Lawrie, E. A.; Dinoko, T. R. S.; Easton, J. L.; Erasmus, N.; Lawrie, J. J.; Mthembu, S. H.; Mtshali, W. X.; Shirinda, O.; Orce, J. N.
2017-11-01
Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. The modern γ-ray spectrometers, like AGATA and GRETINA, are now built of new-generation segmented germanium detectors. These most advanced detectors are able to reconstruct the trajectory of a γ-ray inside the detector. These are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured energies on the segments for two- and higher-fold events. It affects all experiments which rely on the recorded segment energies. Furthermore, incorrectly recorded energies on the segments cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained.
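Proportional crosstalk of the kind described here is commonly modeled as a linear mixing of the true segment energies, so that the correction amounts to inverting a measured crosstalk matrix. A toy sketch of that idea (the matrix entries below are invented, not the measured iThemba LABS coefficients):

```python
import numpy as np

# Toy model: recorded energies are a linear mix of true deposits,
# E_rec = C @ E_true, with small (here negative) off-diagonal crosstalk
# coefficients that would be measured during detector characterization.
n_seg = 4
C = np.eye(n_seg) + np.array([
    [ 0.000, -0.004, -0.003, -0.002],
    [-0.004,  0.000, -0.004, -0.003],
    [-0.003, -0.004,  0.000, -0.004],
    [-0.002, -0.003, -0.004,  0.000],
])

E_true = np.array([662.0, 511.0, 0.0, 0.0])   # keV, a two-fold hit
E_recorded = C @ E_true                       # non-hit segments record small negative energies
E_corrected = np.linalg.solve(C, E_recorded)  # invert the crosstalk mixing

print(np.allclose(E_corrected, E_true))       # True
```

The non-interacting segments in this toy example register the small negative "energies" the abstract mentions, and solving the linear system recovers the true deposits for any hit multiplicity.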
ERIC Educational Resources Information Center
Edelstein, Wolfgang
1999-01-01
Notes that change in the moral and cognitive realms is a long-term historical process that includes progression and regression. Reconstructs the cognitive correlates of historical progress, using as examples the emergence of invariant numbers in Mesopotamia, the growth of logic and perspectivism in the early Middle Ages, and the rise of public…
NASA Astrophysics Data System (ADS)
Varga, T.; McKinney, A. L.; Bingham, E.; Handakumbura, P. P.; Jansson, C.
2017-12-01
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications to farming and thus human food supply. X-ray computed tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. Selected Brachypodium distachyon phenotypes were grown in both natural and artificial soil mixes. The specimens were imaged by XCT, and the root architectures were extracted from the data using three different software-based methods; RooTrak, ImageJ-based WEKA segmentation, and the segmentation feature in VG Studio MAX. The 3D root image was successfully segmented at 30 µm resolution by all three methods. In this presentation, ease of segmentation and the accuracy of the extracted quantitative information (root volume and surface area) will be compared between soil types and segmentation methods. The best route to easy and accurate segmentation and root analysis will be highlighted.
Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan
2014-09-01
Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help identify high-risk situations and inform safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate data is that the number of crash-free samples increases as crash counts are divided into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of multivariate zero-inflated negative binomial (MZINB) and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
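The zero-inflated negative binomial structure underlying the MZINB/MRZINB models can be made concrete by simulating its data-generating process: a mixture of structural zeros (sites where a crash type cannot occur) and overdispersed negative binomial counts. The parameters below are invented for illustration, not estimates from the Tennessee data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
pi = 0.30                # zero-inflation probability (structural zeros)
mu, alpha = 2.0, 0.5     # negative binomial mean and overdispersion

# NB counts via the gamma-Poisson mixture: lambda ~ Gamma(1/alpha, alpha*mu)
lam = rng.gamma(shape=1 / alpha, scale=alpha * mu, size=n)
counts = rng.poisson(lam)

# With probability pi, replace the count with a structural zero.
structural_zero = rng.random(n) < pi
counts[structural_zero] = 0

# ZINB mean is (1 - pi) * mu; variance exceeds the mean (overdispersion),
# and zeros occur more often than a Poisson with the same mean would allow.
print(round(counts.mean(), 2))        # ≈ (1 - 0.30) * 2.0 = 1.4
print(counts.var() > counts.mean())   # True
```

Fitting reverses this simulation: the model jointly estimates pi and the NB parameters (with covariates entering both parts), which is what lets it absorb the excess crash-free observations the abstract highlights.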
Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A
2017-07-01
Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cells that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease-positive (D+) populations (MCF7 cells) and pure disease-negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
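The Bayesian link between test performance and detectable rarity can be made concrete. The sketch below (with illustrative sensitivity and specificity values, not the paper's measured ones, and a 50% positive-predictive-value criterion as an assumed definition of "detectable") finds the prevalence at which a positive call is only even odds of being a true D+ cell:

```python
def ppv(sensitivity, specificity, prevalence):
    """Posterior probability that a test-positive cell is truly D+ (Bayes' rule)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

def detectable_rarity(sensitivity, specificity):
    """Prevalence at which a positive call is only 50% likely to be a true D+;
    cells rarer than this are swamped by false positives."""
    return (1 - specificity) / (sensitivity + (1 - specificity))

# illustrative panel: Se = 0.99, Sp = 0.995
f = detectable_rarity(0.99, 0.995)
# detectable rarity is then about 1 cell in 1/f cells
```

Improving specificity shrinks the false-positive term and directly pushes the detectable rarity toward rarer cells, which is the effect the multivariable regression exploits.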
Fredriksson, Alexandru; Trzebiatowska-Krzynska, Aleksandra; Dyverfeldt, Petter; Engvall, Jan; Ebbers, Tino; Carlhäll, Carl-Johan
2018-04-01
To assess right ventricular (RV) turbulent kinetic energy (TKE) in patients with repaired Tetralogy of Fallot (rToF) and a spectrum of pulmonary regurgitation (PR), as well as to investigate the relationship between these 4D flow markers and RV remodeling. Seventeen patients with rToF and 10 healthy controls were included in the study. Patients were divided into two groups based on PR fraction: one lower PR fraction group (≤11%) and one higher PR fraction group (>11%). Field strength/sequences: 3D cine phase contrast (4D flow), 2D cine phase contrast (2D flow), and balanced steady-state free precession (bSSFP) at 1.5T. The RV volume was segmented in the morphologic short-axis images and TKE parameters were computed inside the segmented RV volume throughout diastole. Statistical tests: One-way analysis of variance with Bonferroni post-hoc test; unpaired t-test; Pearson correlation coefficients; simple and stepwise multiple regression models; intraclass correlation coefficient (ICC). The higher PR fraction group had more remodeled RVs (140 ± 25 vs. 107 ± 22 [lower PR fraction, P < 0.01] and 93 ± 15 ml/m² [healthy, P < 0.001] for RV end-diastolic volume index [RVEDVI]) and higher TKE values (5.95 ± 3.15 vs. 2.23 ± 0.81 [lower PR fraction, P < 0.01] and 1.91 ± 0.78 mJ [healthy, P < 0.001] for Peak Total RV TKE). Multiple regression analysis between RVEDVI and 4D/2D flow parameters showed that Peak Total RV TKE was the strongest predictor of RVEDVI (R² = 0.47, P = 0.002). The 4D flow-specific TKE markers showed a slightly stronger association with RV remodeling than conventional 2D flow PR parameters. These results suggest novel hemodynamic aspects of PR in the development of late complications after ToF repair. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1043-1053. © 2017 International Society for Magnetic Resonance in Medicine.
Reconstructing Buildings with Discontinuities and Roof Overhangs from Oblique Aerial Imagery
NASA Astrophysics Data System (ADS)
Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.
2017-05-01
This paper proposes a two-stage method for the reconstruction of city buildings with discontinuities and roof overhangs from oriented nadir and oblique aerial images. To model the structures, the input data is transformed into a dense point cloud, segmented and filtered with a modified marching cubes algorithm to reduce the positional noise. Assuming a monolithic building, the remaining vertices are initially projected onto a 2D grid and passed to RANSAC-based regression and topology analysis to geometrically determine finite wall, ground and roof planes. If this fails due to the presence of discontinuities, the regression is repeated on a 3D level by traversing voxels within the regularly subdivided bounding box of the building point set. For each cube a planar piece of the current surface is approximated and expanded. The resulting segments get mutually intersected, yielding both topological and geometrical nodes and edges. These entities are eliminated if their distance-based affiliation to the defining point sets is violated, leaving a consistent building hull including its structural breaks. To add the roof overhangs, the computed polygonal meshes are projected onto the digital surface model derived from the point cloud. Their shapes are offset equally along the edge normals with subpixel accuracy by detecting the zero-crossings of the second-order directional derivative in the gradient direction of the height bitmap and translated back into world space to become a component of the building. As soon as the reconstructed objects are finished, the aerial images are further used to generate a compact texture atlas for visualization purposes. An optimized atlas bitmap is generated that allows perspective-correct multi-source texture mapping without prior rectification, involving a partially parallel placement algorithm.
Moreover, the texture atlases undergo object-based image analysis (OBIA) to detect window areas which get reintegrated into the building models. To evaluate the performance of the proposed method a proof-of-concept test on sample structures obtained from real-world data of Heligoland/Germany has been conducted. It revealed good reconstruction accuracy in comparison to the cadastral map, a speed-up in texture atlas optimization and visually attractive render results.
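The RANSAC-based regression step can be illustrated in its simplest form, a 2D line fit to points contaminated by gross outliers; the wall/roof plane fitting is the 3D analogue of the same loop. The data below are synthetic:

```python
import random

def fit_line(p, q):
    """Slope and intercept of the line through two points."""
    m = (q[1] - p[1]) / (q[0] - p[0])
    return m, p[1] - m * p[0]

def ransac_line(points, n_iter=200, tol=0.2, seed=0):
    """RANSAC: repeatedly fit a line to a random pair of points and keep the
    candidate with the largest inlier set."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:          # skip degenerate vertical pairs
            continue
        m, b = fit_line(p, q)
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best

# 20 points on y = 2x + 1 plus a few gross outliers
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -15), (12, 60)]
m, b = ransac_line(pts)
```

The consensus criterion is what makes the fit robust: an ordinary least-squares line through the same data would be dragged off by the three outliers.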
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
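The voxel-level porosity transform can be sketched as the posterior pore probability under a two-component Gaussian mixture of grey values. The peak positions, widths and weight below are invented for illustration, not fitted to the chalk data:

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def voxel_porosity(g, w_pore, mu_pore, s_pore, mu_solid, s_solid):
    """Posterior probability that grey value g belongs to the pore phase under a
    two-component Gaussian mixture; used directly as a continuous porosity
    estimate instead of a hard 0/1 segmentation."""
    p_pore = w_pore * gaussian(g, mu_pore, s_pore)
    p_solid = (1 - w_pore) * gaussian(g, mu_solid, s_solid)
    return p_pore / (p_pore + p_solid)

# hypothetical greyscale peaks: pore ~ N(50, 15), solid ~ N(180, 20), 30% pore
phi_dark = voxel_porosity(50, 0.3, 50, 15, 180, 20)
phi_mid = voxel_porosity(115, 0.3, 50, 15, 180, 20)
phi_bright = voxel_porosity(180, 0.3, 50, 15, 180, 20)
```

Partial-volume voxels between the two peaks receive intermediate porosity values, which is exactly the information a binary threshold would discard.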
Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin
2014-05-01
Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of the available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, challenges in incorporating crash severity into the methodology, and selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean as most methods do, which more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression.
Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional negative binomial (NB) model in dealing with the preponderance-of-zeros problem and right-skewed datasets. Copyright © 2014 Elsevier Ltd. All rights reserved.
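A minimal sketch of the equivalent-PDO weighting combined with an upper-quantile screen is shown below. The severity weights and site crash histories are invented, and a fixed empirical quantile of the counts stands in for the paper's full covariate-based quantile regression:

```python
def equivalent_pdo(pdo, minor, severe, w_minor=3.0, w_severe=10.0):
    """Collapse a site's crash history into equivalent-PDO counts.
    The weights here are illustrative, not the paper's calibrated values."""
    return pdo + w_minor * minor + w_severe * severe

def quantile(values, q):
    """Empirical quantile by linear interpolation of the sorted sample."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

# (pdo, minor, severe) crash histories for ten hypothetical road segments
sites = [(4, 1, 0), (2, 0, 0), (6, 2, 1), (1, 0, 0), (3, 1, 0),
         (8, 3, 2), (2, 1, 0), (5, 0, 0), (0, 0, 0), (3, 0, 1)]
epdo = [equivalent_pdo(*s) for s in sites]
threshold = quantile(epdo, 0.9)
hot_spots = [i for i, v in enumerate(epdo) if v > threshold]
```

In the actual method the upper quantile is modeled as a function of site covariates, so the threshold varies from site to site rather than being a single network-wide cutoff.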
2003-09-11
KENNEDY SPACE CENTER, FLA. - Jeff Thon, an SRB mechanic with United Space Alliance, is fitted with a harness to test a vertical solid rocket booster propellant grain inspection technique. Thon will be lowered inside a mockup of two segments of the SRBs. The inspection of segments is required as part of safety analysis.
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias
2017-04-01
Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. The purposes of the study were therefore to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR) images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes obtained with the different methods were compared and the time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly between the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the margin of error was about three times larger (difference of mean volumes -0.1 ml; CI, -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimate of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data deliver accurate and reproducible results and were significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
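The modified ellipsoid formula reported in the study reduces to a one-line function: the standard ellipsoid volume, pi/6 × length × width × depth, scaled by the empirical correction factor 0.85 (units assumed here to be centimetres in and millilitres out):

```python
import math

def ellipsoid_tkv(length_cm, width_cm, depth_cm, correction=0.85):
    """Kidney volume estimate (ml) from three axial measurements:
    ellipsoid volume (pi/6 * L * W * D) scaled by the study's
    empirical correction factor of 0.85."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm * correction

# hypothetical kidney measuring 12 x 5 x 4 cm
vol = ellipsoid_tkv(12.0, 5.0, 4.0)
```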
2014-01-01
Background: Digital image analysis has the potential to address issues surrounding traditional histological techniques, including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods: A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results: Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage.
Conclusions: Epidermal segmentation is a crucial first step in a range of applications, including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
NASA Astrophysics Data System (ADS)
Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong
2016-03-01
In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be more effective at preserving image textures than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserving LdCT PWLS image reconstruction.
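The PWLS idea, a weighted data-fidelity term plus an MRF penalty, can be sketched in 1D. This toy version uses a single global quadratic smoothness penalty and plain gradient descent, rather than the paper's learned, tissue-specific high-order gMRF coefficients:

```python
def pwls_denoise(y, weights, beta=2.0, n_iter=200, step=0.05):
    """Minimize sum_i w_i*(x_i - y_i)^2 + beta*sum_i (x_i - x_{i-1})^2
    by gradient descent: a 1D stand-in for PWLS reconstruction with a
    quadratic (Gaussian) MRF smoothness penalty between neighbors."""
    x = list(y)
    n = len(x)
    for _ in range(n_iter):
        # gradient of the weighted data-fidelity term
        g = [2 * weights[i] * (x[i] - y[i]) for i in range(n)]
        # gradient of the pairwise smoothness penalty
        for i in range(1, n):
            d = 2 * beta * (x[i] - x[i - 1])
            g[i] += d
            g[i - 1] -= d
        x = [x[i] - step * g[i] for i in range(n)]
    return x

noisy = [1.0, 3.0, 1.2, 2.8, 1.1, 3.1, 1.0, 2.9]
smooth = pwls_denoise(noisy, [1.0] * 8)
```

In the paper the weights come from the CT noise model and the penalty coefficients vary by tissue type and neighbor direction; the structure of the objective is the same.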
Brown, Susan J; Selbie, W Scott; Wallace, Eric S
2013-01-01
A common biomechanical feature of a golf swing, described in various ways in the literature, is the interaction between the thorax and pelvis, often termed the X-Factor. There is, however, no consistent method used within the golf biomechanics literature to calculate these segment interactions. The purpose of this study was to examine X-Factor data calculated using three reported methods in order to determine the similarity or otherwise of the data calculated using each method. A twelve-camera three-dimensional motion capture system was used to capture the driver swings of 19 participants, and a subject-specific three-dimensional biomechanical model was created with the position and orientation of each model estimated using a global optimisation algorithm. Comparison of the X-Factor methods showed significant differences for events during the swing (P < 0.05). Data for each kinematic measure were derived as a time series for all three methods, and regression analysis of these data showed that whilst one method could be successfully mapped to another, the mappings between methods are subject-dependent (P < 0.05). Findings suggest that a consistent methodology considering the X-Factor from a joint angle approach is most insightful in describing a golf swing.
Pregnancy outcome after induction of labor in women with previous cesarean section.
Ashwal, Eran; Hiersch, Liran; Melamed, Nir; Ben-Zion, Maya; Brezovsky, Alex; Wiznitzer, Arnon; Yogev, Yariv
2015-03-01
As conflicting data exist concerning the safety of induction of labor (IoL) in women with a previous single lower segment cesarean section (CS), we aimed to assess pregnancy outcome following IoL in this patient population. All singleton pregnancies with a previous single CS which underwent IoL during 2008-2012 were included (study group). Their pregnancy outcome was compared to that of pregnancies with a previous single CS admitted with spontaneous onset of labor (control group). Overall, 1898 pregnancies were eligible; of these, 259 underwent IoL and 1639 were admitted with spontaneous onset of labor. Parity, gestational age at delivery and birthweight were similar. Women in the study group were more likely to undergo CS, mainly due to labor dystocia (8.1 versus 3.7%, p < 0.01). The rate of CS due to non-reassuring fetal heart rate was similar. No difference was found in the rate of uterine rupture/dehiscence. Short-term neonatal outcome was similar between the groups. On multivariable logistic regression analysis, IoL was not independently associated with uterine rupture (OR 1.33, 95% CI 0.46-3.84, p = 0.59). Our data suggest that IoL in women with one previous low segment CS neither increases the risk of uterine rupture nor adversely affects immediate neonatal outcome.
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree are important steps in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are therefore desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime of the automatic method is on the order of 30 minutes per volume, compared to several hours to a few days for manual analysis.
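Grayscale morphology of the kind the method relies on can be sketched in 1D. A black top-hat (closing minus original) responds to dark valleys narrower than the structuring element, loosely analogous to picking out dark airway lumens against brighter parenchyma; this is an illustration of the operator family, not the authors' pipeline:

```python
def dilate(signal, radius):
    """Grey dilation: running max over a window of width 2*radius + 1."""
    n = len(signal)
    return [max(signal[max(0, i - radius):min(n, i + radius + 1)]) for i in range(n)]

def erode(signal, radius):
    """Grey erosion: running min over the same window."""
    n = len(signal)
    return [min(signal[max(0, i - radius):min(n, i + radius + 1)]) for i in range(n)]

def black_tophat(signal, radius):
    """Closing (dilation then erosion) minus the original signal:
    peaks exactly where narrow dark valleys were filled in."""
    closed = erode(dilate(signal, radius), radius)
    return [c - s for c, s in zip(closed, signal)]

# bright parenchyma (value 5) with one narrow dark airway lumen (value 1)
profile = [5, 5, 5, 1, 5, 5, 5]
response = black_tophat(profile, 1)
```

In 3D the window becomes a spherical or cubic structuring element, but the fill-then-subtract logic is identical.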
Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data
NASA Astrophysics Data System (ADS)
Engel, Karin; Brechmann, André; Toennies, Klaus
The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.
Analysis of radially cracked ring segments subject to forces and couples
NASA Technical Reports Server (NTRS)
Gross, B.; Srawley, J. E.
1977-01-01
Results of planar boundary collocation analysis are given for ring segment (C-shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5 and ratios of crack length to segment width in the range 0.1 to 0.8.
Analysis of radially cracked ring segments subject to forces and couples
NASA Technical Reports Server (NTRS)
Gross, B.; Srawley, J. E.
1975-01-01
Results of planar boundary collocation analysis are given for ring segment (C shaped) specimens with radial cracks, subjected to combined forces and couples. Mode I stress intensity factors and crack mouth opening displacements were determined for ratios of outer to inner radius in the range 1.1 to 2.5, and ratios of crack length to segment width in the range 0.1 to 0.8.
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data from nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
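The key property, linearity of the equations of motion in the inertial parameters, is easy to show on a one-link toy model (a point-mass pendulum; the paper's 15-segment regressor is the same idea at scale):

```python
import math

def pendulum_torque(m, l, q, qdd, g=9.81):
    """Joint torque of a point-mass pendulum: tau = m*l^2*qdd + m*g*l*sin(q)."""
    return m * l ** 2 * qdd + m * g * l * math.sin(q)

def regressor(q, qdd, g=9.81):
    """Row of the regressor matrix Y such that tau = Y . phi, with the
    inertial parameter vector phi = [m*l^2, m*l]; Y depends only on motion."""
    return [qdd, g * math.sin(q)]

m, l = 2.0, 0.5
phi = [m * l ** 2, m * l]          # parameters enter linearly
q, qdd = 0.3, 1.5                  # a sample joint state
Y = regressor(q, qdd)
tau_lin = sum(y_k * p_k for y_k, p_k in zip(Y, phi))
```

Because torque is an exact linear function of phi, the sensitivity of any torque to a parameter is just the corresponding regressor entry, which is what makes the whole-body analysis cheap.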
Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin
2017-12-01
Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
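The core SLICE measure, strain from frame-to-frame change in segment length, reduces to a one-line formula; the segment lengths below are invented for illustration:

```python
def segment_strain(lengths, ref_index=0):
    """Lagrangian strain per cine frame from a segment-length time series:
    e_t = (L_t - L_ref) / L_ref, negative during circumferential shortening."""
    l_ref = lengths[ref_index]
    return [(l - l_ref) / l_ref for l in lengths]

# hypothetical mid-septal segment lengths (mm) across the cardiac cycle,
# measured between anatomical landmarks on short-axis cines
lengths = [40.0, 38.0, 35.0, 34.0, 36.0, 39.0]
strain = segment_strain(lengths)
peak_strain = min(strain)
```

Markers of CRT response are then derived from features of these per-segment curves (e.g., peak strain and its timing in septum versus lateral wall).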
Jayender, Jagadeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
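A scalar sketch of the LDS idea: fit s[t+1] = a*s[t] + b to an enhancement curve by least squares and use (a, b) as segmentation features. The actual method models richer dynamics; this only shows the parameter-estimation step, on synthetic data with a known generating system:

```python
def fit_lds(signal):
    """Least-squares fit of a scalar linear dynamic system s[t+1] = a*s[t] + b
    to a time series; (a, b) then serve as per-voxel features for classifying
    tumor versus background, in the spirit of LDS-based segmentation."""
    x = signal[:-1]
    y = signal[1:]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# synthetic enhancement curve generated by a known system (a=0.9, b=1.0)
s = [0.0]
for _ in range(20):
    s.append(0.9 * s[-1] + 1.0)
a_hat, b_hat = fit_lds(s)
```

Voxels whose fitted dynamics resemble rapid uptake with washout cluster away from normally enhancing tissue, which is what makes the parameters usable for segmentation.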
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose: Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. The aim was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods: We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results: The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. Conclusion: The time-series-analysis-based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
Bohra, Tasneem; Benmarhnia, Tarik; McKinnon, Britt; Kaufman, Jay S.
2017-01-01
Previous studies of inequality in health and mortality have largely focused on income-based inequality. Maternal education plays an important role in determining access to water and sanitation, and hence in the inequalities in child mortality that arise from differential access, especially in low- and middle-income countries such as Peru. This article aims to explain education-related inequalities in child mortality in Peru using a regression-based decomposition of the concentration index of child mortality. The analysis combines a concentration index, computed along the cumulative distribution of the Demographic and Health Surveys sample ranked according to maternal education, with a decomposition that measures the contribution of water and sanitation to educational inequalities in child mortality. We observed a large education-related inequality in child mortality and access to water and sanitation. There is a need for programs and policies in child health to focus on ensuring equity and to consider the educational stratification of the population in order to target its most disadvantaged segments. PMID:27821698
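The concentration index at the heart of the decomposition can be computed directly from outcomes ordered by the ranking variable. The numbers below are invented for illustration and use the common discrete formula with fractional ranks:

```python
def concentration_index(outcomes_by_rank):
    """Concentration index C = (2 / (n * mu)) * sum_i h_i * r_i - 1, with
    fractional ranks r_i = (i - 0.5) / n and the sample ordered from the
    least to the most educated. C < 0 means the outcome is concentrated
    among the disadvantaged; C = 0 means no education-related inequality."""
    n = len(outcomes_by_rank)
    mu = sum(outcomes_by_rank) / n
    weighted = sum(h * (i + 0.5) / n for i, h in enumerate(outcomes_by_rank))
    return 2 * weighted / (n * mu) - 1

# hypothetical child mortality per 1000, ordered by increasing maternal education
equal = [50, 50, 50, 50]
pro_low_education = [80, 60, 40, 20]
```

The regression-based decomposition then splits C into contributions of determinants such as water and sanitation access, each weighted by its own elasticity with respect to mortality.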
Protection against atherogenesis with the polymer drag-reducing agent Separan AP-30.
Faruqui, F I; Otten, M D; Polimeni, P I
1987-03-01
The inhibitory effect of Separan AP-30, an anionic polyacrylamide, on atherosclerotic plaque formation in the aortas of rabbits on a high (2%) cholesterol diet was tested over a period extending from 37 to 170 days. Atherogenesis was quantified morphometrically by computer-assisted image analysis of histologic cross sections of the aorta. The area of the vessel wall-atheroma interface, the fraction of lumen occluded, and other indexes of atherogenesis were measured in each of 26 segments of aorta excised from the animals, half of which received intravenous injections of Separan three times a week. Regression analysis of the morphometric data indicates that the polyelectrolyte exerts a powerful antiatherogenic effect in all regions of the aorta, reducing plaque mass to less than half in the aortic arch and to about one-fifth in the descending aorta, as compared with the aortic plaque masses in untreated rabbits. The results are compatible with the suggestion that a novel hemodynamic principle in vivo, polymer drag reduction, might be effectively applied against atherosclerosis.
Costa e Silva, Maria do Desterro da; Guimarães, Helen Arruda; Trindade Filho, Euclides Maurício; Andreoni, Solange; Ramos, Luiz Roberto
2011-12-01
To identify factors associated with functional loss in older adults living in the urban area. A cross-sectional study was carried out with a population-based sample of 319 elderly individuals from the municipality of Maceió (Northeastern Brazil), in 2009. To obtain the functional impairment data, the Brazilian Older Americans Resources and Services Multidimensional Functional Assessment Questionnaire was used. A descriptive analysis, the chi-square test, and a regression analysis for crude prevalence ratios were used, and a significance level of p < 0.05 was adopted. The majority of participants were female (65.8%) and the mean age was 72 years (SD = 8.83). The prevalence of moderate/severe impairment was 45.5%, and the associated factors were being 75 years old or older, having up to four years of schooling, reporting two or more chronic diseases, and being single. The characteristics of the elderly with functional impairment reflect inequalities and the potential impact of this population segment on health services.
Hewer, Micah J; Gough, William A
2016-11-01
Based on a case study of the Toronto Zoo (Canada), multivariate regression analysis, involving both climatic and social variables, was employed to assess the relationship between daily weather and visitation. Zoo visitation was most sensitive to weather variability during the shoulder season, followed by the off-season and, then, the peak season. Temperature was the most influential weather variable in relation to zoo visitation, followed by precipitation and, then, wind speed. The intensity and direction of the social and climatic variables varied between seasons. Temperatures exceeding 26 °C during the shoulder season and 28 °C during the peak season suggested a behavioural threshold associated with zoo visitation, with conditions becoming too warm for certain segments of the zoo visitor market, causing visitor numbers to decline. Even light amounts of precipitation caused average visitor numbers to decline by nearly 50 %. Increasing wind speeds also demonstrated a negative influence on zoo visitation.
Employee Choice of a High-Deductible Health Plan across Multiple Employers
Lave, Judith R; Men, Aiju; Day, Brian T; Wang, Wei; Zhang, Yuting
2011-01-01
Objective To determine factors associated with selecting a high-deductible health plan (HDHP) rather than a preferred provider plan (PPO) and to examine switching and market segmentation after initial selection. Data Sources/Study Setting Claims and benefit information for 2005–2007 from nine employers in western Pennsylvania first offering HDHP in 2006. Study Design We examined plan growth over time, used logistic regression to determine factors associated with choosing an HDHP, and examined the distribution of healthy and sick members across plan types. Data Extraction We linked employees with their dependents to determine family-level variables. We extracted risk scores, covered charges, employee age, and employee gender from claims data. We determined census-level race, education, and income information. Principal Findings Health status, gender, race, and education influenced the type of individual and family policies chosen. In the second year the HDHP was offered, few employees changed plans. Risk segmentation between HDHPs and PPOs existed, but it did not increase. Conclusions When given a choice, those who are healthier are more likely to select an HDHP leading to risk segmentation. Risk segmentation did not increase in the second year that HDHPs were offered. PMID:20849558
Pereira, Priscilla Perez da Silva; Da Mata, Fabiana A F; Figueiredo, Ana Claudia Godoy; de Andrade, Keitty Regina Cordeiro; Pereira, Maurício Gomes
2017-05-01
Smoking during pregnancy may negatively impact newborn birth weight. This study investigates the relationship between maternal active smoking during pregnancy and low birth weight in the Americas through systematic review and meta-analysis. A literature search was conducted through indexed databases and the grey literature. Case-control and cohort studies published between 1984 and 2016 conducted within the Americas were included without restriction regarding publication language. The article selection process and data extraction were performed by two independent investigators. A random-effects meta-analysis was conducted, and possible causes of between-study heterogeneity were evaluated by meta-regressions and subgroup analyses. Publication bias was assessed by visual inspection of Begg's funnel plot and by Egger's regression test. The literature search yielded 848 articles from which 34 studies were selected for systematic review and 30 for meta-analysis. Active maternal smoking was associated with low birth weight, OR = 2.00 (95% CI: 1.77-2.26; I2 = 66.3%). The funnel plot and Egger's test (p = .14) indicated no publication bias. Meta-regression revealed that sample size, study quality, and the number of confounders in the original studies did not account for the between-study heterogeneity. Subgroup analysis indicated no significant differences when studies were compared by design, sample size, and regions of the Americas. Low birth weight is associated with maternal active smoking during pregnancy regardless of the region in the Americas or the studies' methodological aspects. A previous search of the major electronic databases revealed that no studies appear to have been conducted to summarize the association between maternal active smoking during pregnancy and low birth weight within the Americas. Therefore, this systematic review may help to fill the information gap.
The region of the Americas contains some of the most populous countries in the world; therefore, this study may provide useful data from this massive segment of the world's population. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
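The pooling step of such a meta-analysis can be illustrated with inverse-variance weighting. The sketch below uses a fixed-effect model on hypothetical study log-odds-ratios; the study above used a random-effects model, which adds a between-study variance component (e.g., DerSimonian-Laird) that is omitted here, and the numbers are invented, not the review's data.

```python
import numpy as np

# hypothetical per-study log odds ratios and their standard errors
log_or = np.array([0.60, 0.75, 0.55, 0.90])
se = np.array([0.20, 0.15, 0.25, 0.30])

w = 1 / se ** 2                                 # inverse-variance weights
pooled = (w * log_or).sum() / w.sum()           # pooled log OR
pooled_se = np.sqrt(1 / w.sum())                # SE of the pooled estimate
or_pooled = np.exp(pooled)                      # back to the OR scale
ci = np.exp([pooled - 1.96 * pooled_se,
             pooled + 1.96 * pooled_se])        # 95% confidence interval
```

Studies with smaller standard errors (typically larger samples) receive proportionally more weight, which is why sample size was examined as a possible source of heterogeneity in the meta-regression.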
Molecular Predictors of 3D Morphogenesis by Breast Cancer Cell Lines in 3D Culture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ju; Chang, Hang; Giricz, Orsi
Correlative analysis of molecular markers with phenotypic signatures is the simplest model for hypothesis generation. In this paper, a panel of 24 breast cell lines was grown in 3D culture, their morphology was imaged through phase contrast microscopy, and computational methods were developed to segment and represent each colony at multiple dimensions. Subsequently, subpopulations from these morphological responses were identified through consensus clustering to reveal three clusters of round, grape-like, and stellate phenotypes. In some cases, cell lines with particular pathobiological phenotypes clustered together (e.g., ERBB2-amplified cell lines sharing the same morphometric properties as the grape-like phenotype). Next, associations with molecular features were realized through (i) differential analysis within each morphological cluster, and (ii) regression analysis across the entire panel of cell lines. In both cases, the dominant genes that are predictive of the morphological signatures were identified. Specifically, PPARγ has been associated with the invasive stellate morphological phenotype, which corresponds to triple-negative pathobiology. PPARγ has been validated through two supporting biological assays.
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid, have different textural properties in MR images. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical morphology based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
NASA Technical Reports Server (NTRS)
Trenchard, M. H. (Principal Investigator)
1980-01-01
Procedures and techniques for providing analyses of meteorological conditions at segments during the growing season were developed for the U.S./Canada Wheat and Barley Exploratory Experiment. The main product and analysis tool is the segment-level climagraph, which depicts meteorological variables for the current year over time, compared with climatological normals. The variable values for the segment are estimates derived through objective analysis of values obtained at first-order stations in the region. The procedures and products documented represent a baseline for future Foreign Commodity Production Forecasting experiments.
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
Factors associated with arterial stiffness in children aged 9-10 years
Batista, Milena Santos; Mill, José Geraldo; Pereira, Taisa Sabrina Silva; Fernandes, Carolina Dadalto Rocha; Molina, Maria del Carmen Bisi
2015-01-01
OBJECTIVE To analyze the factors associated with stiffness of the great arteries in prepubertal children. METHODS This study used a convenience sample of 231 schoolchildren aged 9-10 years enrolled in public and private schools in Vitória, ES, Southeastern Brazil, in 2010-2011. Anthropometric and hemodynamic data, blood pressure, and pulse wave velocity in the carotid-femoral segment were obtained. Data on current and previous health conditions were obtained by questionnaire and from notes on the child’s health card. Multiple linear regression was applied to identify the partial and total contribution of the factors in determining the pulse wave velocity values. RESULTS Among the students, 50.2% were female and 55.4% were 10 years old. Among those classified in the last tertile of pulse wave velocity, 60.0% were overweight, with higher mean blood pressure, waist circumference, and waist-to-height ratio. Birth weight was not associated with pulse wave velocity. After multiple linear regression analysis, body mass index (BMI) and diastolic blood pressure remained in the model. CONCLUSIONS BMI was the most important factor in determining arterial stiffness in children aged 9-10 years. PMID:25902563
Mattu, M J; Small, G W; Arnold, M A
1997-11-15
A multivariate calibration method is described in which Fourier transform near-infrared interferogram data are used to determine clinically relevant levels of glucose in an aqueous matrix of bovine serum albumin (BSA) and triacetin. BSA and triacetin are used to model the protein and triglycerides in blood, respectively, and are present in levels spanning the normal human physiological range. A full factorial experimental design is constructed for the data collection, with glucose at 10 levels, BSA at 4 levels, and triacetin at 4 levels. Gaussian-shaped band-pass digital filters are applied to the interferogram data to extract frequencies associated with an absorption band of interest. Separate filters of various widths are positioned on the glucose band at 4400 cm-1, the BSA band at 4606 cm-1, and the triacetin band at 4446 cm-1. Each filter is applied to the raw interferogram, producing one, two, or three filtered interferograms, depending on the number of filters used. Segments of these filtered interferograms are used together in a partial least-squares regression analysis to build glucose calibration models. The optimal calibration model is realized by use of separate segments of interferograms filtered with three filters centered on the glucose, BSA, and triacetin bands. Over the physiological range of 1-20 mM glucose, this 17-term model exhibits values of R2, standard error of calibration, and standard error of prediction of 98.85%, 0.631 mM, and 0.677 mM, respectively. These results are comparable to those obtained in a conventional analysis of spectral data. The interferogram-based method operates without the use of a separate background measurement and employs only a short section of the interferogram.
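The band-pass step described above can be sketched in the frequency domain: a Gaussian weighting centered on the band of interest is applied to the interferogram's spectrum, passing the target modulation and rejecting others. The digital frequencies and filter width below are illustrative assumptions (not the 4400 cm-1 band positions used in the study), and the subsequent partial least-squares regression step is omitted.

```python
import numpy as np

n = 1024
x = np.arange(n)
target_f = 0.125   # digital frequency of the band of interest (assumed)
other_f = 0.25     # an interfering band to be rejected (assumed)
interferogram = (np.cos(2 * np.pi * target_f * x)
                 + np.cos(2 * np.pi * other_f * x))

freqs = np.fft.rfftfreq(n)            # digital frequency axis (cycles/sample)
sigma = 0.01                          # Gaussian filter width (assumed)
gauss = np.exp(-0.5 * ((freqs - target_f) / sigma) ** 2)

# filter in the frequency domain, transform back to an interferogram
filtered = np.fft.irfft(np.fft.rfft(interferogram) * gauss, n)

spec = np.abs(np.fft.rfft(filtered))
idx_t = int(np.argmin(np.abs(freqs - target_f)))   # target-band bin
idx_o = int(np.argmin(np.abs(freqs - other_f)))    # rejected-band bin
```

Segments of such filtered interferograms, one per analyte band, would then be concatenated as predictors in the partial least-squares calibration.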
Tian, Maozhou; Zhu, Lingmin; Lin, Hongyang; Lin, Qiaoyan; Huang, Peng; Yu, Xiao; Jing, Yanyan
2017-01-01
High thrombus burden, subsequent distal embolization, and myocardial no-reflow remain a large obstacle that may negate the benefits of urgent coronary revascularization in patients with ST-segment elevation myocardial infarction (STEMI). However, the biological function of Hsp-27 and its clinical association with thrombus burden and clinical outcomes in patients with STEMI are not clear. Consecutive patients (n = 146) with STEMI undergoing primary percutaneous coronary intervention (pPCI) within 12 hours of symptom onset were enrolled in this prospective study at the Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, Shandong, P.R. China. Patients were divided into low thrombus burden and high thrombus burden groups. The present study demonstrated that patients with high thrombus burden had higher plasma Hsp-27 levels ([32.0 ± 8.6 vs. 58.0 ± 12.3] ng/mL, P < 0.001). The median Hsp-27 level in all patients with STEMI was 45 ng/mL. In receiver operating characteristic (ROC) curve analysis, plasma Hsp-27 levels were of significant diagnostic value for high thrombus burden (AUC, 0.847; 95% CI, 0.775–0.918; P < 0.01). Multivariate Cox regression analysis demonstrated that Hsp-27 > 45 ng/mL (HR 2.801, 95% CI 1.296–4.789, P = 0.001) was positively correlated with the incidence of major adverse cardiovascular events (MACE). Kaplan-Meier survival analysis demonstrated that MACE-free survival at 180-day follow-up was significantly lower in patients with Hsp-27 > 45 ng/mL (log rank = 10.28, P < 0.001). Our data demonstrate that plasma Hsp-27 was positively correlated with high thrombus burden and the incidence of MACE in patients with STEMI who underwent pPCI. PMID:29088740
FFDM image quality assessment using computerized image texture analysis
NASA Astrophysics Data System (ADS)
Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina
2010-04-01
Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm2 retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and off-set correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R2=0.92, p<=0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R2=0.95, p<=0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
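The regression of SNR on texture features can be sketched with ordinary least squares. The features below (skewness, energy, and a simple local contrast) are simplified stand-ins computed from synthetic ROIs, not the Premium View processing or the phantom data described above; the dose-to-noise relationship is likewise an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def texture_features(roi):
    """Three toy texture features of a 2D ROI."""
    z = (roi - roi.mean()) / roi.std()
    skewness = (z ** 3).mean()
    energy = (roi ** 2).mean()
    contrast = np.abs(np.diff(roi, axis=1)).mean()  # mean local difference
    return np.array([skewness, energy, contrast])

# simulate ROIs at varying dose: higher dose -> less noise -> higher SNR
features, snrs = [], []
for dose in np.linspace(0.1, 3.0, 30):
    signal = 100.0
    noise = rng.normal(0.0, signal / (10 * np.sqrt(dose)), size=(32, 32))
    roi = signal + noise
    features.append(texture_features(roi))
    snrs.append(roi.mean() / roi.std())

X = np.column_stack([np.ones(len(features)), np.array(features)])  # intercept
y = np.array(snrs)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # multiple linear regression
pred = X @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Even with these crude features, the fit recovers much of the SNR variation, mirroring the idea that texture statistics track noise level.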
Van Epps, J Scott; Chew, Douglas W; Vorp, David A
2009-10-01
Certain arteries (e.g., coronary, femoral, etc.) are exposed to cyclic flexure due to their tethering to surrounding tissue beds. It is believed that such stimuli result in a spatially variable biomechanical stress distribution, which has been implicated as a key modulator of remodeling associated with atherosclerotic lesion localization. In this study we utilized a combined ex vivo experimental/computational methodology to address the hypothesis that local variations in shear and mural stress associated with cyclic flexure influence the distribution of early markers of atherogenesis. Bilateral porcine femoral arteries were surgically harvested and perfused ex vivo under pulsatile arterial conditions. One of the paired vessels was exposed to cyclic flexure (0-0.7 cm(-1)) at 1 Hz for 12 h. During the last hour, the perfusate was supplemented with Evans blue dye-labeled albumin. A custom tissue processing protocol was used to determine the spatial distribution of endothelial permeability, apoptosis, and proliferation. Finite element and computational fluid dynamics techniques were used to determine the mural and shear stress distributions, respectively, for each perfused segment. Biological data obtained experimentally and mechanical stress data estimated computationally were combined in an experiment-specific manner using multiple linear regression analyses. Arterial segments exposed to cyclic flexure had significant increases in intimal and medial apoptosis (3.42+/-1.02 fold, p=0.029) with concomitant increases in permeability (1.14+/-0.04 fold, p=0.026). Regression analyses revealed that specific mural stress measures, including circumferential stress at systole and longitudinal pulse stress, were quantitatively correlated with the distribution of permeability and apoptosis.
The results demonstrated that local variation in mechanical stress in arterial segments subjected to cyclic flexure indeed influence the extent and spatial distribution of the early atherogenic markers. In addition, the importance of including mural stresses in the investigation of vascular mechanopathobiology was highlighted. Specific example results were used to describe a potential mechanism by which systemic risk factors can lead to a heterogeneous disease.
Small rural hospitals: an example of market segmentation analysis.
Mainous, A G; Shelby, R L
1991-01-01
In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution.
NASA Astrophysics Data System (ADS)
Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana
2017-11-01
Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their difficulty in dealing with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated for each image scale. In this paper we propose a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low resolution data sets. Our experiments on high resolution images show that this approach is able to estimate appropriate configurations for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach achieved state-of-the-art performance on the benchmark data set HRF, as measured by the F1-score and the Matthews correlation coefficient.
A study of riders' noise exposure on Bay Area Rapid Transit trains.
Dinno, Alexis; Powell, Cynthia; King, Margaret Mary
2011-02-01
Excessive noise exposure may present a hazard to hearing, cardiovascular, and psychosomatic health. Mass transit systems, such as the Bay Area Rapid Transit (BART) system, are potential sources of excessive noise. The purpose of this study was to characterize transit noise and riders' exposure to noise on the BART system using three dosimetry metrics. We made 268 dosimetry measurements on a convenience sample of 51 line segments. Dosimetry measures were modeled using linear and nonlinear multiple regression as functions of average velocity, tunnel enclosure, flooring, and wet weather conditions, and presented visually on a map of the BART system. This study provides evidence of hazardous levels of noise exposure in all three dosimetry metrics. L(eq) and L(max) measures indicate exposures well above ranges associated with increased cardiovascular and psychosomatic health risks in the published literature. L(peak) measures indicate acute exposures hazardous to adult hearing on about 1% of line segment rides and acute exposures hazardous to child hearing on about 2% of such rides. The noise to which passengers are exposed may be due to train-specific conditions (velocity and flooring), but also to rail conditions (velocity and tunnels). These findings point to possible remediation measures (revised speed limits on longer segments and on segments enclosed by tunnels). The findings also suggest that specific rail segments could be improved for noise.
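Of the dosimetry metrics above, L(eq) is an energy average of sound pressure levels rather than an arithmetic mean, so brief loud events dominate it. A minimal sketch of the standard formula, Leq = 10·log10(mean(10^(L/10))); the sample levels are invented, not BART measurements.

```python
import numpy as np

def leq(levels_db):
    """Equivalent continuous sound level (dB) of a series of short-interval
    sound pressure levels, computed by energy averaging."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

# one brief 95 dB event dominates an otherwise ~70 dB ride
ride = [70, 72, 95, 71, 73]
```

Here `leq(ride)` is about 88 dB, far above the arithmetic mean of 76.2 dB, which is why short tunnel passages can drive a whole segment's exposure metric.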
NASA Astrophysics Data System (ADS)
Li, Shouju; Shangguan, Zichang; Cao, Lijuan
A procedure based on FEM is proposed to simulate the interaction between the concrete segments of tunnel linings and the surrounding soils. The beam element named Beam3 in the ANSYS software was used to model the segments. The ground loss induced by the shield tunneling and segment installation processes is simulated in the finite element analysis. The distributions of bending moment, axial force, and shear force on the segments were computed by FEM. The computed internal forces on the segments will be used to design the reinforcing bars of the shield linings. Numerically simulated ground settlements agree with observed values.
Contreras-Gutiérrez, María Angélica; Nunes, Marcio R.T.; Guzman, Hilda; Uribe, Sandra; Gómez, Juan Carlos Gallego; Vasco, Juan David Suaza; Cardoso, Jedson F.; Popov, Vsevolod L.; Widen, Steven G.; Wood, Thomas G.; Vasilakis, Nikos; Tesh, Robert B.
2016-01-01
The genome and structural organization of a novel insect-specific orthomyxovirus, designated Sinu virus, is described. Sinu virus (SINUV) was isolated in cultures of C6/36 cells from a pool of mosquitoes collected in northwestern Colombia. The virus has six negative-sense ssRNA segments. Genetic analysis of each segment demonstrated the presence of six distinct ORFs encoding the following genes: PB2 (Segment 1), PB1 (Segment 2), PA (Segment 3), envelope GP (Segment 4), NP (Segment 5), and an M-like gene (Segment 6). Phylogenetically, SINUV appears to be most closely related to viruses in the genus Thogotovirus. PMID:27936462
Improving Situational Awareness on Submarines Using Augmented Reality
2008-09-01
General Staining and Segmentation Procedures for High Content Imaging and Analysis.
Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S
2018-01-01
Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA, for nuclear-based cell demarcation, or with probes that react with proteins, for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
[The Role of Segmental Analysis of Clonazepam in Hair in Drug Facilitated Cases].
Chen, H; Xiang, P; Shen, M
2017-06-01
To infer the dosage frequency and investigate the medication history of victims in drug-facilitated cases through the segmental analysis of clonazepam in hair. Cryogenic milling in a liquid nitrogen environment, combined with an ultrasonic bath, was used for sample pretreatment in this study, and liquid chromatography-tandem mass spectrometry was used for the segmental analysis of hair samples collected from 6 victims in different cases. The concentrations of clonazepam and 7-aminoclonazepam were determined in each hair section. Clonazepam and its metabolite 7-aminoclonazepam were detected in parts of the hair sections from the 6 victims. The occurrence time of the peak drug concentration was consistent with the intake timing provided by the victims. Segmental analysis of hair can provide information on dosage frequency and intake timing, which has a unique evidential value in drug-facilitated crimes. Copyright© by the Editorial Department of Journal of Forensic Medicine
Incorporating scale into digital terrain analysis
NASA Astrophysics Data System (ADS)
Dragut, L. D.; Eisank, C.; Strasser, T.
2009-04-01
Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 X 3 window-based algorithms are used for this purpose, thus tying the scale of the resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between the scale domains of terrain information and the soil properties of interest, which further propagate biases into soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics over increasing neighborhood sizes. The degree of association between each terrain derivative and the crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 X 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement in R squared from 0.01 when the curvature was not filtered to 0.16 when the curvature was filtered within a 55 X 55 m neighborhood. This indicates the optimum size (scale) of curvature information that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield.
Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
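The generalization-and-correlation workflow described above can be sketched as follows: a terrain layer is smoothed with a focal (moving-window) mean over increasing neighborhood sizes, and the scale level where its correlation with the response first peaks is retained. The synthetic terrain and crop-yield grids below are illustrative, not the study's data.

```python
import numpy as np

def focal_mean(grid, w):
    """Mean over a (2w+1) x (2w+1) neighborhood, edge-padded (w=0: no-op)."""
    padded = np.pad(grid, w, mode='edge')
    out = np.zeros_like(grid, dtype=float)
    n0, n1 = grid.shape
    for di in range(-w, w + 1):
        for dj in range(-w, w + 1):
            out += padded[w + di: w + di + n0, w + dj: w + dj + n1]
    return out / (2 * w + 1) ** 2

rng = np.random.default_rng(1)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
broad = np.sin(xx / 12.0) + np.cos(yy / 15.0)    # coarse-scale terrain signal
terrain = broad + rng.normal(0.0, 1.0, (n, n))   # fine-scale noise dominates
crop_yield = broad + rng.normal(0.0, 0.1, (n, n))  # yield tracks coarse scale

corrs = []
for w in range(6):  # neighborhood "radius"; w=0 is the unfiltered layer
    smoothed = focal_mean(terrain, w)
    corrs.append(np.corrcoef(smoothed.ravel(), crop_yield.ravel())[0, 1])
best_w = int(np.argmax(corrs))
```

As in the mean-curvature example, the unfiltered layer correlates poorly, and smoothing to the scale of the underlying signal raises the correlation markedly.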
Guo, Yanyong; Li, Zhibin; Wu, Yao; Xu, Chengcheng
2018-06-01
Bicyclists running the red light at crossing facilities increase the potential of colliding with motor vehicles. Exploring the contributing factors could improve the prediction of red-light running probability and help develop countermeasures to reduce such behaviors. However, individuals may have unobserved heterogeneities in running a red light, which makes accurate prediction more challenging. Traditional models assume that factor parameters are fixed and cannot capture the varying impacts on red-light running behaviors. In this study, we employed the full Bayesian random parameters logistic regression approach to account for the unobserved heterogeneous effects. Two types of crossing facilities were considered: signalized intersection crosswalks and road segment crosswalks. Electric and conventional bikes were distinguished in the modeling. Data were collected from 16 crosswalks in the urban area of Nanjing, China. Factors such as individual characteristics, road geometric design, environmental features, and traffic variables were examined. Model comparison indicates that the full Bayesian random parameters logistic regression approach is statistically superior to the standard logistic regression model. More red-light runners are predicted at signalized intersection crosswalks than at road segment crosswalks. Factors affecting red-light running behaviors are gender, age, bike type, road width, presence of a raised median, separation width, signal type, green ratio, bike and vehicle volume, and average vehicle speed. Factors associated with the unobserved heterogeneity are gender, bike type, signal type, separation width, and bike volume. Copyright © 2018 Elsevier Ltd. All rights reserved.
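The core idea of a random-parameters logistic model (coefficients that vary across individuals rather than being fixed) can be illustrated by simulation; a full Bayesian fit would require MCMC, which this numpy sketch deliberately omits. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_red_light_running(n, mu_beta, sigma_beta):
    """Illustrative random-parameters logistic model: each cyclist i has
    an individual coefficient beta_i ~ N(mu_beta, sigma_beta), so the
    effect of a covariate (e.g. waiting time) varies across individuals
    instead of being a single fixed parameter."""
    x = rng.normal(size=n)                        # standardized covariate
    beta_i = rng.normal(mu_beta, sigma_beta, n)   # heterogeneous effects
    logit = -0.5 + beta_i * x                     # intercept -0.5 assumed
    p = 1.0 / (1.0 + np.exp(-logit))              # violation probability
    y = rng.random(n) < p                         # observed violations
    return x, y, p

x, y, p = simulate_red_light_running(10_000, mu_beta=0.8, sigma_beta=0.4)
```

A fixed-parameters model corresponds to `sigma_beta = 0`; the random-parameters specification generalizes it by letting the covariate effect differ per individual.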
Spinal anesthesia: a comparison of procaine and lidocaine.
Le Truong, H H; Girard, M; Drolet, P; Grenier, Y; Boucher, C; Bergeron, L
2001-05-01
To compare spinal procaine to spinal lidocaine with regard to their main clinical characteristics and incidence of transient radicular irritation (TRI). In this randomized, double-blind, prospective study, patients (two groups, n=30 each) received either 100 mg of lidocaine 5% in 7.5% glucose (Group L) or 100 mg of procaine 10% diluted with 1 ml cerebrospinal fluid (Group P). After spinal anesthesia, the segmental level of sensory block was assessed by pinprick. Blood pressure and the height of the block were noted each minute for the first ten minutes, then every three minutes for the next 35 min and finally every five minutes until regression of the block to L4. Motor blockade was evaluated using the Bromage scale. To evaluate the presence of TRI, each patient was questioned 48 hr after surgery. Time to highest sensory level and to maximum number of segments blocked showed no difference between groups. Mean times for sensory regression to T10 and for regression of the motor block were shorter in Group P. Eighty minutes following injection, sensory levels were lower in Group P. Five patients had inadequate surgical anesthesia in Group P and only one in Group L. No patient in Group P had TRI (95% CI 0-12%) while eight (27%) in Group L did (95% CI 12-46%). Procaine 10% was associated with a clinical failure rate of 14.2%. This characteristic must be balanced against an absence of TRI, which occurs more frequently with the use of lidocaine 5%.
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
AIM To investigate and quantify changes in the branching patterns of the retinal vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average fractal dimension D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average fractal dimension D for the normal images (segmented and skeletonized versions) is higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension.
Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
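The box-counting estimate of the fractal dimension D used in studies like this one can be sketched in a few lines. The implementation below is a generic illustration of box counting on a binary skeletonized image, not the ImageJ procedure the authors used:

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the fractal dimension D of a binary (e.g. skeletonized
    vessel) image by box counting: count occupied boxes N(s) at box
    sizes s and fit log N(s) = -D log s + c."""
    img = np.asarray(binary_img, dtype=bool)
    n = min(img.shape)
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s <= n // 2]
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s          # crop to a multiple of s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As sanity checks, a filled region returns D near 2 and a single straight line returns D near 1; vascular skeletons typically fall in between.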
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawant, A; Modiri, A; Bland, R
Purpose: Post-treatment radiation injury to central and peripheral airways is a potentially important, yet under-investigated determinant of toxicity in lung stereotactic ablative radiotherapy (SAbR). We integrate virtual bronchoscopy technology into the radiotherapy planning process to spatially map and quantify the radiosensitivity of bronchial segments, and propose novel IMRT planning that limits airway dose through non-isotropic intermediate- and low-dose spillage. Methods: Pre- and ∼8.5 months post-SAbR diagnostic-quality CT scans were retrospectively collected from six NSCLC patients (50–60Gy in 3–5 fractions). From each scan, ∼5 branching levels of the bronchial tree were segmented using LungPoint, a virtual bronchoscopic navigation system. The pre-SAbR CT and the segmented bronchial tree were imported into the Eclipse treatment planning system and deformably registered to the planning CT. The five-fraction equivalent dose from the clinically-delivered plan was calculated for each segment using the Universal Survival Curve model. The pre- and post-SAbR CTs were used to evaluate radiation-induced segmental collapse. Two of six patients exhibited significant segmental collapse with associated atelectasis and fibrosis, and were re-planned using IMRT. Results: Multivariate stepwise logistic regression over six patients (81 segments) showed that D0.01cc (minimum point dose within the 0.01cc receiving highest dose) was a significant independent factor associated with collapse (odds-ratio=1.17, p=0.010). The D0.01cc threshold for collapse was 57Gy, above which, collapse rate was 45%. In the two patients exhibiting segmental collapse, 22 out of 32 segments showed D0.01cc >57Gy. IMRT re-planning reduced D0.01cc below 57Gy in 15 of the 22 segments (68%) while simultaneously achieving the original clinical plan objectives for PTV coverage and OAR-sparing.
Conclusion: Our results indicate that the administration of lung SAbR can result in significant injury to bronchial segments, potentially impairing post-SAbR lung function. To our knowledge, this is the first investigation of functional avoidance based on mapping and minimizing dose to individual bronchial segments. The presenting author receives research funding from Varian Medical Systems, Elekta, and VisionRT.
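The kind of dose-response logistic model reported above (segment collapse vs. D0.01cc, with an odds ratio per Gy) can be illustrated with a small Newton-Raphson fit. The doses and outcome probabilities below are invented for the sketch; they are not the study's data:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Fit a simple logistic regression by Newton-Raphson (IRLS).
    exp(beta[1]) is then the odds ratio per unit of the covariate
    (e.g. per Gy of D0.01cc in a collapse-vs-dose model)."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                          # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Hypothetical doses (Gy) and collapse indicators for illustration only.
rng = np.random.default_rng(1)
dose = rng.uniform(30, 80, 500)
y = (rng.random(500) < 1 / (1 + np.exp(-(0.157 * dose - 9)))).astype(float)
beta = logistic_fit(dose, y)
odds_ratio = np.exp(beta[1])   # an OR of 1.17 would mean +17% odds per Gy
```

A stepwise multivariate fit, as in the study, would add candidate covariates one at a time and retain only significant ones; the single-covariate fit above shows only the core estimation step.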
3D Texture Features Mining for MRI Brain Tumor Identification
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra
2014-03-01
Medical image segmentation is a process to extract regions of interest and to divide an image into its individual meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. For computer-aided diagnosis and therapy, medical image segmentation is an initial mandatory step. It is a sophisticated and challenging task because of the complex nature of medical images. Indeed, successful medical image analysis is heavily dependent on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object. 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM as the segmentation technique in the testing methodologies.
Padma, A; Sukanesh, R
2013-01-01
A computer software system is designed for the segmentation and classification of benign from malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select both the dominant run length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by Fuzzy c-means clustering (FCM), and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). This study constructed the SVM-based classifier with the selected features and compared the segmentation results with the experienced radiologist's labelled ground truth (target). Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification effectiveness as measured by the Jaccard index, sensitivity, and specificity.
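The feature-selection and evaluation pipeline described above (17 texture features reduced to 6 by PCA, an SVM classifier assessed by 10-fold cross-validation, and a Jaccard overlap measure for segmentation) can be sketched with scikit-learn. The data here are random stand-ins, not the 206 CT slices:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in data: 206 slices x 17 texture features.
rng = np.random.default_rng(0)
X = rng.normal(size=(206, 17))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=206) > 0).astype(int)

# Reduce 17 features to 6 principal components, then classify with an
# SVM, evaluated by 10-fold cross-validation as in the study.
clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)

def jaccard_index(seg, truth):
    """Overlap between a segmented tumour mask and the ground truth."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    return np.logical_and(seg, truth).sum() / np.logical_or(seg, truth).sum()
```

The FCM segmentation step itself is omitted; any binary tumour mask can be scored against the radiologist's ground truth with `jaccard_index`.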
En face spectral domain optical coherence tomography analysis of lamellar macular holes.
Clamp, Michael F; Wilkes, Geoff; Leis, Laura S; McDonald, H Richard; Johnson, Robert N; Jumper, J Michael; Fu, Arthur D; Cunningham, Emmett T; Stewart, Paul J; Haug, Sara J; Lujan, Brandon J
2014-07-01
To analyze the anatomical characteristics of lamellar macular holes using cross-sectional and en face spectral domain optical coherence tomography. Forty-two lamellar macular holes were retrospectively identified for analysis. The location, cross-sectional length, and area of lamellar holes were measured using B-scans and en face imaging. The presence of photoreceptor inner segment/outer segment disruption and the presence or absence of epiretinal membrane formation were recorded. Forty-two lamellar macular holes were identified. Intraretinal splitting occurred within the outer plexiform layer in 97.6% of eyes. The area of intraretinal splitting in lamellar holes did not correlate with visual acuity. Eyes with inner segment/outer segment disruption had significantly worse mean logMAR visual acuity (0.363 ± 0.169; Snellen = 20/46) than eyes without inner segment/outer segment disruption (0.203 ± 0.124; Snellen = 20/32) (analysis of variance, P = 0.004). Epiretinal membrane was present in 34 of 42 eyes (81.0%). En face imaging allowed for consistent detection and quantification of intraretinal splitting within the outer plexiform layer in patients with lamellar macular holes, supporting the notion that an area of anatomical weakness exists within Henle's fiber layer, presumably at the synaptic connection of these fibers within the outer plexiform layer. However, while the en face area of intraretinal splitting did not correlate with visual acuity, disruption of the inner segment/outer segment junction was associated with significantly worse visual acuity in patients with lamellar macular holes.
de Freitas, Carolina; Ruggeri, Marco; Manns, Fabrice; Ho, Arthur; Parel, Jean-Marie
2013-01-15
We present a method for measuring the average group refractive index of the human crystalline lens in vivo using an optical coherence tomography (OCT) system which allows full-length biometry of the eye. A series of OCT images of the eye including the anterior segment and retina were recorded during accommodation. Optical lengths of the anterior chamber, lens, and vitreous were measured dynamically along the central axis on the OCT images. The group refractive index of the crystalline lens along the central axis was determined using linear regression analysis of the intraocular optical length measurements. Measurements were acquired on three subjects of age 21, 24, and 35 years. The average group refractive index for the three subjects was, respectively, n=1.41, 1.43, and 1.39 at 835 nm.
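One way to see how linear regression of intraocular optical lengths can recover the lens group index: since the eye's overall geometric length is essentially fixed during accommodation, a geometric lens thickening appears as an optical lengthening of the lens and a proportional optical shortening of the surrounding media. Under that reading of the method (with simulated data, an assumed aqueous/vitreous index, and invented dimensions), the regression slope of one optical length against the other yields the lens index:

```python
import numpy as np

n_other = 1.336   # assumed aqueous/vitreous group index
rng = np.random.default_rng(2)

# Simulated accommodation: lens geometric thickness varies while the
# overall geometric eye length stays fixed (all lengths in mm, invented).
delta_lens = np.linspace(0.0, 0.3, 50)   # lens thickening during accommodation
n_lens_true = 1.41
opl_lens = n_lens_true * (3.6 + delta_lens) + rng.normal(scale=1e-3, size=50)
opl_rest = n_other * (20.0 - delta_lens) + rng.normal(scale=1e-3, size=50)

# Regress lens optical length on the remaining optical length:
# slope = -n_lens / n_other, so n_lens = -slope * n_other.
slope, _ = np.polyfit(opl_rest, opl_lens, 1)
n_lens = -slope * n_other
```

The recovered `n_lens` matches the simulated value; in the actual measurement the optical lengths come from dynamic OCT biometry rather than simulation.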
Research in the application of spectral data to crop identification and assessment, volume 2
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T. (Principal Investigator); Hixson, M. M.; Bauer, M. E.
1980-01-01
The development of spectrometric crop development stage models is discussed with emphasis on models for corn and soybeans. One photothermal and four thermal meteorological models are evaluated. Spectral data were investigated as a source of information for crop yield models. Intercepted solar radiation and soil productivity are identified as factors related to yield which can be estimated from spectral data. Several techniques for machine classification of remotely sensed data for crop inventory were evaluated. Early season estimation, training procedures, the relationship of scene characteristics to classification performance, and full frame classification methods were studied. The optimal level for combining area and yield estimates of corn and soybeans is assessed utilizing current technology: digital analysis of LANDSAT MSS data on sample segments to provide area estimates and regression models to provide yield estimates.
Child Schooling in Ethiopia: The Role of Maternal Autonomy.
Gebremedhin, Tesfaye Alemayehu; Mohanty, Itismita
2016-01-01
This paper examines the effects of maternal autonomy on child schooling outcomes in Ethiopia using a nationally representative Ethiopian Demographic and Health survey for 2011. The empirical strategy uses a Hurdle Negative Binomial Regression model to estimate years of schooling. An ordered probit model is also estimated to examine age grade distortion using a trichotomous dependent variable that captures three states of child schooling. The large sample size and the range of questions available in this dataset allow us to explore the influence of individual and household level social, economic and cultural factors on child schooling. The analysis finds statistically significant effects of maternal autonomy variables on child schooling in Ethiopia. The roles of maternal autonomy and other household-level factors on child schooling are important issues in Ethiopia, where health and education outcomes are poor for large segments of the population.
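A hurdle negative binomial model, as used above, factors years of schooling into a zero/positive hurdle and a zero-truncated count part. A minimal sketch of the model's probability mass function follows; the parameters are hypothetical, not estimates from the Ethiopian data:

```python
import numpy as np
from scipy.stats import nbinom

def hurdle_nb_pmf(y, p_pos, r, q):
    """PMF of a hurdle negative binomial model (illustrative sketch):
    a hurdle decides zero vs positive schooling years with P(Y>0)=p_pos;
    positive counts follow a zero-truncated NB(r, q). In a full model,
    p_pos and the NB mean would be regression functions of covariates
    such as maternal autonomy."""
    y = np.asarray(y)
    pmf_zero = 1.0 - p_pos
    trunc = nbinom.pmf(y, r, q) / (1.0 - nbinom.pmf(0, r, q))
    return np.where(y == 0, pmf_zero, p_pos * trunc)

probs = hurdle_nb_pmf(np.arange(0, 15), p_pos=0.7, r=5, q=0.45)
```

The hurdle structure lets the factors that determine whether a child ever enrolls differ from those that determine how long an enrolled child stays in school.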
Customer Churn Prediction for Broadband Internet Services
NASA Astrophysics Data System (ADS)
Huang, B. Q.; Kechadi, M.-T.; Buckley, B.
Although churn prediction has been an area of research in the voice branch of telecommunications services, more focused studies on the huge growth area of broadband Internet services are limited. Therefore, this paper presents a new set of features for broadband Internet customer churn prediction, based on Henley segments, broadband usage, dial types, dial-up spend, line information, bill and payment information, and account information. Four prediction techniques (Logistic Regression, Decision Trees, Multilayer Perceptron Neural Networks and Support Vector Machines) are then applied to customer churn prediction based on the new features. Finally, the new features are evaluated and a comparative analysis of the predictors is made for broadband customer churn prediction. The experimental results show that the new features with these four modelling techniques are effective for customer churn prediction in the broadband service field.
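The four-model comparison described above can be sketched with scikit-learn on stand-in data; the synthetic features below merely play the role of the Henley-segment, usage, and billing variables:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in churn data: 12 features acting as usage/spend/billing variables,
# with roughly 20% churners (class imbalance is typical of churn data).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    "svm": SVC(),
}
accuracy = {name: make_pipeline(StandardScaler(), m).fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

With imbalanced churn data, metrics such as recall on the churner class or AUC are usually more informative than raw accuracy; accuracy is used here only to keep the sketch short.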
Automated Agatston score computation in non-ECG gated CT scans using deep learning
NASA Astrophysics Data System (ADS)
Cano-Espinosa, Carlos; González, Germán.; Washko, George R.; Cazorla, Miguel; San José Estépar, Raúl
2018-03-01
Introduction: The Agatston score is a well-established metric of cardiovascular disease related to clinical outcomes. It is computed from CT scans by a) measuring the volume and intensity of the atherosclerotic plaques and b) aggregating such information in an index. Objective: To generate a convolutional neural network that takes a non-contrast chest CT scan as input and directly outputs the associated Agatston score, without a prior segmentation of Coronary Artery Calcifications (CAC). Materials and methods: We use a database of 5973 non-contrast non-ECG gated chest CT scans where the Agatston score has been manually computed. The heart of each scan is cropped automatically using an object detector. The database is split into 4973 cases for training and 1000 for testing. We train a 3D deep convolutional neural network to regress the Agatston score directly from the extracted hearts. Results: The proposed method yields a Pearson correlation coefficient of r = 0.93; p <= 0.0001 against the manual reference standard in the 1000 test cases. It further correctly stratifies 72.6% of the cases with respect to standard risk groups. This compares to more complex state-of-the-art methods based on prior segmentations of the CACs, which achieve r = 0.94 in ECG-gated pulmonary CT. Conclusions: A convolutional neural network can regress the Agatston score from the image of the heart directly, without a prior segmentation of the CACs. This is a new and simpler paradigm in Agatston score computation that yields results similar to the state-of-the-art literature.
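A minimal PyTorch sketch of the paradigm (direct regression from a cropped heart volume to a scalar score); the architecture below is illustrative and much smaller than the network trained on the 5973-scan database:

```python
import torch
import torch.nn as nn

class AgatstonRegressor(nn.Module):
    """Toy 3D CNN mapping a cropped heart volume directly to a scalar
    Agatston score; layer sizes here are invented for illustration."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over the volume
        )
        self.head = nn.Linear(16, 1)          # scalar regression head

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(1)

model = AgatstonRegressor()
scores = model(torch.randn(2, 1, 32, 32, 32))   # two cropped heart volumes
```

Training would minimize a regression loss (e.g. mean squared error) between predicted and manually computed scores; no CAC mask ever enters the pipeline, which is the point of the approach.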
Shi, Yuyan; Sears, Lindsay E; Coberley, Carter R; Pope, James E
2013-04-01
Adverse health and productivity outcomes have imposed a considerable economic burden on employers. To facilitate optimal worksite intervention designs tailored to differing employee risk levels, the authors established cutoff points for an Individual Well-Being Score (IWBS) based on a global measure of well-being. Cross-sectional associations between IWBS and adverse health and productivity outcomes, including high health care cost, emergency room visits, short-term disability days, absenteeism, presenteeism, low job performance ratings, and low intentions to stay with the employer, were studied in a sample of 11,702 employees from a large employer. Receiver operating characteristic curves were evaluated to detect a single optimal cutoff value of the IWBS for predicting 2 or more adverse outcomes. More granular segmentation was achieved by computing relative risks of each adverse outcome from logistic regressions accounting for sociodemographic characteristics. Results showed strong and significant nonlinear associations between IWBS and health and productivity outcomes. An IWBS of 75 was found to be the optimal single cutoff point to discriminate 2 or more adverse outcomes. Logistic regression models found abrupt reductions of relative risk also clustered at IWBS cutoffs of 53, 66, and 88, in addition to 75, which segmented employees into high, high-medium, medium, low-medium, and low risk groups. To determine validity and generalizability, cutoff values were applied in a smaller employee population (N=1853) and confirmed significant differences between risk groups across health and productivity outcomes. The reported segmentation of IWBS into discrete cohorts based on risk of adverse health and productivity outcomes should facilitate well-being comparisons and worksite interventions.
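The single-cutoff step can be sketched with an ROC analysis. Youden's J is one common criterion for choosing such a cutoff (the abstract does not state which criterion was used), and the score distributions below are invented:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
# Hypothetical well-being scores: at-risk employees tend to score lower.
score = np.concatenate([rng.normal(65, 10, 500), rng.normal(85, 10, 500)])
at_risk = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = 2+ adverse outcomes

# ROC analysis on the negated score (lower score -> higher risk); the
# Youden J statistic (tpr - fpr) picks the single best-separating cutoff.
fpr, tpr, thresholds = roc_curve(at_risk, -score)
cutoff = -thresholds[np.argmax(tpr - fpr)]
```

With the invented distributions above, the recovered cutoff lands near the midpoint of the two group means, analogous to the IWBS of 75 reported in the study.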
Poetzsch, Michael; Baumgartner, Markus R; Steuer, Andrea E; Kraemer, Thomas
2015-02-01
Segmental hair analysis has been used for monitoring changes of consumption habit of drugs. Contamination from the environment or sweat might cause interpretative problems. For this reason, hair analysis results were compared in hair samples taken 24 h and 30 days after a single tilidine dose. The 24-h hair samples already showed high concentrations of tilidine and nortilidine. Analysis of wash water from sample preparation confirmed external contamination by sweat as reason. The 30-day hair samples were still positive for tilidine in all segments. Negative wash-water analysis proved incorporation from sweat into the hair matrix. Interpretation of a forensic case was requested where two children had been administered tilidine by their nanny and tilidine/nortilidine had been detected in all hair segments, possibly indicating multiple applications. Taking into consideration the results of the present study and of MALDI-MS imaging, a single application as cause for analytical results could no longer be excluded. Interpretation of consumption behaviour of tilidine based on segmental hair analysis has to be done with caution, even after typical wash procedures during sample preparation. External sweat contamination followed by incorporation into the hair matrix can mimic chronic intake. For assessment of external contamination, hair samples should not only be collected several weeks but also one to a few days after intake. MALDI-MS imaging of single hair can be a complementary tool for interpretation. Limitations for interpretation of segmental hair analysis shown here might also be applicable to drugs with comparable physicochemical and pharmacokinetic properties. Copyright © 2014 John Wiley & Sons, Ltd.
Angler segmentation using perceptions of experiential quality in the Great Barrier Reef Marine Park
William Smith; Gerard Kyle; Stephen G. Sutton
2012-01-01
This study investigated the efficacy of segmenting anglers using their perceptions of trip quality in the Great Barrier Reef Marine Park (GBRMP). Analysis revealed five segments of anglers whose perceptions of trip quality differed. We named the segments slow action, plenty of action, weather sensitive, gloomy gusses, and ok corral, and assessed variation among them...
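Perception-based segments like these are typically derived by cluster analysis; the abstract does not name the algorithm, so the k-means sketch below (with invented rating data) is only one plausible reading of the approach:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Hypothetical trip-quality perception ratings (1-5) on three dimensions
# (e.g. catch rate, weather, crowding) for 400 anglers.
ratings = np.clip(rng.normal(3, 1, size=(400, 3)), 1, 5)

# Cluster anglers into five segments by similarity of their perceptions.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ratings)
segment_labels = kmeans.labels_
```

Each resulting cluster's centroid profile is what would then be interpreted and named (e.g. a low-catch, weather-indifferent centroid might correspond to a "slow action" segment).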
2003-09-11
Jeff Thon, an SRB mechanic with United Space Alliance, is lowered into a mockup of a segment of a solid rocket booster. He is testing a technique for vertical SRB propellant grain inspection. The inspection of segments is required as part of safety analysis.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-12
... with the National Forest Management Act (``NFMA'') in connection with its analysis and designation of... motorized travel that have some segment(s) that go through meadows. The purpose of the current analysis is...
NASA Astrophysics Data System (ADS)
Yang, Yunyun; Kong, Weibo; Yuan, Ye; Zhou, Changlin; Cai, Xufu
2018-04-01
Novel poly(carbonate-co-amide) (PCA) block copolymers are prepared with polycarbonate diol (PCD) as soft segments, polyamide-6 (PA6) as hard segments and 4,4'-diphenylmethane diisocyanate (MDI) as coupling agent through reactive processing. The reactive processing strategy is eco-friendly and resolves the incompatibility between polyamide segments and PCD segments during preparation. The chemical structure, crystalline properties, thermal properties, mechanical properties and water resistance were extensively studied by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), dynamic mechanical analysis (DMA), tensile testing, water contact angle and water absorption measurements, respectively. The as-prepared PCAs exhibit obvious microphase separation between the crystalline hard PA6 phase and the amorphous PCD soft segments. Meanwhile, the PCAs showed outstanding mechanical properties, with a maximum tensile strength of 46.3 MPa and elongation at break of 909%. The contact angle and water absorption results indicate that PCAs demonstrate outstanding water resistance even though they possess hydrophilic surfaces. The TGA measurements prove that the thermal stability of PCA can satisfy the requirements of multiple processing cycles without decomposition.
The effect of input data transformations on object-based image analysis
LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.
2011-01-01
The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.
Würflinger, T; Gamper, I; Aach, T; Sechi, A S
2011-01-01
Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe
2015-01-01
Purpose We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). Materials and Methods The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. Results VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). Conclusion It is possible to quantify VBIC and VA for absorbable implants from micro-CT analysis using a region-based segmentation method. PMID:25793178
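The region-labeling and morphological steps can be sketched with scipy.ndimage. This generic fragment keeps the largest connected region of a thresholded slice and smooths it with a morphological closing; it illustrates the class of operations named above, not the authors' exact pipeline:

```python
import numpy as np
from scipy import ndimage

def label_largest_region(mask):
    """Connected-component labeling followed by a morphological closing,
    keeping the largest region (e.g. the implant) from a thresholded
    micro-CT slice (illustrative sketch)."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
    largest = labeled == (np.argmax(sizes) + 1)
    return ndimage.binary_closing(largest, structure=np.ones((3, 3)))
```

In 3D the same calls apply to volumes with a 3 × 3 × 3 structuring element; VBIC and VA would then be computed by counting voxels of the labeled implant region against the bone mask.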
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
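A rough sketch of the idea of exploiting the gridded scan pattern: normals estimated from grid-neighbor cross products, with edges flagged where the normal direction varies sharply between adjacent cells. The neighbor scheme and threshold below are illustrative, not the paper's:

```python
import numpy as np

def edge_mask(points, angle_thresh_deg=30.0):
    """Normal-variation edge detection on a gridded scan (H x W x 3):
    per-cell normals come from cross products of grid-neighbor
    differences; a cell is flagged as an edge when its normal deviates
    from the cell below by more than the angular threshold."""
    du = np.diff(points, axis=0)[:, :-1]   # along-scanline differences
    dv = np.diff(points, axis=1)[:-1, :]   # across-scanline differences
    normals = np.cross(du, dv)
    normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-12
    cosang = np.clip((normals[:-1] * normals[1:]).sum(axis=2), -1, 1)
    angles = np.degrees(np.arccos(cosang))
    return angles > angle_thresh_deg
```

Because the neighbor relations come directly from the scan grid, no k-d tree or per-point normal estimation is needed, which is what makes this family of methods fast; the region-growing stage would then group the non-edge cells into smooth-surface segments.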
Batman, C; Ozdamar, Y
2010-07-01
To report the outcomes of the use of intracameral bevacizumab for iris neovascularization occurring after silicone oil (SO) removal in eyes undergoing vitreoretinal surgery (VRS). This study included 12 eyes that had iris neovascularization after SO removal. The clinical outcomes of the 12 eyes after intracameral bevacizumab injection were reviewed. There were eight men and four women with an average age of 41.58+/-12.68 years. All eyes had VRS for various vitreoretinal diseases. After a mean follow-up period of 9.7+/-5.3 months, SO removal was performed. The patients were then followed for more than 2 months; detailed retinal examinations and intraocular pressure (IOP) were normal during this period, but rubeosis iridis (RI) developed. RI was treated with a single 1.25 mg dose of bevacizumab injected into the anterior chamber. After a mean follow-up period of 4.8+/-2.2 months, regression of the iris neovascularization was detected and IOP was below 21 mmHg in all eyes. Anterior segment neovascularization (ASNV) may develop through various mechanisms in patients with VRS after SO removal, and anterior chamber injection of bevacizumab may lead to regression of ASNV.
Barros, L M; Martins, R T; Ferreira-Keppler, R L; Gutjahr, A L N
2017-08-04
Information on biomass is important for calculating growth rates and may be employed in medicolegal and economic applications of Hermetia illucens (Linnaeus, 1758). Although biomass is essential to understanding many ecological processes, it is not easily measured. Biomass may be determined directly by weighing or indirectly through regression models of fresh/dry mass versus body dimensions. In this study, we evaluated the association between morphometry and fresh/dry mass of immature H. illucens using linear, exponential, and power regression models. We measured width and length of the cephalic capsule, overall body length, and width of the largest abdominal segment of 280 larvae. Overall body length and width of the largest abdominal segment were the best predictors of biomass. Exponential models best fitted body dimensions and biomass (both fresh and dry), followed by power and linear models. In all models, fresh and dry biomass were strongly correlated (>75%). Values estimated by the models did not differ from observed ones, and predictive power varied from 27 to 79%. Accordingly, the correspondence between biomass and body dimensions should facilitate and motivate the development of applied studies involving H. illucens in the Amazon region.
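A power-law mass-length model of the kind compared here can be fitted by ordinary least squares after a log-log transform; the larval measurements below are invented for illustration:

```python
import numpy as np

# Hypothetical larval measurements: body length (mm) vs fresh mass (mg).
length = np.array([5.0, 8.0, 11.0, 14.0, 17.0, 20.0])
mass = np.array([12.0, 45.0, 110.0, 220.0, 380.0, 610.0])

# Power model m = a * L**b becomes linear after a log-log transform:
# log(m) = log(a) + b*log(L), so ordinary least squares applies.
b, log_a = np.polyfit(np.log(length), np.log(mass), 1)
a = np.exp(log_a)
pred = a * length**b
r2 = 1 - np.sum((mass - pred) ** 2) / np.sum((mass - mass.mean()) ** 2)
```

An exponential model (log(m) linear in L rather than in log(L)) is fitted the same way by regressing log(mass) on length directly.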
NASA Astrophysics Data System (ADS)
Ukwatta, E.; Awad, J.; Ward, A. D.; Samarabandu, J.; Krasinski, A.; Parraga, G.; Fenster, A.
2011-03-01
Three-dimensional ultrasound (3D US) vessel wall volume (VWV) measurements provide high measurement sensitivity and reproducibility for the monitoring and assessment of carotid atherosclerosis. In this paper, we describe a semiautomated approach based on the level set method to delineate the media-adventitia and lumen boundaries of the common carotid artery from 3D US images to support the computation of VWV. Due to the presence of plaque and US image artifacts, the carotid arteries are challenging to segment using image information alone. Our segmentation framework combines several image cues with domain knowledge and limited user interaction. Our method was evaluated with respect to manually outlined boundaries on 430 2D US images extracted from 3D US images of 30 patients who have carotid stenosis of 60% or more. The VWV given by our method differed from that given by manual segmentation by 6.7% +/- 5.0%. For the media-adventitia and lumen segmentations, respectively, our method yielded Dice coefficients of 95.2% +/- 1.6%, 94.3% +/- 2.6%, mean absolute distances of 0.3 +/- 0.1 mm, 0.2 +/- 0.1 mm, maximum absolute distances of 0.8 +/- 0.4 mm, 0.6 +/- 0.3 mm, and volume differences of 4.2% +/- 3.1%, 3.4% +/- 2.6%. The realization of a semi-automated segmentation method will accelerate the translation of 3D carotid US to clinical care for the rapid, non-invasive, and economical monitoring of atherosclerotic disease progression and regression during therapy.
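The Dice coefficient reported in this validation is simple to compute from binary segmentation masks (toy masks below):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy masks: two 6x6 squares offset by one pixel in each direction.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
d = dice(a, b)   # overlap is 5x5 = 25 pixels, so d = 50/72
```

The mean and maximum absolute distance metrics quoted alongside Dice are computed on the mask contours rather than on areas, so they penalize boundary errors that overlap measures can hide.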
Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model
Yang, Xin; Jin, Jiaoying; Xu, Mengling; Wu, Huihui; He, Wanji; Yuchi, Ming; Ding, Mingyue
2013-01-01
Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a segmentation method based on the Active Shape Model (ASM) is developed and evaluated to outline the common carotid artery (CCA) for carotid atherosclerosis computer-aided evaluation and diagnosis. The proposed method is used to segment both the media-adventitia-boundary (MAB) and lumen-intima-boundary (LIB) on transverse-view slices from three-dimensional ultrasound (3D US) images. The data set consists of sixty-eight, 17 × 2 × 2, 3D US volume data acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Boundaries manually outlined by an expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded Dice Similarity Coefficient (DSC) of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD) of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD) of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. Segmenting a single 3D US image took 4.3 ± 0.5 min, compared with 11.7 ± 1.2 min for manual segmentation. The method would promote the translation of carotid 3D US to clinical care for monitoring of atherosclerotic disease progression and regression. PMID:23533535
Subregional effects of meniscal tears on cartilage loss over 2 years in knee osteoarthritis.
Chang, Alison; Moisio, Kirsten; Chmiel, Joan S; Eckstein, Felix; Guermazi, Ali; Almagor, Orit; Cahue, September; Wirth, Wolfgang; Prasad, Pottumarthi; Sharma, Leena
2011-01-01
Meniscal tears have been linked to knee osteoarthritis progression, presumably by impaired load attenuation. How meniscal tears affect osteoarthritis is unclear; subregional examination may help to elucidate whether the impact is local. This study examined the association between a tear within a specific meniscal segment and subsequent 2-year cartilage loss in the subregions that the torn segment overlies. Participants with knee osteoarthritis underwent bilateral knee MRI at baseline and 2 years. Mean cartilage thickness within each subregion was quantified. Logistic regression with generalised estimating equations was used to analyse the relationship between baseline meniscal tear in each segment and baseline to 2-year cartilage loss in each subregion, adjusting for age, gender, body mass index, tear in the other two segments and extrusion. 261 knees were studied in 159 individuals. Medial meniscal body tear was associated with cartilage loss in external subregions and in central and anterior tibial subregions, and posterior horn tear specifically with posterior tibial subregion loss; these relationships were independent of tears in the other segments and persisted in tibial subregions after adjustment for extrusion. Lateral meniscal body and posterior horn tear were also associated with cartilage loss in underlying subregions, but not after adjustment for extrusion. Cartilage loss in the internal subregions, not covered by the menisci, was not associated with meniscal tear in any segment. These results suggest that the detrimental effect of meniscal tears is not spatially uniform across the tibial and femoral cartilage surfaces and that some of the effect is experienced locally.
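The tear-to-cartilage-loss odds ratio can be illustrated with a plain logistic regression fitted by Newton-Raphson on simulated data; note that the study's GEE variant additionally accounts for correlation between the two knees of one person, which this simplified sketch omits:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Newton-Raphson fit of a logistic regression (a simplified stand-in
    for GEE, which would also model within-person correlation)."""
    X = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        H = X.T @ (X * (p * (1 - p))[:, None])     # Hessian of log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# Simulated data (hypothetical): tear status and 2-year cartilage loss,
# with a true log-odds effect of 1.2 for tears.
rng = np.random.default_rng(0)
tear = rng.integers(0, 2, 400).astype(float)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * tear)))
loss = (rng.random(400) < p_true).astype(float)
beta = logistic_fit(tear[:, None], loss)
odds_ratio = np.exp(beta[1])                       # estimated tear odds ratio
```

Adding further covariate columns (age, BMI, extrusion, tears in the other segments) to X gives the adjusted odds ratios the study reports.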
Vessel discoloration detection in malarial retinopathy
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Barriga, S.; Soliz, P.; MacCormick, I.; Taylor, T.; Harding, S.; Lewallen, S.; Joshi, V.
2016-03-01
Cerebral malaria (CM) is a life-threatening clinical syndrome associated with malarial infection. It affects approximately 200 million people, mostly sub-Saharan African children under five years of age. Malarial retinopathy (MR) is a condition in which lesions such as whitening and vessel discoloration that are highly specific to CM appear in the retina. Other unrelated diseases can present with symptoms similar to CM, therefore the exact nature of the clinical symptoms must be ascertained in order to avoid misdiagnosis, which can lead to inappropriate treatment and, potentially, death. In this paper we outline the first system to detect the presence of discolored vessels associated with MR as a means to improve CM diagnosis. We modified and improved our previous vessel segmentation algorithm by incorporating the 'a' channel of the CIELab color space and noise reduction. We then divided the segmented vasculature into vessel segments and extracted features at the wall and in the centerline of each segment. Finally, we used a regression classifier to sort the segments into discolored and non-discolored vessel classes. By counting the abnormal vessel segments in each image, we were able to divide the analyzed images into two groups: normal, and those with vessel discoloration due to MR. We achieved an accuracy of 85% with sensitivity of 94% and specificity of 67%. In clinical practice, this algorithm would be combined with other MR retinal pathology detection algorithms; therefore, a high specificity can be achieved. By choosing a different operating point on the ROC curve, our system achieved sensitivity of 67% with specificity of 100%.
Segmentation of the common carotid artery with active shape models from 3D ultrasound images
NASA Astrophysics Data System (ADS)
Yang, Xin; Jin, Jiaoying; He, Wanji; Yuchi, Ming; Ding, Mingyue
2012-03-01
Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, we develop and evaluate a new segmentation method for outlining both the lumen and adventitia (inner and outer walls) of the common carotid artery (CCA) from three-dimensional ultrasound (3D US) images for carotid atherosclerosis diagnosis and evaluation. The data set consists of sixty-eight, 17 × 2 × 2, 3D US volume data acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. We investigate the use of Active Shape Models (ASMs) to segment CCA inner and outer walls after statin therapy. The proposed method was evaluated with respect to expert manually outlined boundaries as a surrogate for ground truth. For the lumen and adventitia segmentations, respectively, the algorithm yielded Dice Similarity Coefficient (DSC) of 93.6% +/- 2.6% and 91.8% +/- 3.5%, mean absolute distances (MAD) of 0.28 +/- 0.17 mm and 0.34 +/- 0.19 mm, and maximum absolute distances (MAXD) of 0.87 +/- 0.37 mm and 0.74 +/- 0.49 mm. The proposed algorithm took 4.4 +/- 0.6 min to segment a single 3D US image, compared to 11.7 +/- 1.2 min for manual segmentation. Therefore, the method would promote the translation of carotid 3D US to clinical care for the fast, safe, and economical monitoring of atherosclerotic disease progression and regression during therapy.
Lord, Bill; Jennings, Paul A; Smith, Karen
2017-12-01
Children are at risk of inadequate analgesia due to paramedics' inexperience in assessing children and challenges in administering analgesics when the patient is distressed and uncooperative. This study reports on the outcome of a change to practice guidelines that added intranasal fentanyl and intramuscular morphine within a large statewide ambulance service. This retrospective study included patients younger than 15 years treated by paramedics between January 2008 and December 2011. The primary outcome of interest was the proportion of patients having a 2/10 or greater reduction in pain severity score, using an 11-point Verbal Numeric Rating Scale, before and after the intervention. Segmented regression analysis was used to estimate the effect of the intervention over time. A multiple regression model calculated odds ratios with 95% confidence intervals. A total of 92,378 children were transported by paramedics during the study period, with 9833 cases included in the analysis. The median age was 11 years; 61.6% were male. Before the intervention, 88.1% (n = 3114) of children receiving analgesia had a reduction of pain severity of 2 or more points, with 94.2% (n = 5933) achieving this benchmark after the intervention (P < 0.0001). The odds of a reduction in pain of 2 or more points increased by a factor of 1.01 per month immediately before the intervention and by 2.33 after the intervention (P < 0.0001). This large study of a system-wide clinical practice guideline change has demonstrated a significant improvement in the outcome of interest. However, a proportion of children with moderate to severe pain did not receive analgesia.
NASA Astrophysics Data System (ADS)
Fujiki, Shogoro; Okada, Kei-ichi; Nishio, Shogo; Kitayama, Kanehiro
2016-09-01
We developed a new method to estimate stand ages of secondary vegetation in the Bornean montane zone, where local people conduct traditional shifting cultivation and protected areas are surrounded by patches of recovering secondary vegetation of various ages. Identifying stand ages at the landscape level is critical to improve conservation policies. We combined a high-resolution satellite image (WorldView-2) with time-series Landsat images. We extracted stand ages (the time elapsed since the most recent slash and burn) from a change-detection analysis with Landsat time-series images and superimposed the derived stand ages on the segments classified by object-based image analysis using WorldView-2. We regarded stand ages as a response variable, and object-based metrics as independent variables, to develop regression models that explain stand ages. Subsequently, we classified the vegetation of the target area into six age units and one rubber plantation unit (1-3 yr, 3-5 yr, 5-7 yr, 7-30 yr, 30-50 yr, >50 yr and 'rubber plantation') using regression models and linear discriminant analyses. Validation demonstrated an accuracy of 84.3%. Our approach is particularly effective in classifying highly dynamic pioneer vegetation younger than 7 years into 2-yr intervals, suggesting that rapid changes in vegetation canopies can be detected with high accuracy. The combination of a spectral time-series analysis and object-based metrics based on high-resolution imagery enabled the classification of dynamic vegetation under intensive shifting cultivation and yielded an informative land cover map based on stand ages.
Chaberny, Iris F; Schwab, Frank; Ziesing, Stefan; Suerbaum, Sebastian; Gastmeier, Petra
2008-12-01
To determine whether a routine admission screening in surgical wards and intensive care units (ICUs) was effective in reducing methicillin-resistant Staphylococcus aureus (MRSA) infections, particularly nosocomial MRSA infections, for the whole hospital. The study used a single-centre prospective quasi-experimental design to evaluate the effect of the MRSA screening policy on the incidence density of MRSA-infected/nosocomial MRSA-infected patients/1000 patient-days (pd) in the whole hospital. The effect on incidence density was calculated by a segmented regression analysis of interrupted time series with 30 months prior to and 24 months after a 6 month implementation period. The MRSA screening policy had a highly significant hospital-wide effect on the incidence density of MRSA infections. It showed a significant change in both level [-0.163 MRSA-infected patients/1000 pd, 95% confidence interval (CI): -0.276 to -0.050] and slope (-0.01 MRSA-infected patients/1000 pd per month, 95% CI: -0.018 to -0.003) after the implementation of the MRSA screening policy. A decrease in MRSA infections by 57% is a conservative estimate of the reduction between the last month before (0.417 MRSA-infected patients/1000 pd) and month 24 after the implementation of the MRSA screening policy (0.18 MRSA-infected patients/1000 pd). Equivalent results were found in the analysis of nosocomial MRSA-infected patients/1000 pd. This is the first hospital-wide study that investigates the impact of introducing admission screening in ICUs and non-ICUs as a single intervention to prevent MRSA infections performed with a time-series regression analysis. Admission screening is a potent tool in controlling the spread of MRSA infections in hospitals.
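The segmented regression model behind these level and slope estimates has the form Y = b0 + b1*time + b2*post + b3*time_after + e, where b2 is the immediate level change and b3 the slope change at the intervention. A sketch with simulated monthly rates, whose coefficients loosely mimic the reported effects (these are not the study data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(54, dtype=float)            # 30 pre- and 24 post-intervention months
post = (t >= 30).astype(float)            # level-change indicator
t_after = np.where(t >= 30, t - 30, 0.0)  # months since intervention
# Simulated incidence density with a -0.16 level and -0.01 slope change:
y = (0.42 + 0.001 * t - 0.16 * post - 0.01 * t_after
     + rng.normal(0.0, 0.02, t.size))

X = np.column_stack([np.ones_like(t), t, post, t_after])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
# b2 estimates the immediate level change, b3 the change in slope.
```

Serial correlation in real monthly counts would call for the autocorrelation-adjusted standard errors used in the interrupted-time-series literature; ordinary least squares is shown here only to expose the model structure.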
Structural constraints in the packaging of bluetongue virus genomic segments
Burkhardt, Christiane; Sung, Po-Yu; Celma, Cristina C.
2014-01-01
The mechanism used by bluetongue virus (BTV) to ensure the sorting and packaging of its 10 genomic segments is still poorly understood. In this study, we investigated the packaging constraints for two BTV genomic segments from two different serotypes. Segment 4 (S4) of BTV serotype 9 was mutated sequentially and packaging of mutant ssRNAs was investigated by two newly developed RNA packaging assay systems, one in vivo and the other in vitro. Modelling of the mutated ssRNA followed by biochemical data analysis suggested that a conformational motif formed by interaction of the 5′ and 3′ ends of the molecule was necessary and sufficient for packaging. A similar structural signal was also identified in S8 of BTV serotype 1. Furthermore, the same conformational analysis of secondary structures for positive-sense ssRNAs was used to generate a chimeric segment that maintained the putative packaging motif but contained unrelated internal sequences. This chimeric segment was packaged successfully, confirming that the motif identified directs the correct packaging of the segment. PMID:24980574
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Mathematical Analysis of Space Radiator Segmenting for Increased Reliability and Reduced Mass
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2001-01-01
Spacecraft for long duration deep space missions will need to be designed to survive micrometeoroid bombardment of their surfaces, some of which may actually be punctured. To avoid loss of the entire mission, the damage due to such punctures must be limited to small, localized areas. This is especially true for power system radiators, which necessarily feature large surface areas to reject heat at relatively low temperature to the space environment by thermal radiation. It may be intuitively obvious that if a space radiator is composed of a large number of independently operating segments, such as heat pipes, a random micrometeoroid puncture will result only in the loss of the punctured segment, and not the entire radiator. Due to the redundancy achieved by independently operating segments, the wall thickness and consequently the weight of such segments can be drastically reduced. Probability theory is used to estimate the magnitude of such weight reductions as the number of segments is increased. An analysis of relevant parameter values required for minimum mass segmented radiators is also included.
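The redundancy argument can be illustrated with a simple binomial model: hold the expected number of punctures over the whole radiator fixed, spread the risk over n independent segments, and compute the probability that no more than a tolerable fraction of segments is lost. This is an invented toy model, not the paper's derivation:

```python
import math

def survival_probability(n_segments, expected_punctures, max_lost_frac=0.1):
    """Toy reliability model: punctures strike segments independently with
    a fixed expected total count; the radiator survives if at most
    max_lost_frac of its segments are punctured."""
    p = expected_punctures / n_segments          # per-segment puncture prob.
    k_max = int(max_lost_frac * n_segments)      # tolerable punctured segments
    return sum(math.comb(n_segments, k) * p**k * (1 - p)**(n_segments - k)
               for k in range(k_max + 1))

p_many = survival_probability(100, 2.0)  # many thin-walled segments
p_few = survival_probability(10, 2.0)    # few heavy segments
```

With the same expected two punctures, 100 segments survive with probability near 1 while 10 segments survive with probability of only about 0.38, which is the redundancy effect that permits drastically thinner segment walls.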
Bone microarchitecture of the tibial plateau in skeletal health and osteoporosis.
Krause, Matthias; Hubert, Jan; Deymann, Simon; Hapfelmeier, Alexander; Wulff, Birgit; Petersik, Andreas; Püschel, Klaus; Amling, Michael; Hawellek, Thelonius; Frosch, Karl-Heinz
2018-05-07
Impaired bone structure poses a challenge for the treatment of osteoporotic tibial plateau fractures. As knowledge of region-specific structural bone alterations is a prerequisite to achieving successful long-term fixation, the aim of the current study was to characterize tibial plateau bone structure in patients with osteoporosis and the elderly. Histomorphometric parameters were assessed by high-resolution peripheral quantitative computed tomography (HR-pQCT) in 21 proximal tibiae from females with postmenopausal osteoporosis (mean age: 84.3 ± 4.9 years) and eight female healthy controls (45.5 ± 6.9 years). To visualize region-specific structural bony alterations with age, the bone mineral density (Hounsfield units) was additionally analyzed in 168 human proximal tibiae. Statistical analysis was based on evolutionary learning using globally optimal regression trees. Bone structure deterioration of the tibial plateau due to osteoporosis was region-specific. Compared to healthy controls (20.5 ± 4.7%) the greatest decrease in bone volume fraction was found in the medio-medial segments (9.2 ± 3.5%, p < 0.001). The lowest bone volume was found in central segments (tibial spine). Trabecular connectivity was severely reduced. Importantly, in the anterior and posterior 25% of the lateral and medial tibial plateaux, trabecular support and subchondral cortical bone thickness itself were also reduced. Thinning of subchondral cortical bone and marked bone loss in the anterior and posterior 25% of the tibial plateau should require special attention when osteoporotic patients require fracture fixation of the posterior segments. This knowledge may help to improve the long-term, fracture-specific fixation of complex tibial plateau fractures in osteoporosis. Copyright © 2018 Elsevier B.V. All rights reserved.
King, Katherine E; Clarke, Philippa J
2015-01-01
Urban form, the structure of the built environment, can influence physical activity, yet little is known about how walkable design differs according to neighborhood sociodemographic composition. We studied how walkable urban form varies by neighborhood sociodemographic composition, region, and urbanicity across the United States. Using linear regression models and 2000-2001 US Census data, we investigated the relationship between 5 neighborhood census characteristics (income, education, racial/ethnic composition, age distribution, and sex) and 5 walkability indicators in almost 65,000 census tracts in 48 states and the District of Columbia. Data on the built environment were obtained from the RAND Corporation's (Santa Monica, California) Center for Population Health and Health Disparities (median block length, street segment, and node density) and the US Geological Survey's National Land Cover Database (proportion open space and proportion highly developed). Disadvantaged neighborhoods and those with more educated residents were more walkable (i.e., shorter block length, greater street node density, more developed land use, and higher density of street segments). However, tracts with a higher proportion of children and older adults were less walkable (fewer street nodes and lower density of street segments), after adjustment for region and level of urbanicity. Research and policy on the walkability-health link should give nuanced attention to the gap between persons living in walkable areas and those for whom walkability has the most to offer. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Higa, Claudio Cesar; Novo, Fedor Anton; Nogues, Ignacio; Ciambrone, Maria Graciana; Donato, Maria Sol; Gambarte, Maria Jimena; Rizzo, Natalia; Catalano, Maria Paula; Korolov, Eugenio; Comignani, Pablo Dino
2016-01-01
Microalbuminuria is a known risk factor for cardiovascular morbidity and mortality, suggesting that it may be a marker of endothelial dysfunction. The albumin to creatinine ratio (ACR) is an available and rapid test for microalbuminuria determination, with a high correlation with the 24-h urine collection method. There is no prospective study that evaluates the prognostic value of ACR in patients with non-ST-segment elevation acute coronary syndromes (NSTE-ACS). The purpose of our study was to determine the long-term prognostic value of ACR in patients with NSTE-ACS. The albumin to creatinine ratio was estimated in 700 patients with NSTE-ACS at admission. Median follow-up time was 18 months. The best cutoff point of ACR for death or acute myocardial infarction was 20 mg/g. Twenty-two percent of patients had elevated ACR. By multivariable Cox regression analysis, ACR was an independent predictor of the clinical endpoint: odds ratio 5.8 (95% confidence interval [CI] 2-16), log-rank 2p < 0.0001, in a model including age > 65 years, female gender, diabetes mellitus, creatinine clearance, glucose levels at admission, elevated cardiac markers (troponin T/CK-MB) and ST-segment depression. The addition of ACR significantly improved the GRACE score C-statistic from 0.69 (95% CI 0.59-0.83) to 0.77 (95% CI 0.65-0.88), SE 0.04, 2p = 0.03, with good calibration for both models. The albumin to creatinine ratio is an independent and accessible predictor of long-term adverse outcomes in NSTE-ACS, providing additional value for risk stratification.
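The C-statistic used to compare the risk models is the probability that a randomly chosen event case scores higher than a randomly chosen non-event case, which can be computed directly from pairwise comparisons (the function takes arbitrary score/event vectors):

```python
import numpy as np

def c_statistic(scores, events):
    """C-statistic (ROC AUC): probability that a randomly chosen event
    case scores higher than a randomly chosen non-event case, with ties
    counted as half."""
    s = np.asarray(scores, dtype=float)
    e = np.asarray(events, dtype=bool)
    pos, neg = s[e], s[~e]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Evaluating this for a baseline risk score and for the score augmented with ACR is the kind of comparison behind the reported improvement from 0.69 to 0.77.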
Glaveckaite, Sigita; Uzdavinyte-Gateliene, Egle; Petrulioniene, Zaneta; Palionis, Darius; Valeviciene, Nomeda; Kalinauskas, Gintaras; Serpytis, Pranas; Laucevicius, Aleksandras
2018-03-09
We aimed to evaluate (i) the effectiveness of combined surgery (coronary artery bypass grafting with restrictive mitral valve annuloplasty) and (ii) the late gadolinium enhancement cardiovascular magnetic resonance-based predictors of ischaemic mitral regurgitation (IMR) recurrence. The prospective analysis included 40 patients with multivessel coronary artery disease, IMR >II° and left ventricular (LV) dysfunction undergoing combined surgery. The degree of IMR and LV parameters were assessed preoperatively by transthoracic echocardiography, 3D transoesophageal echocardiography and cardiovascular magnetic resonance and postoperatively by transthoracic echocardiography. The effective mitral valve repair group (n = 30) was defined as having recurrent ischaemic mitral regurgitation (RIMR) ≤II° at the end of follow-up (25 ± 11 months). The surgery was effective: freedom from RIMR >II° at 1 and 2 years after surgery was 80% and 75%, respectively. Using multivariable logistic regression, 2 independent predictors of RIMR >II° were identified: ≥3 non-viable LV segments (odds ratio 22, P = 0.027) and ≥1 non-viable segment in the LV posterior wall (odds ratio 11, P = 0.026). Using classification trees, the best combinations of cardiovascular magnetic resonance-based and 3D transoesophageal echocardiography-based predictors for RIMR >II° were (i) posterior mitral valve leaflet angle >40° and LV end-systolic volume index >45 ml/m2 (sensitivity 100%, specificity 89%) and (ii) scar transmurality >68% in the inferior LV wall and EuroSCORE II >8 (sensitivity 83%, specificity 78%). There is a clear relationship between the amount of non-viable LV segments, especially in the LV posterior and inferior walls, and the recurrence of IMR after the combined surgery.
Characterizing protein conformations by correlation analysis of coarse-grained contact matrices.
Lindsay, Richard J; Siess, Jan; Lohry, David P; McGee, Trevor S; Ritchie, Jordan S; Johnson, Quentin R; Shen, Tongye
2018-01-14
We have developed a method to capture the essential conformational dynamics of folded biopolymers using statistical analysis of coarse-grained segment-segment contacts. Previously, residue-residue contact analysis of simulation trajectories was successfully applied to the detection of conformational switching motions in biomolecular complexes. However, application to large protein systems (larger than 1000 amino acid residues) is challenging using the description of residue contacts. Also, the residue-based method cannot be used to compare proteins with different sequences. To expand the scope of the method, we have tested several coarse-graining schemes that group a collection of consecutive residues into a segment. The definition of these segments may be derived from structural and sequence information, while the interaction strength of the coarse-grained segment-segment contacts is a function of the residue-residue contacts. We then perform covariance calculations on these coarse-grained contact matrices. We monitored how well the principal components of the contact matrices are preserved under various rendering functions. The new method was demonstrated to assist in reducing the degrees of freedom needed to describe the conformation space, and it potentially allows for the analysis of a system approximately tenfold larger than the corresponding residue contact-based method can handle. This method can also render a family of similar proteins into the same conformational space, and thus can be used to compare the structures of proteins with different sequences.
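The coarse-graining and covariance steps can be sketched as follows; the segment boundaries and trajectory frames are synthetic, and a plain block sum stands in for the paper's richer rendering functions:

```python
import numpy as np

def coarse_grain(contacts, seg_bounds):
    """Sum residue-residue contact strengths into segment-segment blocks;
    seg_bounds lists each segment's start index plus the final end index."""
    n = len(seg_bounds) - 1
    cg = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            cg[i, j] = contacts[seg_bounds[i]:seg_bounds[i + 1],
                                seg_bounds[j]:seg_bounds[j + 1]].sum()
    return cg

def contact_pca(frames, seg_bounds):
    """Flatten each frame's coarse-grained matrix and diagonalize the
    covariance across frames to get principal contact modes."""
    X = np.array([coarse_grain(f, seg_bounds).ravel() for f in frames])
    X -= X.mean(axis=0)
    cov = X.T @ X / (len(frames) - 1)
    evals, evecs = np.linalg.eigh(cov)
    return evals[::-1], evecs[:, ::-1]           # descending eigenvalues

# Demo: 10 residues in two 5-residue segments, four frames alternating
# between a sparse and a dense contact pattern.
A, B = np.eye(10), np.ones((10, 10))
evals, evecs = contact_pca([A, B, A, B], [0, 5, 10])
```

Because the covariance is taken over the much smaller segment-segment matrix, the cost of the eigen-decomposition depends on the number of segments rather than the number of residues, which is what enables the roughly tenfold increase in tractable system size.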
Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly
2013-01-01
High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized across varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance under a wide range of image acquisition conditions, indicating that it is largely condition-invariant. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural analysis-based machine-learning approach thus offers a high-performance, condition-invariant tool for automated neurite segmentation. PMID:23261652
Genetic Diversity of Crimean Congo Hemorrhagic Fever Virus Strains from Iran
Chinikar, Sadegh; Bouzari, Saeid; Shokrgozar, Mohammad Ali; Mostafavi, Ehsan; Jalali, Tahmineh; Khakifirouz, Sahar; Nowotny, Norbert; Fooks, Anthony R.; Shah-Hosseini, Nariman
2016-01-01
Background: Crimean Congo hemorrhagic fever virus (CCHFV) is a member of the Bunyaviridae family and Nairovirus genus. It has a negative-sense, single-stranded RNA genome of approximately 19.2 kb, comprising the Small, Medium, and Large segments. CCHFVs are relatively divergent in their genome sequence and are grouped into seven distinct clades based on S-segment sequence analysis and six clades based on M-segment sequences. Our aim was to obtain new insights into the molecular epidemiology of CCHFV in Iran. Methods: We analyzed partial and complete nucleotide sequences of the S and M segments derived from 50 Iranian patients. The extracted RNA was amplified using one-step RT-PCR and then sequenced. The sequences were analyzed using Mega5 software. Results: Phylogenetic analysis of partial S-segment sequences demonstrated that clade IV (Asia 1), clade IV (Asia 2), and clade V (Europe) accounted for 80%, 4%, and 14%, respectively, of the circulating genomic variants of CCHFV in Iran. However, one of the Iranian strains (Iran-Kerman/22) was associated with none of the other sequences and formed a new clade (VII). The phylogenetic analysis of complete S-segment nucleotide sequences from selected Iranian CCHFV strains, complemented with representative strains from GenBank, revealed a similar topology to the partial sequences, with eight major clusters. A partial M-segment phylogeny positioned the Iranian strains in association with either clade III (Asia-Africa) or clade V (Europe). Conclusion: The phylogenetic analysis revealed subtle links between distant geographic locations, which we propose might originate either from international livestock trade or from long-distance carriage of CCHFV by infected ticks via bird migration. PMID:27308271
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, is used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and the count of the number of cells for a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method is applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
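The coarse pre-filtering idea above (a range filter to highlight textured cell regions, followed by localization of candidate cells) can be illustrated roughly as follows. The synthetic image, window size, and threshold are assumptions, and the marching-squares, active-contour, and SVM stages are omitted.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, label

def range_filter(img, size=5):
    """Local max minus local min: high in textured (cell) regions, low on flat background."""
    return maximum_filter(img, size=size) - minimum_filter(img, size=size)

rng = np.random.default_rng(1)
img = np.zeros((60, 60))
img[10:20, 10:20] = rng.random((10, 10))   # one "textured" cell
img[35:45, 40:50] = rng.random((10, 10))   # a second, well-separated cell

mask = range_filter(img) > 0.3             # coarse foreground mask
labeled, n_cells = label(mask)             # approximate positions and count
print(n_cells)
```

In the pipeline described above, this coarse mask only provides approximate cell positions and a count; the fine boundaries come from the subsequent active-contours step.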
White blood cell counting analysis of blood smear images using various segmentation strategies
NASA Astrophysics Data System (ADS)
Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza
2017-09-01
In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting, segmentation is the crucial step for ensuring the accuracy of the cell count. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to determine the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed using a combination of several color analysis subtractions (RGB, CMYK, and HSV) and Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Eventually, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those in clumped regions. From the experiment, it is found that G-S yields the best performance.
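Two steps of the pipeline above, Otsu thresholding and connected-component labelling, can be sketched on a synthetic single-channel image. The color-subtraction, morphology, and Circle Hough Transform stages are omitted, and the image and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def otsu_threshold(img, bins=256):
    """Return the threshold maximizing between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = hist.cumsum()                     # cumulative counts below/at each bin
    w1 = w0[-1] - w0                       # counts above each bin
    mu = (hist * centers).cumsum()         # cumulative intensity mass
    valid = (w0 > 0) & (w1 > 0)
    mu0 = mu[valid] / w0[valid]
    mu1 = (mu[-1] - mu[valid]) / w1[valid]
    between = w0[valid] * w1[valid] * (mu0 - mu1) ** 2
    return centers[valid][between.argmax()]

img = np.zeros((50, 50))
img[5:15, 5:15] = 0.9      # bright cell 1
img[30:40, 30:40] = 0.8    # bright cell 2

t = otsu_threshold(img)
mask = img > t
_, n_wbc = label(mask)     # connected-component labelling gives the count
print(n_wbc)
```

On a real smear image the thresholded mask would additionally need the morphological and CCL filtering described above before counting, since stained debris also survives thresholding.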
Segmenting patients and physicians using preferences from discrete choice experiments.
Deal, Ken
2014-01-01
People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE) using an example study of prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation incorporating several methods for forming segments in Level 1 using DCE preference coefficients and testing their quality, reproducibility, and usability by health decision makers. Covariates (demographic, behavioral, lifestyle, and health state variables) were included in Level 2 to further evaluate quality and to support the scoring of large databases and developing typing tools for assigning those in the relevant population, but not in the sample, to the segments. Several segmentation solution candidates were found during the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. Those segmentations were tested for their quality and reproducibility and three were found to be very close in quality. While one seemed better than others in the Level 1 analysis, another was very similar in quality and proved ultimately better in predicting segment membership using covariates in Level 2. 
The two segments in the final solution were profiled for attributes that would support the development and acceptance of cyberbullying prevention programs among university students. The segments were very different: one wanted substantial penalties against cyberbullies and was willing to devote time to a prevention program, while the other felt no need to be involved in prevention and wanted only minor penalties. Segmentation recognizes key differences in why patients and physicians prefer different health programs and treatments. A viable segmentation solution may lead to adapting prevention programs and treatments for each targeted segment and/or to educating and communicating to better inform those in each segment of the program/treatment benefits. Segment members' revealed preferences, showing behavioral changes, provide the ultimate basis for evaluating the segmentation benefits to the health organization.
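The Level 1 step of forming segments from DCE preference coefficients might be sketched with a plain k-means clustering. The two synthetic attributes ("penalty severity" and "time commitment"), the coefficient values, and k = 2 are assumptions for illustration, not the study's actual variables or clustering method.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign to nearest center, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

rng = np.random.default_rng(42)
# Segment A: strong penalties, willing to invest time; Segment B: the reverse
seg_a = rng.normal([2.0, 1.5], 0.3, size=(40, 2))
seg_b = rng.normal([-1.0, -0.5], 0.3, size=(40, 2))
X = np.vstack([seg_a, seg_b])          # one preference-coefficient row per respondent

labels, centers = kmeans(X, k=2)
print(np.bincount(labels))             # respondents per recovered segment
```

The Level 2 step described above (predicting segment membership from covariates) would then fit a classifier with these cluster labels as the target.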
Metric Learning to Enhance Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.
2013-01-01
Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. They highlight borders and reveal areas of homogeneity and change. Segmentations are independently helpful for object recognition, and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, the single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogenous mineralogy.
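The claimed benefit of over-segmentation (fewer effective spectra and reduced noise) can be demonstrated on a toy hyperspectral cube. The two-superpixel map below is a synthetic stand-in for a real superpixel algorithm, and the materials and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, bands = 20, 20, 50
true = np.array([1.0, 3.0])                # reflectance levels of two surface materials
seg = np.zeros((h, w), dtype=int)
seg[:, 10:] = 1                            # two superpixels, split down the middle
cube = true[seg][..., None] + rng.normal(0, 0.5, (h, w, bands))

# Replace each pixel's spectrum by its superpixel's mean spectrum
means = np.array([cube[seg == s].mean(axis=0) for s in range(2)])
reduced = means[seg]

noise_before = np.abs(cube - true[seg][..., None]).mean()
noise_after = np.abs(reduced - true[seg][..., None]).mean()
print(noise_before, noise_after)
```

Averaging 200 pixels per superpixel shrinks the per-band noise by roughly a factor of 14 (the square root of the pixel count), and any post-analysis now handles 2 spectra instead of 400.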
Zhou, Yong; Dong, Guichun; Tao, Yajun; Chen, Chen; Yang, Bin; Wu, Yue; Yang, Zefeng; Liang, Guohua; Wang, Baohe; Wang, Yulong
2016-01-01
Identification of quantitative trait loci (QTLs) associated with rice root morphology provides useful information for avoiding drought stress and maintaining yield production under irrigation. In this study, a set of chromosome segment substitution lines (CSSLs), derived from 9311 as the recipient and Nipponbare as the donor, was used to analyze root morphology. By combining a resequencing-based bin-map with multiple linear regression analysis, QTL identification was conducted for root number (RN), total root length (TRL), root dry weight (RDW), maximum root length (MRL), root thickness (RTH), total absorption area (TAA), and root vitality (RV), using the CSSL population grown under hydroponic conditions. A total of thirty-eight QTLs were identified: six for TRL, six for RDW, eight for MRL, four for RTH, seven for RN, two for TAA, and five for RV. The phenotypic variance explained by these QTLs ranged from 2.23% to 37.08%, and four individual QTLs each explained more than 10% of the phenotypic variance for three root traits. We also examined the correlations between grain yield (GY) and root traits, and found that TRL, RTH, and MRL had significantly positive correlations with GY. In addition, TRL, RDW, and MRL had significantly positive correlations with biomass yield (BY). Several QTLs identified in our population were co-localized with known loci for grain yield or biomass. This information may be immediately exploited for improving rice water and fertilizer use efficiency in the molecular breeding of root system architectures.
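The multiple-linear-regression step of bin-map QTL detection can be sketched as follows: regress a root trait on the bin genotypes of the substitution lines and read effect sizes off the coefficients. The genotype matrix, effect sizes, and three-bin model are synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_lines, n_bins = 100, 3
# 0 = 9311 (recipient) allele, 1 = Nipponbare (donor) allele at each bin
G = rng.integers(0, 2, size=(n_lines, n_bins)).astype(float)
true_effects = np.array([2.0, 0.0, -1.5])     # bins 1 and 3 harbour QTLs, bin 2 does not
trait = 10.0 + G @ true_effects + rng.normal(0, 0.5, n_lines)  # e.g. a TRL-like trait

X = np.column_stack([np.ones(n_lines), G])    # intercept + bin genotypes
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
print(beta)                                    # [intercept, effect_bin1, effect_bin2, effect_bin3]
```

A real analysis would additionally test each coefficient for significance and report the fraction of phenotypic variance each QTL explains, as the abstract describes.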
Minami, Keiichiro; Miyata, Kazunori; Otani, Atsushi; Tokunaga, Tadatoshi; Tokuda, Shouta; Amano, Shiro
2018-05-01
To determine the steep increase in corneal irregularity induced by the advancement of pterygium. A total of 456 eyes of 456 consecutive patients with primary pterygia were examined for corneal topography and advancement of the pterygium with respect to the corneal diameter. Corneal irregularity induced by the pterygium advancement was evaluated by Fourier harmonic analyses of the topographic data, modified for a series of analysis diameters from 1 mm to 6 mm. Incidences of steep increases in the asymmetry or higher-order irregularity components (inflection points) were determined using segmented regression analysis for each analysis diameter. The pterygium advancement ranged from 2% to 57%, with a mean of 22.0%. Both components showed steep increases from the inflection points. The inflection points in the higher-order irregularity component varied with the analysis diameter (14.0%-30.6%), while there was no variation in the asymmetry component (35.5%-36.8%). For the former component, the values at the inflection points were obtained in a range of 0.16 to 0.25 D. The Fourier harmonic analyses for a series of analysis diameters revealed that the higher-order irregularity component increased with the pterygium advancement. The analysis results confirmed the precedence of corneal irregularity due to pterygium advancement.
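Segmented (broken-stick) regression of the kind used above to locate inflection points can be sketched by scanning candidate breakpoints and keeping the one that minimizes residual error. The data below are synthetic, not the topographic measurements, and the scan-based fit is an illustrative simplification.

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Fit two joined linear pieces for each candidate breakpoint; keep the best."""
    best = None
    for c in candidates:
        # basis: intercept, slope, extra slope activated past the breakpoint
        X = np.column_stack([np.ones_like(x), x, np.clip(x - c, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = ((y - X @ beta) ** 2).sum()
        if best is None or sse < best[1]:
            best = (c, sse, beta)
    return best

rng = np.random.default_rng(5)
x = np.linspace(0, 60, 120)                          # advancement, % of corneal diameter
y = np.where(x < 35, 0.2, 0.2 + 0.05 * (x - 35))     # flat, then a steep increase at 35%
y = y + rng.normal(0, 0.02, x.size)                  # measurement noise

breakpoint_, sse, beta = fit_segmented(x, y, candidates=np.arange(10.0, 55.0, 1.0))
print(breakpoint_)                                    # estimated inflection point
```

The fitted breakpoint recovers the simulated inflection near 35% advancement; in the study this is the point past which irregularity rises steeply.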
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. 
The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some other state-of-the-art anomaly detection methods, and is easy to implement in practice.
Musical structure analysis using similarity matrix and dynamic programming
NASA Astrophysics Data System (ADS)
Shiu, Yu; Jeong, Hong; Kuo, C.-C. Jay
2005-10-01
Automatic music segmentation and structure analysis from audio waveforms based on a three-level hierarchy is examined in this research, where the three-level hierarchy includes notes, measures and parts. The pitch class profile (PCP) feature is first extracted at the note level. Then, a similarity matrix is constructed at the measure level, where a dynamic time warping (DTW) technique is used to enhance the similarity computation by taking the temporal distortion of similar audio segments into account. By processing the similarity matrix, we can obtain a coarse-grain music segmentation result. Finally, dynamic programming is applied to the coarse-grain segments so that a song can be decomposed into several major parts such as intro, verse, chorus, bridge and outro. The performance of the proposed music structure analysis system is demonstrated for pop and rock music.
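The similarity-matrix step above can be illustrated with a toy pitch-class-profile sequence: identical measures yield high-similarity blocks, and dissimilar consecutive measures mark section boundaries. The verse/chorus templates are random stand-ins, and the DTW enhancement and final dynamic-programming grouping are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
verse = rng.random(12)                 # 12-bin PCP template for the verse
chorus = rng.random(12)                # 12-bin PCP template for the chorus
measures = [verse, verse, chorus, chorus, verse, verse]

# Normalize and build the cosine self-similarity matrix at the measure level
F = np.array([m / np.linalg.norm(m) for m in measures])
S = F @ F.T

# Coarse segmentation: cut wherever consecutive measures are dissimilar
cuts = [i + 1 for i in range(len(measures) - 1) if S[i, i + 1] < 0.99]
print(cuts)
```

The cuts fall at the verse-to-chorus and chorus-to-verse transitions; in the full system, DTW makes the similarity robust to tempo distortion and dynamic programming then labels the resulting parts (intro, verse, chorus, and so on).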
Moyer, Douglas; Bennett, Mark
2007-01-01
The U.S. Geological Survey (USGS), U.S. Environmental Protection Agency (USEPA), Chesapeake Bay Program (CBP), Interstate Commission for the Potomac River Basin (ICPRB), Maryland Department of the Environment (MDE), Virginia Department of Conservation and Recreation (VADCR), and University of Maryland (UMD) are collaborating to improve the resolution of the Chesapeake Bay Regional Watershed Model (CBRWM). This watershed model uses the Hydrologic Simulation Program-Fortran (HSPF) to simulate the fate and transport of nutrients and sediment throughout the Chesapeake Bay watershed and extended areas of Virginia, Maryland, and Delaware. Information from the CBRWM is used by the CBP and other watershed managers to assess the effectiveness of water-quality improvement efforts as well as guide future management activities. A critical step in the improvement of the CBRWM framework was the development of an HSPF function table (FTABLE) for each represented stream channel. The FTABLE is used to relate stage (water depth) in a particular stream channel to associated channel surface area, channel volume, and discharge (streamflow). The primary tool used to generate an FTABLE for each stream channel is the XSECT program, a computer program that requires nine input variables used to represent channel morphology. These input variables are reach length, upstream and downstream elevation, channel bottom width, channel bankfull width, channel bankfull stage, slope of the floodplain, and Manning's roughness coefficient for the channel and floodplain. For the purpose of this study, the nine input variables were grouped into three categories: channel geometry, Manning's roughness coefficient, and channel and floodplain slope. 
Values of channel geometry for every stream segment represented in CBRWM were obtained by first developing regional regression models that relate basin drainage area to observed values of bankfull width, bankfull depth, and bottom width at each of the 290 USGS streamflow-gaging stations included in the areal extent of the model. These regression models were developed on the basis of data from stations in four physiographic provinces (Appalachian Plateaus, Valley and Ridge, Piedmont, and Coastal Plain) and were used to predict channel geometry for all 738 stream segments in the modeled area from associated basin drainage area. Manning's roughness coefficient for the channel and floodplain was represented in the XSECT program in two forms. First, all available field-estimated values of roughness were compiled for gaging stations in each physiographic province. The median of field-estimated values of channel and floodplain roughness for each physiographic province was applied to all respective stream segments. The second representation of Manning's roughness coefficient was to allow roughness to vary with channel depth. Roughness was estimated at each gaging station for each 1-foot depth interval. Median values of roughness were calculated for each 1-foot depth interval for all stations in each physiographic province. Channel and floodplain slope were determined for every stream segment in CBRWM using the USGS National Elevation Dataset. Function tables were generated by the XSECT program using values of channel geometry, channel and floodplain roughness, and channel and floodplain slope. The FTABLEs for each of the 290 USGS streamflow-gaging stations were evaluated by comparing observed discharge to the XSECT-derived discharge. Function table stream discharge derived using depth-varying roughness was found to be more representative of and statistically indistinguishable from values of observed stream discharge. 
Additionally, results of regression analysis showed that XSECT-derived discharge accounted for approximately 90 percent of the variability associated with observed discharge in each of the four physiographic provinces. The results of this study indicate that the methodology developed to generate FTABLEs for every s
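The regional-regression idea above (channel geometry predicted from basin drainage area) is classically fitted as a power law in log-log space, as in hydraulic geometry. The sketch below uses synthetic data and coefficients, not the study's fitted values for the physiographic provinces.

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic gaging stations: drainage areas spanning three orders of magnitude
area = 10 ** rng.uniform(0, 3, 60)                        # drainage area
# Assumed power-law relation width = 12 * area^0.4, with multiplicative scatter
width = 12.0 * area ** 0.4 * rng.lognormal(0, 0.1, 60)    # bankfull width

# Fit log10(width) = log10(a) + b * log10(area)
b, log_a = np.polyfit(np.log10(area), np.log10(width), 1)
print(b, 10 ** log_a)   # recovered exponent and coefficient
```

With such a fitted relation per province, bankfull width (and analogously bankfull depth and bottom width) can be predicted for every ungaged stream segment from its drainage area, which is how the FTABLE inputs were populated for all 738 segments.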
Microscopy image segmentation tool: Robust image data analysis
NASA Astrophysics Data System (ADS)
Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.
2014-03-01
We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.
Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn
2011-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.
Automated segmentation of pulmonary structures in thoracic computed tomography scans: a review
NASA Astrophysics Data System (ADS)
van Rikxoort, Eva M.; van Ginneken, Bram
2013-09-01
Computed tomography (CT) is the modality of choice for imaging the lungs in vivo. Sub-millimeter isotropic images of the lungs can be obtained within seconds, allowing the detection of small lesions and detailed analysis of disease processes. The high resolution of thoracic CT and the high prevalence of lung diseases require a high degree of automation in the analysis pipeline. The automated segmentation of pulmonary structures in thoracic CT has been an important research topic for over a decade now. This systematic review provides an overview of current literature. We discuss segmentation methods for the lungs, the pulmonary vasculature, the airways, including airway tree construction and airway wall segmentation, the fissures, the lobes and the pulmonary segments. For each topic, the current state of the art is summarized, and topics for future research are identified.
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. A soft segmentation can be hardened into a hard segmentation, and hence is more general. The modeling procedure, mathematical analysis of the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
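The soft-versus-hard distinction can be made concrete with a one-dimensional toy: each pixel receives a probability of belonging to each of two intensity patterns, and taking the argmax hardens the result. The two-mean model and softmax weighting below are illustrative assumptions, not the paper's stochastic-variational functional.

```python
import numpy as np

rng = np.random.default_rng(4)
c = np.array([0.2, 0.8])                          # intensities of two image patterns
pixels = np.concatenate([rng.normal(0.2, 0.05, 100),
                         rng.normal(0.8, 0.05, 100)])

# Soft segmentation: Gibbs/softmax weighting of squared distances to each pattern
beta = 50.0                                       # inverse temperature (sharpness)
logits = -beta * (pixels[:, None] - c[None, :]) ** 2
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Hardening recovers a classical (hard) segmentation, so soft is the more general notion
hard = p.argmax(axis=1)
print(p.shape)
```

Pixels near the midpoint 0.5 get memberships close to (0.5, 0.5), which is exactly the ambiguity a hard Mumford-Shah label cannot express.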
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performances for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
Pre-operative segmentation of neck CT datasets for the planning of neck dissections
NASA Astrophysics Data System (ADS)
Cordes, Jeanette; Dornheim, Jana; Preim, Bernhard; Hertel, Ilka; Strauss, Gero
2006-03-01
For the pre-operative segmentation of CT neck datasets, we developed the software assistant NeckVision. The relevant anatomical structures for neck dissection planning can be segmented, and the resulting patient-specific 3D models are then visualized in another software system for intervention planning. As a first step, we examined the appropriateness of elementary segmentation techniques based on gray values and contour information for extracting structures in the neck region from CT data. Region growing, interactive watershed transformation, and live-wire are employed for segmentation of the different target structures. We also examined which of the segmentation tasks can be automated. Based on this analysis, the software assistant NeckVision was developed to optimally support the clinician's image analysis workflow. The usability of NeckVision was tested in a first evaluation with four otorhinolaryngologists from the University Hospital of Leipzig, four computer scientists from the University of Magdeburg, and two laymen in both fields.
Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Noh, Kyoung Jin; Shim, Hackjoon; Seol, Hae Young
2017-05-01
We developed a semi-automated volumetric software tool, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance against manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, a measure of capillary permeability), of brain tumors were generated by a commercial software package and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors, which showed perfusion trends consistent with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρc) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limits of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates their perfusion parameters. We validated this semi-automated segmentation software by comparison with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP.
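Lin's concordance correlation coefficient used in the validation above has a simple closed form: twice the covariance divided by the sum of the variances plus the squared mean difference. The paired measurements below are synthetic stand-ins for the semi-automated and manual readings.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(9)
manual = rng.uniform(10, 100, 29)             # e.g. 29 manually measured perfusion values
auto = manual + rng.normal(0, 1.0, 29)        # near-identical semi-automated readings

print(lin_ccc(manual, auto))                  # close to 1 for strong agreement
```

Unlike Pearson correlation, the CCC penalizes both scale and location shifts, so it drops below 1 if one method systematically over- or under-reads even when the two are perfectly correlated.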
Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs
NASA Astrophysics Data System (ADS)
De Vylder, Jonas; Philips, Wilfried
2011-02-01
This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan-line algorithm, it is both memory- and computationally efficient, making it interesting for the analysis of images coming from high-throughput systems or the analysis of 3D microscopic images. Experiments show good results, i.e., a recall of over 0.98.
Consumer preferences for general practitioner services.
Morrison, Mark; Murphy, Tom; Nalder, Craig
2003-01-01
This study focuses on segmenting the market for General Practitioner services in a regional setting. Using factor analysis, five main service attributes are identified. These are clear communication, ongoing doctor-patient relationship, same gender as the patient, provides advice to the patient, and empowers the patient to make his/her own decisions. These service attributes are used as a basis for market segmentation, using both socio-demographic variables and cluster analysis. Four distinct market segments are identified, with varying degrees of viability in terms of target marketing.
NASA Astrophysics Data System (ADS)
Buckner, Steven A.
The Helicopter Emergency Medical Service (HEMS) industry has a significant role in the transportation of injured patients, but has experienced more accidents than all other segments of the aviation industry combined. To address this discrepancy, this study assesses the effect of safety management system (SMS) implementation and aviation technology utilization on the reduction of HEMS accident rates. Participating were 147 pilots from Federal Aviation Regulations Part 135 HEMS operators, who completed a survey questionnaire based on the Safety Culture and Safety Management System Survey (SCSMSS). The study assessed the predictive value of SMS implementation and aviation technologies for the frequency of HEMS accident rates using correlation and multiple linear regression. The correlation analysis identified three significant positive relationships: HEMS years of experience had a high significant positive relationship with accident rate (r=.90; p<.05); SMS had a moderate significant positive relationship with Night Vision Goggles (NVG) use (r=.38; p<.05); and SMS had a slight significant positive relationship with Terrain Avoidance Warning System (TAWS) use (r=.234; p<.05). Multiple regression analysis suggested that, when combined with NVG, TAWS, and SMS, HEMS years of experience explained 81.4% of the variance in accident rate scores (p<.05), and HEMS years of experience was found to be a significant predictor of accident rates (p<.05). Additional quantitative regression analysis was recommended to replicate the results of this study, to consider the influence of these variables for continued reduction of HEMS accidents, and to encourage implementation of SMS and aviation technologies from a systems engineering perspective. Recommendations for practice included the adoption of existing regulatory guidance for an SMS program. A qualitative analysis was also recommended for future study of SMS implementation and HEMS accident rates from the pilot's perspective.
A quantitative longitudinal study would further explore inferential relationships between the study variables. Current strategies should include the increased utilization of available aviation technology resources as this proactive stance may be beneficial for the establishment of an effective safety culture within the HEMS industry.
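The correlation-plus-multiple-regression workflow this abstract describes can be sketched as follows. All data below are synthetic stand-ins (the study's survey data are not available), and the variable names and data-generating numbers are hypothetical illustrations:

```python
import numpy as np

# Hypothetical illustration: Pearson correlation and multiple linear
# regression of accident rate on experience, NVG, TAWS, and SMS.
rng = np.random.default_rng(0)
n = 147  # number of surveyed pilots reported in the study

experience = rng.uniform(1, 30, n)          # HEMS years of experience
nvg = rng.integers(0, 2, n).astype(float)   # Night Vision Goggles use
taws = rng.integers(0, 2, n).astype(float)  # TAWS use
sms = rng.uniform(1, 5, n)                  # SMS implementation score
# Synthetic outcome driven mainly by experience, echoing the reported result
accident_rate = 0.5 * experience + 0.3 * nvg + rng.normal(0, 1.0, n)

# Pearson correlation between experience and accident rate
r = np.corrcoef(experience, accident_rate)[0, 1]

# Multiple linear regression via ordinary least squares: X @ beta ~ y
X = np.column_stack([np.ones(n), experience, nvg, taws, sms])
beta, *_ = np.linalg.lstsq(X, accident_rate, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((accident_rate - pred) ** 2) / np.sum(
    (accident_rate - accident_rate.mean()) ** 2)
```

With synthetic data of this shape, both r and the regression R² come out high because experience dominates the outcome, mirroring the pattern the study reports.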
Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory
2004-01-01
Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). 
The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
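The conversion of summed perfusion scores into percent myocardium abnormal (each segment scored 0 to 4) reduces to a one-line function, and the reported 5% cutoff corresponding to a summed stress score greater than 3 follows directly. This is a sketch of that conversion with a hypothetical function name, not the authors' code:

```python
def percent_myocardium_abnormal(summed_score, n_segments):
    """Convert a summed perfusion score (segments scored 0-4) into
    percent myocardium abnormal, as used to compare the 20- and
    17-segment models."""
    return 100.0 * summed_score / (4 * n_segments)
```

A summed stress score of 4 gives 5.0% in the 20-segment model and about 5.9% in the 17-segment model, so in both models a score greater than 3 meets the 5% prognostic cutoff.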
Analysis and testing of a soft actuation system for segmented reflector articulation and isolation
NASA Technical Reports Server (NTRS)
Jandura, Louise; Agronin, Michael L.
1991-01-01
Segmented reflectors have been proposed for space-based applications such as optical communication and large-diameter telescopes. An actuation system for mirrors in a space-based segmented mirror array has been developed as part of the National Aeronautics and Space Administration-sponsored Precision Segmented Reflector program. The actuation system, called the Articulated Panel Module (APM), articulates a mirror panel in 3 degrees of freedom in the submicron regime, isolates the panel from structural motion, and simplifies space assembly of the mirrors to the reflector backup truss. A breadboard of the APM has been built and is described. Three-axis modeling, analysis, and testing of the breadboard is discussed.
A modal comparison of domestic freight transportation effects on the general public
DOT National Transportation Integrated Search
2007-12-01
Initially, this study was designed to focus on certain segments of the IWWS. However, for certain types of analyses, it is not feasible to segregate components of the system, i.e., river segments, rail segments, etc. In these cases, the analysis is...
Market segmentation using perceived constraints
Jinhee Jun; Gerard Kyle; Andrew Mowen
2008-01-01
We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other Priorities--visitors who scored the highest on the 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
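The core idea, replacing a task-agnostic distance with one learned from labeled training classes, can be approximated with a pooled within-class (Mahalanobis-style) metric, a close relative of the LDA transform. This is a simplified stand-in with synthetic two-band "spectra", not a reproduction of the paper's multiclass LDA pipeline:

```python
import numpy as np

# Synthetic two-class, two-band training data: band 0 varies a lot
# within each class (uninformative), band 1 separates the classes.
rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], [3.0, 0.3], size=(50, 2))
X1 = rng.normal([0.0, 2.0], [3.0, 0.3], size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def learn_metric(X, y):
    """Learn a distance from labeled data using the pooled within-class
    covariance; directions with large within-class spread are downweighted."""
    classes = np.unique(y)
    n, d = X.shape
    Sw = sum((np.sum(y == c) - 1) * np.cov(X[y == c].T)
             for c in classes) / (n - len(classes))
    Sw_inv = np.linalg.inv(Sw + 1e-6 * np.eye(d))
    def dist(a, b):
        v = np.asarray(a, float) - np.asarray(b, float)
        return float(np.sqrt(v @ Sw_inv @ v))
    return dist

dist = learn_metric(X, y)
```

Under the learned metric, a unit step along the class-separating band counts for much more than a unit step along the noisy band, which is the behavior a segmentation graph built on this distance would exploit.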
Haufe, William M; Wolfson, Tanya; Hooker, Catherine A; Hooker, Jonathan C; Covarrubias, Yesenia; Schlein, Alex N; Hamilton, Gavin; Middleton, Michael S; Angeles, Jorge E; Hernando, Diego; Reeder, Scott B; Schwimmer, Jeffrey B; Sirlin, Claude B
2017-12-01
To assess and compare the accuracy of magnitude-based magnetic resonance imaging (MRI-M) and complex-based MRI (MRI-C) for estimating hepatic proton density fat fraction (PDFF) in children, using MR spectroscopy (MRS) as the reference standard. A secondary aim was to assess the agreement between MRI-M and MRI-C. This was a HIPAA-compliant, retrospective analysis of data collected in children enrolled in prospective, Institutional Review Board (IRB)-approved studies between 2012 and 2014. Informed consent was obtained from 200 children (ages 8-19 years) who subsequently underwent 3T MR exams that included MRI-M, MRI-C, and T1-independent, T2-corrected, single-voxel stimulated echo acquisition mode (STEAM) MRS. Both MRI methods acquired six echoes at low flip angles. T2*-corrected PDFF parametric maps were generated. PDFF values were recorded from regions of interest (ROIs) drawn on the maps in each of the nine Couinaud segments and three ROIs colocalized to the MRS voxel location. Regression analyses assessing agreement with MRS were performed to evaluate the accuracy of each MRI method, and Bland-Altman and intraclass correlation coefficient (ICC) analyses were performed to assess agreement between the MRI methods. MRI-M and MRI-C PDFF were accurate relative to the colocalized MRS reference standard, with regression intercepts of 0.63% and -0.07%, slopes of 0.998 and 0.975, and proportion-of-explained-variance values (R²) of 0.982 and 0.979, respectively. For individual Couinaud segments and for the whole liver averages, Bland-Altman biases between MRI-M and MRI-C were small (ranging from 0.04% to 1.11%) and ICCs were high (≥0.978). Both MRI-M and MRI-C accurately estimated hepatic PDFF in children, and high intermethod agreement was observed. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:1641-1647. © 2017 International Society for Magnetic Resonance in Medicine.
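The Bland-Altman analysis used to compare MRI-M and MRI-C reduces to the mean of the paired differences (the bias) plus limits of agreement. A minimal sketch, with an illustrative function name and made-up paired PDFF values rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between paired measurements from two
    methods: returns (bias, lower limit, upper limit), where the limits
    of agreement are bias +/- 1.96 * SD of the differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired PDFF values (percent) from two methods
bias, lo, hi = bland_altman([10.0, 12.0, 8.0, 11.0], [9.5, 12.5, 8.0, 10.0])
```

A small bias with narrow limits of agreement, together with a high ICC, is the pattern reported here for MRI-M versus MRI-C.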
Self, Wesley H.; Speroff, Theodore; Grijalva, Carlos G.; McNaughton, Candace D.; Ashburn, Jacki; Liu, Dandan; Arbogast, Patrick G.; Russ, Stephan; Storrow, Alan B.; Talbot, Thomas R.
2012-01-01
Objectives Blood culture contamination is a common problem in the emergency department (ED) that leads to unnecessary patient morbidity and health care costs. The study objective was to develop and evaluate the effectiveness of a quality improvement (QI) intervention for reducing blood culture contamination in an ED. Methods The authors developed a QI intervention to reduce blood culture contamination in the ED and then evaluated its effectiveness in a prospective interrupted time series study. The QI intervention involved changing the technique of blood culture specimen collection from the traditional clean procedure, to a new sterile procedure, with standardized use of sterile gloves and a new materials kit containing a 2% chlorhexidine skin antisepsis device, a sterile fenestrated drape, a sterile needle, and a procedural checklist. The intervention was implemented in a university-affiliated ED and its effect on blood culture contamination evaluated by comparing the biweekly percentages of blood cultures contaminated during a 48-week baseline period (clean technique) and a 48-week intervention period (sterile technique), using segmented regression analysis with adjustment for secular trends and first-order autocorrelation. The goal was to achieve and maintain a contamination rate below 3%. Results During the baseline period, 321 out of 7,389 (4.3%) cultures were contaminated, compared to 111 of 6,590 (1.7%) during the intervention period (p < 0.001). In the segmented regression model, the intervention was associated with an immediate 2.9% (95% CI = 2.2% to 3.2%) absolute reduction in contamination. The contamination rate was maintained below 3% during each biweekly interval throughout the intervention period. Conclusions A QI assessment of ED blood culture contamination led to development of a targeted intervention to convert the process of blood culture collection from a clean to a fully sterile procedure. 
Implementation of this intervention led to an immediate and sustained reduction of contamination in an ED with a high baseline contamination rate. PMID:23570482
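The segmented regression model this abstract describes, a level-change term plus pre- and post-intervention slope terms, can be sketched on synthetic biweekly contamination rates. The numbers below are illustrative, chosen only to echo the reported 4.3% baseline and 2.9-point level drop, and the study's adjustment for first-order autocorrelation is omitted:

```python
import numpy as np

# Segmented regression for an interrupted time series:
#   y = b0 + b1*time + b2*post + b3*time_since_intervention
rng = np.random.default_rng(1)
n_pre, n_post = 24, 24  # two 48-week periods in biweekly intervals
time = np.arange(n_pre + n_post, dtype=float)
post = (time >= n_pre).astype(float)          # 1 after the intervention
time_since = np.where(post == 1, time - n_pre, 0.0)

# Synthetic series: flat 4.3% baseline, immediate 2.9-point level drop
y = 4.3 - 2.9 * post + rng.normal(0, 0.2, time.size)

# Ordinary least squares fit of the four-term segmented model
X = np.column_stack([np.ones_like(time), time, post, time_since])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
```

In a full analysis, the errors would be modeled with first-order autocorrelation (e.g. via generalized least squares) rather than plain ordinary least squares, as the study did.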
Human body surface area: measurement and prediction using three dimensional body scans.
Tikuisis, P; Meunier, P; Jubenville, C E
2001-08-01
The development of three dimensional laser scanning technology and sophisticated graphics editing software have allowed an alternative and potentially more accurate determination of body surface area (BSA). Raw whole-body scans of 641 adults (395 men and 246 women) were obtained from the anthropometric data base of the Civilian American and European Surface Anthropometry Resource project. Following surface restoration of the scans (i.e. patching and smoothing), BSA was calculated. A representative subset of the entire sample population involving 12 men and 12 women (G24) was selected for detailed measurements of hand surface area (SAhand) and ratios of surface area to volume (SA/VOL) of various body segments. Regression equations involving wrist circumference and arm length were used to predict SAhand of the remaining population. The overall mean (SD) BSA values were 2.03 (0.19) and 1.73 (0.19) m^2 for men and women, respectively. Various prediction equations were tested and although most predicted the measured BSA reasonably closely, residual analysis revealed an overprediction with increasing body size in most cases. Separate non-linear regressions for each sex yielded the following best-fit equations (with root mean square errors of about 1.3%): BSA (cm^2) = 128.1 x m^0.44 x h^0.60 for men and BSA (cm^2) = 147.4 x m^0.47 x h^0.55 for women, where m, body mass, is in kilograms and h, height, is in centimetres. The SA/VOL ratios of the various body segments were higher for the women compared to the men of G24, significantly for the head plus neck (by 7%), torso (19%), upper arms (15%), forearms (20%), hands (18%), and feet (11%). The SA/VOL for both sexes ranged from approximately 12 m^-1 for the pelvic region to 104-123 m^-1 for the hands, and shape differences were a factor for the torso and lower leg.
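The reported best-fit equations translate directly into code (the function name is hypothetical; the coefficients are the ones given in the abstract):

```python
def bsa_cm2(mass_kg, height_cm, sex):
    """Best-fit BSA equations reported by the study:
    men:   BSA (cm^2) = 128.1 * m**0.44 * h**0.60
    women: BSA (cm^2) = 147.4 * m**0.47 * h**0.55
    with mass m in kilograms and height h in centimetres."""
    if sex == "male":
        return 128.1 * mass_kg ** 0.44 * height_cm ** 0.60
    return 147.4 * mass_kg ** 0.47 * height_cm ** 0.55
```

For example, an 80 kg, 180 cm man gives roughly 2.0 m^2, consistent with the reported male mean of 2.03 m^2.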
On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.
Kim, Woosuk; Kim, Myunggyu
2018-03-19
In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor to support systematic observation. The main goal is to automatically provide motion data that are temporally classified according to the phase definition, for convenient analysis. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.
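One step implied by this approach, turning per-frame phase predictions into explicit sub-motion segments, can be sketched as a simple grouping pass. The function name and phase labels are hypothetical, and the paper's deep-network sequence classifier is not reproduced here:

```python
def segment_phases(frame_labels):
    """Group consecutive identical per-frame phase labels into
    (label, start_frame, end_frame) segments, inclusive of both ends."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((frame_labels[start], start, i - 1))
            start = i
    return segments
```

Given a classifier that emits one phase label per sensor frame, this yields the temporally classified motion data the paper aims to provide for observation.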
Samuels, David C.; Boys, Richard J.; Henderson, Daniel A.; Chinnery, Patrick F.
2003-01-01
We applied a hidden Markov model segmentation method to the human mitochondrial genome to identify patterns in the sequence, to compare these patterns to the gene structure of mtDNA and to see whether these patterns reveal additional characteristics important for our understanding of genome evolution, structure and function. Our analysis identified three segmentation categories based upon the sequence transition probabilities. Category 2 segments corresponded to the tRNA and rRNA genes, with a greater strand-symmetry in these segments. Category 1 and 3 segments covered the protein-coding genes and almost all of the non-coding D-loop. Compared to category 1, the mtDNA segments assigned to category 3 had much lower guanine abundance. A comparison to two independent databases of mitochondrial mutations and polymorphisms showed that the high substitution rate of guanine in human mtDNA is largest in the category 3 segments. Analysis of synonymous mutations showed the same pattern. This suggests that this heterogeneity in the mutation rate is partly independent of respiratory chain function and is a direct property of the genome sequence itself. This has important implications for our understanding of mtDNA evolution and its use as a ‘molecular clock’ to determine the rate of population and species divergence. PMID:14530452
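A descriptive statistic related to the reported finding, the lower guanine abundance in category 3 segments, is the per-window guanine fraction along a sequence. The sketch below computes only that statistic, with a hypothetical function name; it is not the paper's hidden Markov segmentation:

```python
def guanine_fraction(seq, window):
    """Guanine fraction in consecutive non-overlapping windows of a
    DNA sequence string (trailing partial window is dropped)."""
    return [seq[i:i + window].count("G") / window
            for i in range(0, len(seq) - window + 1, window)]
```

Windows with markedly low guanine fraction would be candidates for the guanine-poor category the segmentation identifies.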