A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
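The dynamic-programming half of DP-EM can be illustrated with a minimal sketch: a least-squares-optimal partition of a 1-D profile into K segments. This shows only the segmentation step on invented values; the paper's algorithm additionally alternates it with an EM step over a mixture model that assigns a biological status to each segment.

```python
def segment_cost(y, i, j):
    """Sum of squared deviations of y[i:j] from its mean."""
    seg = y[i:j]
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def dp_segment(y, K):
    """Return the boundaries of the least-squares-optimal K-segment partition of y."""
    n = len(y)
    INF = float("inf")
    # cost[k][j] = best cost of splitting y[:j] into k segments
    cost = [[INF] * (n + 1) for _ in range(K + 1)]
    back = [[0] * (n + 1) for _ in range(K + 1)]
    cost[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + segment_cost(y, i, j)
                if c < cost[k][j]:
                    cost[k][j], back[k][j] = c, i
    # Trace back the segment boundaries.
    bounds, j = [n], n
    for k in range(K, 0, -1):
        j = back[k][j]
        bounds.append(j)
    return bounds[::-1]

profile = [0.1, 0.0, 0.2, 1.1, 0.9, 1.0, -0.8, -1.0, -0.9]
print(dp_segment(profile, 3))  # → [0, 3, 6, 9]
```

The cubic-time loop above is the textbook form; practical implementations precompute cumulative sums so each `segment_cost` is O(1).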
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time-consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determined the accuracy of automatic rib segmentation in the context of normal tissue complication probability (NTCP) modeling. Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT-derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in the radial direction, but was larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
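The equivalence-testing step can be illustrated with a small TOST sketch on paired dose differences. This uses a large-sample normal approximation rather than the paired t-tests of the study, and the difference values and equivalence margin below are hypothetical:

```python
from statistics import NormalDist, mean, stdev

def tost_paired(d, margin, alpha=0.05):
    """Two one-sided tests for equivalence of paired differences d within ±margin.
    Large-sample normal approximation (the study used t-based TOST)."""
    se = stdev(d) / len(d) ** 0.5
    nd = NormalDist()
    p_lower = 1 - nd.cdf((mean(d) + margin) / se)   # H0: true mean <= -margin
    p_upper = nd.cdf((mean(d) - margin) / se)       # H0: true mean >= +margin
    return max(p_lower, p_upper) < alpha            # True -> equivalent within ±margin

# Hypothetical manual-minus-automatic dose differences (Gy) for 10 patients:
diffs = [0.1, -0.05, 0.02, 0.08, -0.1, 0.0, 0.05, -0.02, 0.03, 0.01]
print(tost_paired(diffs, margin=1.0))  # → True
```

Note the logic of TOST: both one-sided nulls (mean below -margin, mean above +margin) must be rejected before the two quantities are declared equivalent.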
Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs
NASA Astrophysics Data System (ADS)
Shim, Hyeonjoon; Liu, Brent J.; Taira, Ricky K.; Hall, Theodore R.
1997-04-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray-level histogram models. Shape analysis is performed first by using an arc-length orientation transform to break down a given contour into elementary segments and curves. An interpretation tree then serves as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments act as constraints from the shape model and aid in generating a list of candidate matches. The candidate match with the highest confidence is chosen as the current intermediate result. Verification of intermediate results is performed by a feedback control loop.
Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.
Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L
2012-04-01
Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median<3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. Copyright © 2012 Elsevier B.V. All rights reserved.
Markov models of genome segmentation
NASA Astrophysics Data System (ADS)
Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram
2007-01-01
We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
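The divergence-based split criterion that the Markov extension builds on can be sketched in its zeroth-order form: score every split point of a symbolic sequence by the weighted Jensen-Shannon divergence between the two halves and split at the maximum. The "chimeric" sequence below is a toy example, not genomic data:

```python
import math
from collections import Counter

def entropy(p):
    """Shannon entropy (bits) of a probability dict."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def dist(seq):
    """Empirical symbol distribution of a sequence."""
    c = Counter(seq)
    n = len(seq)
    return {s: c[s] / n for s in c}

def js_divergence(seq, i):
    """Weighted Jensen-Shannon divergence between seq[:i] and seq[i:]."""
    left, right, whole = dist(seq[:i]), dist(seq[i:]), dist(seq)
    w = i / len(seq)
    return entropy(whole) - w * entropy(left) - (1 - w) * entropy(right)

def best_split(seq):
    """Split point maximizing the JS divergence between the two halves."""
    return max(range(1, len(seq)), key=lambda i: js_divergence(seq, i))

chimera = "ATATATATGGCGGCGGCG"  # AT-rich fragment joined to a GC-rich one
print(best_split(chimera))  # → 8
```

A higher-order Markov variant replaces the symbol distributions with conditional (k-th order) distributions, which is what makes the segmentation sensitive to local patterns rather than composition alone.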
Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano
2018-04-11
Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.
Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory
2004-01-01
Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). 
The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
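One common normalisation behind "percent myocardium abnormal" divides the summed score by the maximum possible score (4 points per segment times the number of segments). This is shown as an assumption for orientation, not as the paper's exact formula:

```python
def percent_myocardium_abnormal(summed_score, n_segments=17, max_per_segment=4):
    """Summed perfusion score as a percentage of the maximum possible score.
    A common convention, assumed here for illustration."""
    return 100 * summed_score / (n_segments * max_per_segment)

# A summed stress score of 4 on the 17-segment model:
print(round(percent_myocardium_abnormal(4), 1))  # → 5.9
```

Under this convention a summed stress score of 4 is the first value at or above 5% myocardium abnormal (a score of 3 gives about 4.4%), which is consistent with the cutoff quoted above.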
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
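In its simplest form, the histogram-of-oriented-gradients descriptor at the heart of the segmentation-free approach reduces to a magnitude-weighted histogram of gradient orientations. A minimal single-cell sketch on a toy image (no block normalisation or cell tiling, which full HOG adds):

```python
import math

def orientation_histogram(img, n_bins=8):
    """Magnitude-weighted histogram of unsigned gradient orientations for a
    grayscale image given as a list of rows (central differences, interior only)."""
    h = [0.0] * n_bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi  # fold to [0, pi): unsigned orientation
            h[min(int(ang / math.pi * n_bins), n_bins - 1)] += mag
    return h

# A vertical edge: all gradient energy falls in the horizontal-orientation bin.
edge = [[0, 0, 1, 1]] * 4
print(orientation_histogram(edge))
```

Because the descriptor is accumulated directly from gradients, no binary phase map (i.e., no segmentation) is ever computed, which is the point of the paper's approach.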
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose: To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on the imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. Methods: We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured by the Dice similarity coefficient (DSC). Results: The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion: The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
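The Dice similarity coefficient used above has a one-line definition: twice the intersection of two masks divided by the sum of their sizes. A minimal sketch on toy binary masks:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0  # two empty masks agree perfectly

algo  = [1, 1, 1, 0, 0, 1, 0, 0]  # hypothetical algorithm output
truth = [1, 1, 0, 0, 0, 1, 1, 0]  # hypothetical expert delineation
print(round(dice(algo, truth), 2))  # → 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the 0.77 and 0.95 values quoted above indicate substantial agreement.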
SE Great Basin Play Fairway Analysis
Adam Brandt
2015-11-15
This submission includes a map of the probability that Na/K geothermometer temperatures exceed 200 °C, as well as two play fairway analysis (PFA) models. The probability map acts as a composite risk segment for the PFA models. The PFA models differ in their application of magnetotelluric conductors as composite risk segments. These PFA models map the geothermal potential of the SE Great Basin region of Utah.
Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi; Åkerfelt, Malin; Nees, Matthias
2015-01-01
Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.
Analysis of Regional Effects on Market Segment Production
2016-06-01
Master's thesis by James D. Moffitt, June 2016. Thesis Advisor: Lyn R. Whitaker; Co-Advisor: Jonathan K. Alt. The thesis models accessions in Potential Rating Index Zip Code Market New Evolution (PRIZM NE) market segments. This model will aid USAREC G2 analysts involved in …
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data from nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
Using Predictability for Lexical Segmentation.
Çöltekin, Çağrı
2017-09-01
This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
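A minimal batch sketch of the predictability strategy (not the paper's incremental model): estimate syllable-to-syllable transitional probabilities from the raw stream and posit a word boundary at each local minimum of predictability. The two artificial "words" and their ordering below are invented:

```python
from collections import Counter

def transitional_probs(stream):
    """P(next syllable | current syllable), estimated from the unsegmented stream."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

def segment(stream):
    """Posit a word boundary wherever predictability dips below both neighbours."""
    tp = transitional_probs(stream)
    tps = [tp[pair] for pair in zip(stream, stream[1:])]
    words, start = [], 0
    for i in range(1, len(tps) - 1):
        if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]:
            words.append("".join(stream[start:i + 1]))
            start = i + 1
    words.append("".join(stream[start:]))
    return words

# Two artificial words presented in varied order, Saffran-style:
A, B = ["tu", "pi", "ro"], ["go", "la", "bu"]
stream = [syl for word in [A, B, A, B, B, A, B, A, A, B] for syl in word]
print(segment(stream))
```

Within-word transitions here are fully predictable (probability 1) while word boundaries are not, so every local dip coincides with a true boundary; real speech streams are noisier, which is what motivates the in-depth analysis reported in the paper.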
Stochastic modeling of soundtrack for efficient segmentation and indexing of video
NASA Astrophysics Data System (ADS)
Naphade, Milind R.; Huang, Thomas S.
1999-12-01
Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio-events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in foreground and music in background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better result than modeling individual audio events separately.
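The event-detection machinery can be illustrated with a toy log-domain Viterbi decoder over three audio states. The states, discretised observations, and all probabilities below are invented for illustration; the paper's HMMs operate on real acoustic features:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for a discrete-emission HMM (log domain)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prev = max(states, key=lambda p: V[-2][p] + math.log(trans_p[p][s]))
            V[-1][s] = V[-2][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][o])
            new_path[s] = path[prev] + [s]
        path = new_path
    return path[max(states, key=lambda s: V[-1][s])]

states = ["music", "speech", "silence"]
start = {s: 1 / 3 for s in states}
trans = {s: {t: 0.6 if s == t else 0.2 for t in states} for s in states}  # sticky states
emit = {
    "music":   {"tonal": 0.7, "voiced": 0.2, "quiet": 0.1},
    "speech":  {"tonal": 0.2, "voiced": 0.7, "quiet": 0.1},
    "silence": {"tonal": 0.1, "voiced": 0.1, "quiet": 0.8},
}
obs = ["tonal", "tonal", "voiced", "voiced", "quiet"]
print(viterbi(obs, states, start, trans, emit))
# → ['music', 'music', 'speech', 'speech', 'silence']
```

The sticky self-transitions play the role of the temporal-dynamics modeling described above: they smooth over single misleading frames instead of switching events on every observation.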
Analysis and design of segment control system in segmented primary mirror
NASA Astrophysics Data System (ADS)
Yu, Wenhao; Li, Bin; Chen, Mo; Xian, Hao
2017-10-01
Segmented primary mirrors will be widely adopted in future giant telescopes such as TMT, E-ELT and GMT. High-performance control of the segmented primary mirror is one of the difficult technologies for telescopes using such mirrors. The control of each segment is the basis of the control system of a segmented mirror. Correcting the tip and tilt of a single segment is the main work of this paper, which is divided into two parts. First, a harmonic response analysis of the finite element model of a single segment matches the Bode diagram of a second-order system with a natural frequency of 45 Hz and a damping ratio of 0.005. Second, a control system model is established, and speed feedback is introduced into the control loop to suppress the resonance-point gain and increase the open-loop bandwidth to 30 Hz or even higher. A corresponding controller is designed based on this control system model.
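The resonance-suppression argument can be checked numerically on the second-order model quoted above (natural frequency 45 Hz, damping ratio 0.005): the open-loop magnitude peaks at 1/(2ζ) at resonance, and extra damping of the kind contributed by a speed-feedback loop flattens that peak. The extra-damping value below is illustrative, not from the paper:

```python
import math

def plant_mag(f, fn=45.0, zeta=0.005, extra_damping=0.0):
    """|G(j*2*pi*f)| for the second-order segment model G(s) = wn^2/(s^2 + 2*z*wn*s + wn^2).
    extra_damping stands in for the damping added by a speed-feedback loop."""
    wn = 2 * math.pi * fn
    z = zeta + extra_damping
    s = 1j * 2 * math.pi * f
    return abs(wn ** 2 / (s ** 2 + 2 * z * wn * s + wn ** 2))

peak_open   = plant_mag(45.0)                     # 1/(2*0.005) = 100, i.e. a +40 dB peak
peak_damped = plant_mag(45.0, extra_damping=0.3)  # resonance flattened to ~1.6
```

With the resonance peak suppressed, the loop gain can be raised, which is why the open-loop bandwidth can be pushed toward 30 Hz.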
Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation
NASA Astrophysics Data System (ADS)
Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill
2012-06-01
Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes, using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probability of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
The ASAC Flight Segment and Network Cost Models
NASA Technical Reports Server (NTRS)
Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.
1997-01-01
To assist NASA in identifying research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or be used to independently estimate the variable operating costs of flying a fleet of equipment, given the number of departures and average flight stage lengths.
Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki
2017-02-01
This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, and no systematic error. Applying each equation derived in the model-development group to the cross-validation and overweight groups produced no significant differences between the measured and predicted FFM values and no systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and each body segment in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
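The prediction step described above is ordinary simple regression of measured FFM on the BI index (length)²/Z. A sketch with invented calibration numbers (not the study's data; units are only indicative):

```python
def fit_line(x, y):
    """Ordinary least squares fit y ≈ a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration pairs: (segment length in cm, impedance Z in ohm)
# against DXA-measured leg FFM in kg.
lz = [(70, 250), (75, 230), (80, 210), (85, 200), (90, 185)]
bi_index = [(l ** 2) / z for l, z in lz]  # the (length)^2 / Z predictor
ffm = [5.1, 6.2, 7.5, 8.6, 10.0]

a, b = fit_line(bi_index, ffm)
predicted = a * ((82 ** 2) / 205) + b  # new child: length 82 cm, Z = 205 ohm
```

The cross-validation design in the study then checks such an equation, fitted on one group, against independent measurements from another group.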
Multi-object segmentation framework using deformable models for medical imaging analysis.
Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel
2016-08-01
Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing to select a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. 
Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities, giving excellent quantitative results.
van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna
2012-03-01
Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was performed. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated.
The active surface segmentation results were shown to closely approximate manual segmentations.
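The initial-surface idea above, a voxel-wise temporal maximum of the measured blood-flow velocities, can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the array layout and function name are assumptions:

```python
import numpy as np

def temporal_max_speed(velocity):
    """Voxel-wise temporal maximum of blood-flow speed.

    velocity: array of shape (T, Z, Y, X, 3) holding the three
    velocity-encoded components per voxel and time frame.
    Returns an array of shape (Z, Y, X) that highlights the lumen,
    since flowing blood reaches high speed at some point in the cycle.
    """
    speed = np.linalg.norm(velocity, axis=-1)   # (T, Z, Y, X)
    return speed.max(axis=0)                    # (Z, Y, X)

# tiny synthetic example: one "lumen" voxel pulses at frame 2
v = np.zeros((4, 1, 2, 2, 3))
v[2, 0, 0, 0] = [3.0, 4.0, 0.0]   # speed 5.0 at its peak
m = temporal_max_speed(v)
```

Thresholding such a map (or extracting an isosurface from it) then yields the approximate luminal geometry used to initialize the active surface.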
Multivariate statistical model for 3D image segmentation with application to medical images.
John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O
2003-12-01
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
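The overlap criterion used for evaluation can be illustrated with a short sketch. The Jaccard form shown here is one common choice for IBSR-style comparisons; treating it as the paper's exact metric is an assumption:

```python
import numpy as np

def overlap(seg, ref):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between two binary masks.
    A common form of the IBSR-style overlap criterion (assumption:
    the paper may use a different variant)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return inter / union if union else 1.0

# toy masks: intersection 2 voxels, union 4 voxels -> overlap 0.5
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
```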
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for the final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
van 't Klooster, Ronald; de Koning, Patrick J H; Dehnavi, Reza Alizadeh; Tamsma, Jouke T; de Roos, Albert; Reiber, Johan H C; van der Geest, Rob J
2012-01-01
To develop and validate an automated segmentation technique for the detection of the lumen and outer wall boundaries in MR vessel wall studies of the common carotid artery. A new segmentation method was developed using a three-dimensional (3D) deformable vessel model requiring only one single user interaction by combining 3D MR angiography (MRA) and 2D vessel wall images. This vessel model is a 3D cylindrical Non-Uniform Rational B-Spline (NURBS) surface which can be deformed to fit the underlying image data. Image data of 45 subjects was used to validate the method by comparing manual and automatic segmentations. Vessel wall thickness and volume measurements obtained by both methods were compared. Substantial agreement was observed between manual and automatic segmentation; over 85% of the vessel wall contours were segmented successfully. The intraclass correlation was 0.690 for the vessel wall thickness and 0.793 for the vessel wall volume. Compared with manual image analysis, the automated method demonstrated improved interobserver agreement and inter-scan reproducibility. Additionally, the proposed automated image analysis approach was substantially faster. This new automated method can reduce analysis time and enhance reproducibility of the quantification of vessel wall dimensions in clinical studies. Copyright © 2011 Wiley Periodicals, Inc.
Knowledge-based segmentation and feature analysis of hand and wrist radiographs
NASA Astrophysics Data System (ADS)
Efford, Nicholas D.
1993-07-01
The segmentation of hand and wrist radiographs for applications such as skeletal maturity assessment is best achieved by model-driven approaches incorporating anatomical knowledge. The reasons for this are discussed, and a particular frame-based or 'blackboard' strategy for the simultaneous segmentation of the hand and estimation of bone age via the TW2 method is described. The new approach is structured for optimum robustness and computational efficiency: features of interest are detected and analyzed in order of their size and prominence in the image, the largest and most distinctive being dealt with first, and the evidence generated by feature analysis is used to update a model of hand anatomy and hence guide later stages of the segmentation. Closed bone boundaries are formed by a hybrid technique combining knowledge-based, one-dimensional edge detection with model-assisted heuristic tree searching.
Integrated modeling analysis of a novel hexapod and its application in active surface
NASA Astrophysics Data System (ADS)
Yang, Dehua; Zago, Lorenzo; Li, Hui; Lambert, Gregory; Zhou, Guohua; Li, Guoping
2011-09-01
This paper presents the concept and integrated modeling analysis of a novel mechanism, a 3-CPS/RPPS hexapod, for supporting segmented reflectors of radio telescopes and eventually segmented mirrors of optical telescopes. The concept comprises a novel type of hexapod with an original arrangement of actuators, and hence of degrees of freedom, based on a swaying-arm design. With specially designed joints connecting the panels/segments, an iso-static master-slave active surface can then be achieved for any triangular and/or hexagonal panel/segment pattern. The integrated modeling covers all the sizing and performance aspects that must be evaluated concurrently in order to optimize and validate the design and the configuration. In particular, a comprehensive investigation of kinematic behavior, dynamic analysis, wave-front error and sensitivity analysis is carried out, using frequently used tools such as MATLAB/SimMechanics, CALFEM and ANSYS. Notably, we introduce the finite element method as a competent approach for analyzing the multi-degree-of-freedom mechanism. Experimental verifications already performed, validating single aspects of the integrated concept, are also presented together with the results obtained.
An accurate real-time model of maglev planar motor based on compound Simpson numerical integration
NASA Astrophysics Data System (ADS)
Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi
2017-05-01
To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to handle the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be put to practical use.
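The compound (composite) Simpson rule invoked for the corner segment is the standard quadrature sketched below. This is a generic implementation of the rule, not the authors' force/torque code:

```python
def composite_simpson(f, a, b, n):
    """Composite (compound) Simpson's rule for the integral of f
    over [a, b] using n equal subintervals (n must be even).

    Weights follow the 1, 4, 2, 4, ..., 2, 4, 1 pattern."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + h * k) for k in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + h * k) for k in range(2, n, 2))  # even interior nodes
    return s * h / 3
```

In the motor model, `f` would be the Lorentz force (or torque) integrand evaluated along the curved corner path; Simpson's rule is attractive here because it is exact for cubics, so few nodes suffice for real-time evaluation.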
Segmentation of radiographic images under topological constraints: application to the femur.
Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang
2010-09-01
A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
Eckert, Paulo Roberto; Goltz, Evandro Claiton; Filho, Aly Ferreira Flores
2014-01-01
This work analyses the effects of segmentation followed by parallel magnetization of ring-shaped NdFeB permanent magnets used in slotless cylindrical linear actuators. The main purpose of the work is to evaluate the effects of that segmentation on the performance of the actuator and to present a general overview of the influence of parallel magnetization by varying the number of segments and comparing the results with ideal radially magnetized rings. The analysis is first performed by modelling mathematically the radial and circumferential components of magnetization for both radial and parallel magnetizations, followed by an analysis carried out by means of the 3D finite element method. Results obtained from the models are validated by measuring radial and tangential components of magnetic flux distribution in the air gap on a prototype which employs magnet rings with eight segments each with parallel magnetization. The axial force produced by the actuator was also measured and compared with the results obtained from numerical models. Although this analysis focused on a specific topology of cylindrical actuator, the observed effects on the topology could be extended to others in which surface-mounted permanent magnets are employed, including rotating electrical machines. PMID:25051032
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (10^10 star-to-planet flux ratio at a few 0.1”). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. The analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It obviates the need for the end-to-end Monte-Carlo simulations that are otherwise required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.
Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente
2016-05-01
Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analysis based on biomechanical models, which is of paramount importance in fields such as sport activities or impact crash test. Early approaches for BSIP identification rely on the experiments conducted on cadavers or through imaging techniques conducted on living subjects. Recent approaches for BSIP identification rely on inverse dynamic modeling. However, most of the approaches are focused on the entire body, and verification of BSIP for dynamic analysis for distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained by using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamics identification models (SIM, DIM). We test the validity of SIM and DIM by comparing the results using parameters obtained from a regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both SIM and DIM are developed considering robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equation allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamics modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230) for dynamic modeling of body segments.
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to enhance the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, as compared to the other colour components of the RGB and HSI colour models.
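The two key steps, extracting the HSI saturation component and clustering it, can be sketched as follows. For brevity the sketch uses plain k-means as a stand-in for the paper's moving k-means variant, and the function names are hypothetical:

```python
import numpy as np

def saturation(rgb):
    """HSI saturation S = 1 - 3*min(R,G,B)/(R+G+B) per pixel.
    rgb: float array (..., 3) with values in [0, 1]."""
    rgb = np.asarray(rgb, float)
    total = rgb.sum(axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        sat = 1.0 - 3.0 * rgb.min(axis=-1) / total
    return np.nan_to_num(sat)   # black pixels (sum 0) -> saturation 0

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain k-means on a 1-D feature (stand-in for the paper's
    moving k-means clustering)."""
    x = np.asarray(x, float).ravel()
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()
    return labels, centres

# low-saturation background vs high-saturation nuclei, as two clusters
labels, centres = kmeans_1d([0.0, 0.1, 0.9, 1.0], 2)
```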
Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data
NASA Astrophysics Data System (ADS)
Engel, Karin; Brechmann, André; Toennies, Klaus
The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.
A musculoskeletal foot model for clinical gait analysis.
Saraswat, Prabhav; Andersen, Michael S; Macwilliams, Bruce A
2010-06-18
Several full body musculoskeletal models have been developed for research applications and these models may potentially be developed into useful clinical tools to assess gait pathologies. Existing full-body musculoskeletal models treat the foot as a single segment and ignore the motions of the intrinsic joints of the foot. This assumption limits the use of such models in clinical cases with significant foot deformities. Therefore, a three-segment musculoskeletal model of the foot was developed to match the segmentation of a recently developed multi-segment kinematic foot model. All the muscles and ligaments of the foot spanning the modeled joints were included. Muscle pathways were adjusted with an optimization routine to minimize the difference between the muscle flexion-extension moment arms from the model and moment arms reported in literature. The model was driven by walking data from five normal pediatric subjects (aged 10.6+/-1.57 years) and muscle forces and activation levels required to produce joint motions were calculated using an inverse dynamic analysis approach. Due to the close proximity of markers on the foot, small marker placement error during motion data collection may lead to significant differences in musculoskeletal model outcomes. Therefore, an optimization routine was developed to enforce joint constraints, optimally scale each segment length and adjust marker positions. To evaluate the model outcomes, the muscle activation patterns during walking were compared with electromyography (EMG) activation patterns reported in the literature. Model-generated muscle activation patterns were observed to be similar to the EMG activation patterns. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi
2010-03-01
In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable and its accuracy is of special interest. However, automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features, extracted using an overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
Sarment: Python modules for HMM analysis and partitioning of sequences.
Guéguen, Laurent
2005-08-15
Sarment is a package of Python modules for easy building and manipulation of sequence segmentations. It provides efficient implementations of the usual algorithms for hidden Markov model computation, as well as for maximal predictive partitioning. Owing to its very large variety of criteria for computing segmentations, Sarment can handle many kinds of models. Thanks to object-oriented programming, the results of the segmentation are very easy to manipulate.
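The core HMM computation behind such segmentation packages is Viterbi decoding, which finds the most likely hidden-state (segment-label) path for a sequence. The sketch below is a generic log-domain implementation and does not reflect Sarment's actual API:

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state path for a discrete-emission HMM.

    obs:   sequence of observed symbol indices
    start: initial state probabilities, shape (S,)
    trans: transition matrix, shape (S, S)
    emit:  emission matrix, shape (S, n_symbols)
    Works in log space to avoid underflow on long sequences."""
    start, trans, emit = (np.asarray(a, float) for a in (start, trans, emit))
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        cand = logp[:, None] + np.log(trans)   # score of each (from, to) pair
        back.append(cand.argmax(axis=0))
        logp = cand.max(axis=0) + np.log(emit[:, o])
    path = [int(logp.argmax())]
    for b in reversed(back):                   # backtrack
        path.append(int(b[path[-1]]))
    return path[::-1]

# two "segment types" with sticky transitions; state i prefers symbol i
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [[0.9, 0.1], [0.1, 0.9]]
```

Runs of identical states in the decoded path then correspond directly to segments of the input sequence.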
Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.
Fukurai, H
1991-01-01
This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.
Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.
Pehkonen, Petri; Wong, Garry; Törönen, Petri
2010-01-01
Segmentation aims to separate homogeneous areas from sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation in locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requiring user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods such as the Bayesian information criterion. Such methods are based on some simplification, which can limit their usage. In this paper, we propose a Bayesian model selection criterion to choose the most appropriate result from heuristic segmentation. Our Bayesian model presents a simple prior for segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method to yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
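The asymptotic selection criteria that the paper contrasts with its Bayesian approach can be illustrated with a BIC-style sketch over exact piecewise-constant segmentations. The penalty form and function names here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def segmentation_rss(x, kmax):
    """Minimum residual sum of squares of the best piecewise-constant
    segmentation of x into k = 1..kmax segments, by dynamic programming."""
    x = np.asarray(x, float)
    n = len(x)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    csum2 = np.concatenate(([0.0], np.cumsum(x * x)))
    def rss(i, j):                 # RSS of one mean fitted to x[i:j]
        s, m = csum[j] - csum[i], j - i
        return csum2[j] - csum2[i] - s * s / m
    D = np.full((kmax + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for k in range(1, kmax + 1):
        for j in range(k, n + 1):
            D[k, j] = min(D[k - 1, i] + rss(i, j) for i in range(k - 1, j))
    return D[1:, n]                # RSS for k = 1..kmax

def pick_k_bic(x, kmax):
    """Segment count minimizing a BIC-style score n*log(RSS/n) + 2k*log(n);
    a sketch of the asymptotic criteria the Bayesian method improves on."""
    x = np.asarray(x, float)
    n = len(x)
    bic = [n * np.log(max(r, 1e-12) / n) + 2 * (k + 1) * np.log(n)
           for k, r in enumerate(segmentation_rss(x, kmax))]
    return int(np.argmin(bic)) + 1
```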
On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.
Kim, Woosuk; Kim, Myunggyu
2018-03-19
In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.
Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.
Würflinger, T; Gamper, I; Aach, T; Sechi, A S
2011-01-01
Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
Design and Analysis of an X-Ray Mirror Assembly Using the Meta-Shell Approach
NASA Technical Reports Server (NTRS)
McClelland, Ryan S.; Bonafede, Joseph; Saha, Timo T.; Solly, Peter M.; Zhang, William W.
2016-01-01
Lightweight and high resolution optics are needed for future space-based x-ray telescopes to achieve advances in high-energy astrophysics. Past missions such as Chandra and XMM-Newton have achieved excellent angular resolution using a full shell mirror approach. Other missions such as Suzaku and NuSTAR have achieved lightweight mirrors using a segmented approach. This paper describes a new approach, called meta-shells, which combines the fabrication advantages of segmented optics with the alignment advantages of full shell optics. Meta-shells are built by layering overlapping mirror segments onto a central structural shell. The resulting optic has the stiffness and rotational symmetry of a full shell, but with an order of magnitude greater collecting area. Several meta-shells so constructed can be integrated into a large x-ray mirror assembly by proven methods used for Chandra and XMM-Newton. The mirror segments are mounted to the meta-shell using a novel four point semi-kinematic mount. The four point mount deterministically locates the segment in its most performance sensitive degrees of freedom. Extensive analysis has been performed to demonstrate the feasibility of the four point mount and meta-shell approach. A mathematical model of a meta-shell constructed with mirror segments bonded at four points and subject to launch loads has been developed to determine the optimal design parameters, namely bond size, mirror segment span, and number of layers per meta-shell. The parameters of an example 1.3 m diameter mirror assembly are given including the predicted effective area. To verify the mathematical model and support opto-mechanical analysis, a detailed finite element model of a meta-shell was created. Finite element analysis predicts low gravity distortion and low sensitivity to thermal gradients.
Analysis of a kinetic multi-segment foot model part II: kinetics and clinical implications.
Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L
2012-04-01
Kinematic multi-segment foot models have seen increased use in clinical and research settings, but the addition of kinetics has been limited and hampered by measurement limitations and modeling assumptions. In this second of two companion papers, we complete the presentation and analysis of a three-segment kinetic foot model by incorporating kinetic parameters and calculating joint moments and powers. The model was tested on 17 pediatric subjects (ages 7-18 years) during normal gait. Ground reaction forces were measured using two adjacent force platforms, requiring targeted walking and the creation of two sub-models to analyze the ankle, midtarsal, and 1st metatarsophalangeal joints. Targeted walking resulted in only minimal kinematic and kinetic differences compared with walking at self-selected speeds. Joint moments and powers were calculated, and ensemble averages are presented as a normative database for comparison purposes. Ankle joint powers are shown to be overestimated when using a traditional single-segment foot model, as substantial angular velocities are attributed to the midtarsal joint. Power transfer is apparent between the 1st metatarsophalangeal and midtarsal joints in terminal stance/pre-swing. While the measurement approach presented here is limited to clinical populations with only minimal impairments, some elements of the model can also be incorporated into routine clinical gait analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
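The overestimation mechanism described above is just the definition of joint power: the dot product of joint moment and angular velocity. A minimal sketch with purely illustrative numbers (not the study's data), assuming a sagittal-plane moment:

```python
import numpy as np

# Joint power = moment . angular velocity. Lumping the midtarsal angular
# velocity into a one-segment "ankle" inflates the computed ankle power,
# which is the overestimation noted in the abstract. Numbers are illustrative.
moment_ankle = np.array([0.0, 0.0, 120.0])   # N*m, sagittal-plane moment
omega_ankle = np.array([0.0, 0.0, 2.0])      # rad/s, true ankle angular velocity
omega_midtarsal = np.array([0.0, 0.0, 1.0])  # rad/s, motion a rigid foot hides

power_ankle = moment_ankle @ omega_ankle                               # 240 W
power_single_segment = moment_ankle @ (omega_ankle + omega_midtarsal)  # 360 W
overestimate = power_single_segment - power_ankle
```

With these assumed values, a single-segment model attributes the midtarsal motion to the ankle and reports 50% more ankle power than the three-segment model.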
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.
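The parametric shape model described above (mean shape plus a linear combination of principal component vectors) can be sketched on synthetic data; the training shapes, mode count, and dimensions below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Statistical shape model sketch: training shapes stacked as vectors, PCA
# extracts the principal modes, and any shape is approximated as
# mean + linear combination of the leading eigenvectors.
rng = np.random.default_rng(0)
n_train, n_points = 20, 50
shapes = rng.normal(size=(n_train, n_points))   # synthetic training shape vectors

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 5
modes = Vt[:n_modes]                            # principal component vectors

# Project a training shape onto the model and reconstruct it
b = (shapes[0] - mean_shape) @ modes.T          # mode weights
reconstruction = mean_shape + b @ modes

# Using all components reconstructs the shape (almost) exactly
err_5 = np.linalg.norm(shapes[0] - reconstruction)
full = mean_shape + ((shapes[0] - mean_shape) @ Vt.T) @ Vt
err_full = np.linalg.norm(shapes[0] - full)
```

In the paper's setting the mode weights `b` would be estimated by maximum a posteriori fitting to the target image rather than by direct projection.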
Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M
2014-06-19
An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
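The segmented regression described above can be sketched with ordinary least squares on a synthetic interrupted time series; the intervention time, effect sizes, and noise level below are illustrative assumptions:

```python
import numpy as np

# Segmented (interrupted time series) regression sketch: level and slope
# are allowed to change at the intervention time t0.
rng = np.random.default_rng(1)
t = np.arange(48, dtype=float)           # e.g. monthly observations
t0 = 24                                  # intervention point
D = (t >= t0).astype(float)              # post-intervention indicator

# Synthetic series: baseline trend, then a level change of +5 and a
# slope change of +0.5 after the intervention, plus noise.
y = 10 + 0.2 * t + 5.0 * D + 0.5 * (t - t0) * D + rng.normal(0, 0.3, t.size)

# Design matrix: intercept, time, level change, slope change
X = np.column_stack([np.ones_like(t), t, D, (t - t0) * D])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

The fitted `level_change` and `slope_change` recover the simulated intervention effects; a standard regression with time as a single continuous variable would conflate them with the baseline trend.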
Dynamical simulation of E-ELT segmented primary mirror
NASA Astrophysics Data System (ADS)
Sedghi, B.; Muller, M.; Bauvir, B.
2011-09-01
The dynamical behavior of the primary mirror (M1) has an important impact on the control of the segments and the performance of the telescope. Control of large segmented mirrors with a large number of actuators and sensors and multiple control loops in real life is a challenging problem. In virtual life, modeling, simulation and analysis of the M1 bears similar difficulties and challenges. In order to capture the dynamics of the segment subunits (high frequency modes) and the telescope back structure (low frequency modes), high order dynamical models with a very large number of inputs and outputs need to be simulated. In this paper, different approaches for dynamical modeling and simulation of the M1 segmented mirror subject to various perturbations, e.g. sensor noise, wind load, vibrations, earthquake are presented.
Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model
NASA Astrophysics Data System (ADS)
Lee, Myungeun; Kim, Jong Hyo
2012-02-01
Recently, breast MR images have been used in a wider range of clinical areas, including diagnosis, treatment planning, and treatment response evaluation, which requires quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, robustly segmenting breast tissues from surrounding structures across a wide range of anatomical diversity remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmentation of the pectoral muscle, and finally extraction and refinement of the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method can support quantitative research on various breast images.
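A 2D structure tensor of the kind used above can be sketched in a few lines: smoothed outer products of the image gradients, whose eigenvalues encode local edge strength and orientation. The image, smoothing window, and coherence measure below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Structure tensor sketch on a synthetic vertical edge.
yy, xx = np.mgrid[0:64, 0:64]
image = np.where(xx > 32, 1.0, 0.0)

gy, gx = np.gradient(image)
# Tensor components, locally averaged over a 5x5 window
Jxx = uniform_filter(gx * gx, size=5)
Jxy = uniform_filter(gx * gy, size=5)
Jyy = uniform_filter(gy * gy, size=5)

# Eigenvalues of the 2x2 tensor at each pixel
trace = Jxx + Jyy
det = Jxx * Jyy - Jxy ** 2
disc = np.sqrt(np.maximum((trace / 2) ** 2 - det, 0.0))
lam1 = trace / 2 + disc
lam2 = trace / 2 - disc

# Coherence is near 1 along the edge and 0 in flat regions
coherence = np.where(lam1 + lam2 > 1e-12,
                     (lam1 - lam2) / (lam1 + lam2 + 1e-12), 0.0)
```

A boundary like the pectoral muscle edge shows up as a ridge of high coherence, which can then guide a deformable model.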
Segmentation in low-penetration and low-involvement categories: an application to lottery games.
Guesalaga, Rodrigo; Marshall, Pablo
2013-09-01
Market segmentation is accepted as a fundamental concept in marketing, and several authors have recently proposed a segmentation model where personal and environmental variables intersect with each other to form motivating conditions that drive behavior and preferences. This model of segmentation has been applied to packaged goods. This paper extends this literature by proposing a segmentation model for low-penetration and low-involvement (LP-LI) products. An application to lottery games in Chile supports the proposed model. The results of the study show that for this type of product (LP-LI), attitude towards the product category is the most important factor distinguishing consumers from non-consumers, and heavy users from light users, and is consequently a critical segmentation variable. In addition, a cluster analysis shows the existence of three segments: (1) the impulsive dreamers, who believe in chance and that lottery games can change their lives; (2) the skeptical, who believe neither in chance nor that lottery games can change their lives; and (3) the willing, who value the benefits of playing.
Segmentation of 3d Models for Cultural Heritage Structural Analysis - Some Critical Issues
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Guidi, G.; De Luca, L.
2017-08-01
Cultural Heritage documentation and preservation has become a fundamental concern in this historical period. 3D modelling offers a perfect aid to record ancient buildings and artefacts and can be used as a valid starting point for restoration, conservation and structural analysis, which can be performed by using Finite Element Methods (FEA). The models derived from reality-based techniques, made up of the exterior surfaces of the objects captured at high resolution, are - for this reason - made of millions of polygons. Such meshes are not directly usable in structural analysis packages and need to be properly pre-processed in order to be transformed into volumetric meshes suitable for FEA. In addition, when dealing with ancient objects, a proper segmentation of 3D volumetric models is needed to analyse the behaviour of the structure with the most suitable level of detail for the different sections of the structure under analysis. Segmentation of 3D models is still an open issue, especially when dealing with ancient, complicated and geometrically complex objects that imply the presence of anomalies and gaps, due to environmental agents such as earthquakes, pollution, wind and rain, or human factors. The aim of this paper is to critically analyse some of the different methodologies and algorithms available to segment a 3D point cloud or a mesh, identifying difficulties and problems by showing examples on different structures.
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis, but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation-based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
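The voxel-level porosity mapping described above can be sketched with a small two-component Gaussian mixture fitted by EM; the grey-value distributions below are synthetic stand-ins for chalk data, and the posterior weight of the darker component serves as the per-voxel porosity estimate:

```python
import numpy as np

# Greyscale (segmentation-free) porosity sketch: grey values are modelled
# as a two-component Gaussian mixture (pore vs. solid) and each voxel's
# porosity is the posterior weight of the pore component, so partial
# volume voxels receive fractional porosity instead of a hard label.
rng = np.random.default_rng(2)
pore = rng.normal(50, 5, 6000)       # synthetic pore-phase grey values
solid = rng.normal(150, 10, 4000)    # synthetic solid-phase grey values
grey = np.concatenate([pore, solid])

# Tiny EM for a 1D two-component Gaussian mixture
mu = np.array([grey.min(), grey.max()], float)
sigma = np.array([grey.std(), grey.std()])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities
    pdf = np.exp(-0.5 * ((grey[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations
    w = r.mean(axis=0)
    mu = (r * grey[:, None]).sum(axis=0) / r.sum(axis=0)
    sigma = np.sqrt((r * (grey[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))

voxel_porosity = r[:, 0]                # posterior of the darker (pore) mode
total_porosity = voxel_porosity.mean()  # greyscale porosity estimate
```

With 60% of the synthetic voxels drawn from the pore mode, the greyscale estimate recovers a total porosity near 0.6 without any thresholding step.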
Dispersed Fringe Sensing Analysis - DFSA
NASA Technical Reports Server (NTRS)
Sigrist, Norbert; Shi, Fang; Redding, David C.; Basinger, Scott A.; Ohara, Catherine M.; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.; Spechler, Joshua A.
2012-01-01
Dispersed Fringe Sensing (DFS) is a technique for measuring and phasing segmented telescope mirrors using a dispersed broadband light image. DFS is capable of breaking the monochromatic light ambiguity, measuring absolute piston errors between segments of large segmented primary mirrors to tens of nanometers accuracy over a range of 100 micrometers or more. The DFSA software tool analyzes DFS images to extract DFS encoded segment piston errors, which can be used to measure piston distances between primary mirror segments of ground and space telescopes. This information is necessary to control mirror segments to establish a smooth, continuous primary figure needed to achieve high optical quality. The DFSA tool is versatile, allowing precise piston measurements from a variety of different optical configurations. DFSA technology may be used for measuring wavefront pistons from sub-apertures defined by adjacent segments (such as Keck Telescope), or from separated sub-apertures used for testing large optical systems (such as sub-aperture wavefront testing for large primary mirrors using auto-collimating flats). An experimental demonstration of the coarse-phasing technology with verification of DFSA was performed at the Keck Telescope. DFSA includes image processing, wavelength and source spectral calibration, fringe extraction line determination, dispersed fringe analysis, and wavefront piston sign determination. The code is robust against internal optical system aberrations and against spectral variations of the source. In addition to the DFSA tool, the software package contains a simple but sophisticated MATLAB model to generate dispersed fringe images of optical system configurations in order to quickly estimate the coarse phasing performance given the optical and operational design requirements. 
Combining MATLAB (a high-level language and interactive environment developed by MathWorks), MACOS (JPL s software package for Modeling and Analysis for Controlled Optical Systems), and DFSA provides a unique optical development, modeling and analysis package to study current and future approaches to coarse phasing controlled segmented optical systems.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
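The logistic stick-breaking construction can be sketched in one dimension: each stick weight is a logistic function of location, so the resulting segment probabilities are spatially contiguous, and each segment carries its own Poisson intensity. All parameters below are illustrative assumptions, not the paper's inferred values:

```python
import numpy as np

# Logistic stick-breaking over a 1D spatial axis: logistic "sticks"
# yield contiguous segment probabilities; mixing per-segment Poisson
# rates gives a piecewise-(near-)constant intensity field.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(0, 10, 200)
v1 = sigmoid(4.0 * (3.0 - x))      # first stick: favours the left region
v2 = sigmoid(4.0 * (7.0 - x))      # second stick: favours the middle region
p1 = v1
p2 = (1 - v1) * v2
p3 = (1 - v1) * (1 - v2)           # remaining stick
probs = np.stack([p1, p2, p3])      # segment membership probabilities (sum to 1)

rates = np.array([2.0, 10.0, 5.0])  # per-segment Poisson intensities
intensity = rates @ probs           # mixed intensity at each location
```

The sharp logistic transitions make the mixture behave like three contiguous segments with intensities 2, 10, and 5, which is the piecewise-constant structure the LSBP prior encourages.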
Aalaei, Shima; Rajabi Naraki, Zahra; Nematollahi, Fatemeh; Beyabanaki, Elaheh; Shahrokhi Rad, Afsaneh
2017-01-01
Background. Screw-retained restorations are favored in some clinical situations such as limited inter-occlusal spaces. This study was designed to compare stresses developed in the peri-implant bone in two different types of screw-retained restorations (segmented vs. non-segmented abutment) using a finite element model. Methods. An implant, 4.1 mm in diameter and 10 mm in length, was placed in the first molar site of a mandibular model with 1 mm of cortical bone on the buccal and lingual sides. Segmented and non-segmented screw abutments with their crowns were placed on the simulated implant in each model. After loading (100 N, axial and 45° non-axial), von Mises stress was recorded using ANSYS software, version 12.0.1. Results. The maximum stresses in the non-segmented abutment screw were less than those of segmented abutment (87 vs. 100, and 375 vs. 430 MPa under axial and non-axial loading, respectively). The maximum stresses in the peri-implant bone for the model with segmented abutment were less than those of non-segmented ones (21 vs. 24 MPa, and 31 vs. 126 MPa under vertical and angular loading, respectively). In addition, the micro-strain of peri-implant bone for the segmented abutment restoration was less than that of non-segmented abutment. Conclusion. Under axial and non-axial loadings, non-segmented abutment showed less stress concentration in the screw, while there was less stress and strain in the peri-implant bone in the segmented abutment. PMID:29184629
Computer model of cardiovascular control system responses to exercise
NASA Technical Reports Server (NTRS)
Croston, R. C.; Rummel, J. A.; Kay, F. J.
1973-01-01
Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.
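Not the paper's model, but a minimal sketch of the kind of lumped-parameter compartment such circulatory block diagrams are built from: a two-element windkessel for a single arterial segment, integrated with forward Euler (all parameter values are assumed):

```python
import numpy as np

# Two-element windkessel sketch: C dP/dt = q_in - P/R, driven by a
# pulsatile inflow. All parameters are illustrative assumptions.
C = 1.5      # arterial compliance (mL/mmHg)
R = 1.0      # peripheral resistance (mmHg*s/mL)
dt = 0.001
t = np.arange(0.0, 10.0, dt)

# Pulsatile inflow from the heart (assumed waveform, period 1 s)
q_in = np.maximum(np.sin(2 * np.pi * t), 0.0) * 300.0  # mL/s

p = np.zeros_like(t)
p[0] = 80.0  # initial arterial pressure (mmHg)
for i in range(len(t) - 1):
    dp = (q_in[i] - p[i] / R) / C
    p[i + 1] = p[i] + dt * dp

mean_p = p[int(len(t) * 0.8):].mean()   # mean pressure after transients
```

At steady state the mean pressure settles near mean inflow times resistance (about 300/pi, i.e. roughly 95 mmHg here); exercise responses in such models are produced by varying R, C, and the inflow waveform with work load.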
NASA Astrophysics Data System (ADS)
Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.
2013-03-01
Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. This problem has not been solved completely, especially across all of the thoracic, abdominal and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model and prior segmentation of fat and bones. First, the body contour, fat and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained by the pre-segmented bone and fat. Before the ACM refinement, the initial atlas model for the next slice is updated using the previous slice's atlas. The muscle is segmented using a threshold and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method, and five key position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1% ± 3.5%, and that of false positives is 5.5% ± 4.2%.
End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas
NASA Astrophysics Data System (ADS)
Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.
2017-11-01
Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high-throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow approach using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated, in addition to observed differences in electric field-volume histograms. Furthermore, the mesh file formats MPHTXT and NASTRAN were also compared using the differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than with it, indicating that post-processing converges on the manually corrected model. A marginal but non-zero relative difference between electric field maps from models with and without manual correction was identified, and a clear advantage of using the NASTRAN mesh file format was found.
The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
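The Dice coefficient used above to compare segmentations has a one-line definition; a minimal sketch on two synthetic binary masks:

```python
import numpy as np

# Dice coefficient: 2 * |A intersect B| / (|A| + |B|), a standard overlap
# measure for comparing two segmentations of the same volume.
def dice(a, b):
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

mask_a = np.zeros((10, 10), int)
mask_b = np.zeros((10, 10), int)
mask_a[2:8, 2:8] = 1          # 36 voxels
mask_b[4:10, 4:10] = 1        # 36 voxels; overlap is a 4x4 = 16-voxel patch

d = dice(mask_a, mask_b)      # 2 * 16 / (36 + 36)
```

A value of 1 means identical masks; values near 1 are what indicate convergence of the automated segmentation on the manually corrected model.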
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
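The adaptive segment-wise updating idea can be sketched with a deliberately simple stand-in for the paper's models: a conjugate normal update of a model bias, applied only to observation segments that pass a residual-scatter validation check (all data and thresholds below are assumed):

```python
import numpy as np

# Sketch: the observation domain is split into segments, a validation
# check runs per segment, and the Bayesian update uses only segments
# where the model prediction is deemed reliable.
rng = np.random.default_rng(3)
true_bias = 2.0
x = np.linspace(0, 10, 200)
model_pred = 0.5 * x                          # computational model prediction
obs = model_pred + true_bias + rng.normal(0, 0.2, x.size)
bad = x > 8                                   # region where the model is invalid
obs[bad] += rng.normal(0, 2.0, bad.sum())     # scatter the model cannot explain

segments = np.array_split(np.arange(x.size), 10)

mu, var = 0.0, 100.0                          # prior on the bias: N(0, 10^2)
sigma2 = 0.2 ** 2                             # known observation noise variance
for idx in segments:
    residual = obs[idx] - model_pred[idx]
    # Validation: accept only if residual scatter is consistent with noise
    if residual.std(ddof=1) < 0.5:
        var_post = 1.0 / (1.0 / var + idx.size / sigma2)
        mu = var_post * (mu / var + residual.sum() / sigma2)
        var = var_post
```

The segments where the model breaks down are rejected by the validation check, so the posterior converges on the true bias instead of being corrupted, which is the increase in effectiveness the paper reports.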
Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images
Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.
2010-01-01
High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
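A simplified stand-in for the nuclei-extraction step (the paper uses a modified watershed): threshold the DNA channel and label connected components; the distance transform shown is what would seed a watershed to split touching nuclei. The synthetic image is an assumption:

```python
import numpy as np
from scipy import ndimage

# Synthetic DNA channel with two bright nuclei as disks.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 20) ** 2 + (xx - 20) ** 2 < 64] = 1.0    # nucleus 1, radius 8
img[(yy - 44) ** 2 + (xx - 44) ** 2 < 100] = 1.0   # nucleus 2, radius 10

# Threshold and label connected components to count nuclei.
mask = img > 0.5
labels, n_nuclei = ndimage.label(mask)

# Distance-transform peaks are the seeds a watershed would use to
# separate touching nuclei.
dist = ndimage.distance_transform_edt(mask)
```

For well-separated nuclei, labeling alone suffices; the watershed step in the paper matters precisely when nuclei touch and a single thresholded blob must be split at distance-transform ridges.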
Patient Segmentation Analysis Offers Significant Benefits For Integrated Care And Support.
Vuik, Sabine I; Mayer, Erik K; Darzi, Ara
2016-05-01
Integrated care aims to organize care around the patient instead of the provider. It is therefore crucial to understand differences across patients and their needs. Segmentation analysis that uses big data can help divide a patient population into distinct groups, which can then be targeted with care models and intervention programs tailored to their needs. In this article we explore the potential applications of patient segmentation in integrated care. We propose a framework for population strategies in integrated care-whole populations, subpopulations, and high-risk populations-and show how patient segmentation can support these strategies. Through international case examples, we illustrate practical considerations such as choosing a segmentation logic, accessing data, and tailoring care models. Important issues for policy makers to consider are trade-offs between simplicity and precision, trade-offs between customized and off-the-shelf solutions, and the availability of linked data sets. We conclude that segmentation can provide many benefits to integrated care, and we encourage policy makers to support its use. Project HOPE—The People-to-People Health Foundation, Inc.
Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT
NASA Astrophysics Data System (ADS)
Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.
2009-11-01
Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
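A crude stand-in for the propagating-front idea (not the paper's fast marching implementation): breadth-first region growing from a seed, constrained to dark, air-like voxels so the front stays inside the airway and does not leak into the brighter parenchyma:

```python
import numpy as np
from collections import deque

# Synthetic CT-like slice: bright parenchyma with a dark airway column.
img = np.full((32, 32), 200.0)
img[4:28, 14:18] = -900.0               # air-like (HU-ish) airway region

seed = (5, 15)
threshold = -500.0                      # accept only voxels darker than this
visited = np.zeros(img.shape, bool)
queue = deque([seed])
visited[seed] = True
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
            if not visited[rr, cc] and img[rr, cc] < threshold:
                visited[rr, cc] = True
                queue.append((rr, cc))

airway_voxels = int(visited.sum())
```

The intensity constraint is the simplest of the leak-prevention rules; the paper's method additionally tracks the front geometry to divide the growing tree into labeled segments.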
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2013-10-01
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.
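The marginalization strategy can be sketched on a toy problem: Metropolis-Hastings samples of an uncertain parameter are averaged into the final probability instead of fixing the parameter at a point estimate. The Gaussian model below is an illustrative assumption, not the paper's segmentation model:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
data = rng.normal(1.0, 1.0, 50)        # observations; likelihood N(theta, 1)

def log_post(theta):
    # N(0, 10^2) prior plus Gaussian log-likelihood (up to constants)
    return -theta ** 2 / 200.0 - 0.5 * np.sum((data - theta) ** 2)

# Random-walk Metropolis-Hastings over the uncertain parameter theta
samples = []
theta = 0.0
for i in range(5000):
    prop = theta + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    if i >= 1000:                      # discard burn-in
        samples.append(theta)
samples = np.asarray(samples)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Marginalized probability P(new observation > 0), averaged over the
# posterior samples of theta rather than evaluated at one point estimate.
p_marginal = np.mean([1.0 - norm_cdf(-s) for s in samples])
```

The spread of the samples also yields the "error bars" mentioned above: credible intervals on any quantity derived from the uncertain parameter.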
Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers
Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling
2017-01-01
Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the “all parallel” shielding coils with a 45° starting position have the best shielding performance, whereas the “separated loop” shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same. PMID:28587137
Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.
2015-01-01
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater, between-day and between-researcher reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa
2015-04-13
Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R(2)=0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of the stress-deformed condition of the disassembly parabolic antenna
NASA Astrophysics Data System (ADS)
Odinets, M. N.; Kaygorodtseva, N. V.; Krysova, I. V.
2018-01-01
Active development of satellite communications and computer-aided design systems has renewed interest in the design of parabolic antennas. The aim of the work was to investigate the influence of the design of a parabolic antenna's mirror on its endurance under wind load. The research task was an automated analysis of the stress-deformed condition of various computer models of a paraboloid mirror (segmented or holistic) under simulated operating conditions. The peculiarity of the research was that rigid connections were applied to the contacting surfaces of the segments in the assembly model of the antenna's mirror, and only then was the finite-element grid generated. The analysis showed the advantage of the demountable antenna design, which consists of cyclic segments, over the holistic antenna design. Calculation of the stress-deformed condition of the antennas allows us to conclude that dividing the antenna's mirror into parabolic and cyclic segments increases its strength and rigidity. In the future, this can be used to minimize the mass of the antenna and the dimensions of the disassembled antenna. The presented way of modeling a mirror of a parabolic antenna using the finite-element method can be used in the production of antennas.
Small rural hospitals: an example of market segmentation analysis.
Mainous, A G; Shelby, R L
1991-01-01
In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution.
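A two-group discriminant analysis of the kind used above can be sketched with Fisher's linear discriminant on two illustrative predictors (beds and employees). The data and threshold rule are invented for the example; the study's actual discriminant model used more variables.

```python
def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_lda(group_a, group_b):
    """Two-class Fisher discriminant on 2 features: w = Sw^-1 (ma - mb),
    with the decision threshold at the midpoint of the projected means."""
    ma, mb = mean_vec(group_a), mean_vec(group_b)
    # pooled within-class scatter matrix Sw (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((group_a, ma), (group_b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    t = 0.5 * (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1]))
    return w, t

def classify(x, w, t):
    """1 = 'successful' group, 0 = other, by the projected score."""
    return 1 if w[0] * x[0] + w[1] * x[1] > t else 0
```

With well-separated groups the projected scores split cleanly at the midpoint threshold, which is the sense in which the abstract's model "classifies hospitals with a high degree of predictive accuracy".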
A classification tree based modeling approach for segment related crashes on multilane highways.
Pande, Anurag; Abdel-Aty, Mohamed; Das, Abhishek
2010-10-01
This study presents a classification tree based alternative to crash frequency analysis for analyzing crashes on mid-block segments of multilane arterials. The traditional approach of modeling counts of crashes that occur over a period of time works well for intersection crashes, where each intersection itself provides a well-defined unit over which to aggregate the crash data. However, in the case of mid-block segments the crash frequency based approach requires segmentation of the arterial corridor into segments of arbitrary lengths. In this study we have used random samples of time, day of week, and location (i.e., milepost) combinations and compared them with the sample of crashes from the same arterial corridor. For crash and non-crash cases, geometric design/roadside and traffic characteristics were derived based on their milepost locations. The variables used in the analysis are non-event specific and therefore more relevant for roadway safety feature improvement programs. The first classification tree model compares all crashes with the non-crash data; then four groups of crashes (rear-end, lane-change related, pedestrian, and single-vehicle/off-road crashes) are separately compared to the non-crash cases. The classification tree models provide a list of significant variables as well as a measure to classify crash from non-crash cases. ADT along with time of day/day of week is significantly related to all crash types, with different groups of crashes being more likely to occur at different times. From the classification performance of different models it was apparent that using non-event specific information may not be suitable for single-vehicle/off-road crashes. The study provides the safety analysis community an additional tool to assess safety without having to aggregate the corridor crash data over arbitrary segment lengths. Copyright © 2010. Published by Elsevier Ltd.
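The impurity-based splitting that underlies such classification trees can be sketched for one predictor. This is a generic CART-style Gini split, not the study's fitted model; the example values standing in for ADT and crash labels are invented.

```python
def gini(labels):
    """Gini impurity of a 0/1 label set: 2p(1-p)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n  # fraction of crash (1) cases
    return 2 * p * (1 - p)

def best_split(values, labels):
    """Find the threshold on one variable (e.g. ADT) that most reduces
    Gini impurity when separating crash (1) from non-crash (0) cases.
    Returns (threshold, impurity decrease)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    parent = gini(labels)
    best = (None, 0.0)
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # no valid threshold between equal values
        thr = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [lab for v, lab in pairs[:i]]
        right = [lab for v, lab in pairs[i:]]
        child = (len(left) * gini(left) + len(right) * gini(right)) / n
        gain = parent - child
        if gain > best[1]:
            best = (thr, gain)
    return best
```

A full tree applies this search recursively over all candidate variables; the variables chosen at the top splits are the "list of significant variables" the abstract refers to.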
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
NASA Astrophysics Data System (ADS)
Febriani, F.; Handayani, L.; Setyani, A.; Anggono, T.; Syuhada; Soedjatmiko, B.
2018-03-01
The dimensionality and regional strike of the Cimandiri Fault, West Java, Indonesia, have been investigated. The Cimandiri Fault consists of six segments: the Loji, Cidadap, Nyalindung, Cibeber, Saguling, and Padalarang segments. The magnetotelluric (MT) investigation was carried out in the Cibeber segment, with 42 observation points of magnetotelluric data distributed along 2 lines. The magnetotelluric phase tensor has been applied to determine the dimensionality and regional strike of the Cibeber segment, Cimandiri Fault, West Java. The dimensionality analysis shows that the skew angle values, which indicate the dimensionality of the study area, lie in the range −5° ≤ β ≤ 5°. These values indicate that, in generating a subsurface model of the Cibeber segment from the magnetotelluric data, it is safe to assume that the Cibeber segment is 2-D. The regional strike analysis shows that the regional strike of the Cibeber segment is about N70-80°E.
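A sketch of the skew computation, assuming the usual phase-tensor definition Φ = X⁻¹Y for an impedance tensor Z = X + iY and skew angle β = ½ arctan((Φ₁₂ − Φ₂₁)/(Φ₁₁ + Φ₂₂)) (Caldwell et al.); the impedance values in the test are synthetic, not the Cibeber data.

```python
import math

def phase_tensor(Z):
    """Phase tensor Phi = X^-1 Y of a 2x2 complex impedance tensor Z = X + iY."""
    X = [[Z[i][j].real for j in range(2)] for i in range(2)]
    Y = [[Z[i][j].imag for j in range(2)] for i in range(2)]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    Xi = [[X[1][1] / det, -X[0][1] / det],
          [-X[1][0] / det, X[0][0] / det]]
    return [[sum(Xi[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def skew_beta_deg(Z):
    """Phase-tensor skew angle beta in degrees; small |beta| (a few degrees)
    is commonly read as consistent with a 2-D regional structure."""
    P = phase_tensor(Z)
    return 0.5 * math.degrees(math.atan2(P[0][1] - P[1][0], P[0][0] + P[1][1]))
```

For an ideal 2-D impedance (zero diagonal elements) the phase tensor is symmetric and β vanishes, which is why the |β| ≤ 5° criterion supports the 2-D assumption above.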
Marsiglia, Flavio F.; Kulis, Stephen; Kellison, Joshua G.
2010-01-01
Objectives. Under an ecodevelopmental framework, we examined lifetime segmented assimilation trajectories (diverging assimilation pathways influenced by prior life conditions) and related them to quality-of-life indicators in a diverse sample of 258 men in the Phoenix, AZ, metropolitan area. Methods. We used a growth mixture model analysis of lifetime changes in socioeconomic status, and used acculturation to identify distinct lifetime segmented assimilation trajectory groups, which we compared on life satisfaction, exercise, and dietary behaviors. We hypothesized that lifetime assimilation change toward mainstream American culture (upward assimilation) would be associated with favorable health outcomes, and downward assimilation change with unfavorable health outcomes. Results. A growth mixture model latent class analysis identified 4 distinct assimilation trajectory groups. In partial support of the study hypotheses, the extreme upward assimilation trajectory group (the most successful of the assimilation pathways) exhibited the highest life satisfaction and the lowest frequency of unhealthy food consumption. Conclusions. Upward segmented assimilation is associated in adulthood with certain positive health outcomes. This may be the first study to model upward and downward lifetime segmented assimilation trajectories, and to associate these with life satisfaction, exercise, and dietary behaviors. PMID:20167890
Selecting salient frames for spatiotemporal video modeling and segmentation.
Song, Xiaomu; Fan, Guoliang
2007-12-01
We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
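The EM core of such GMM-based modeling can be sketched in a reduced form: one dimension instead of the six-dimensional spatiotemporal feature space, two components, fixed unit variances, and without the paper's frame-saliency weighting. The data in the test are synthetic.

```python
import math

def em_gmm_1d(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture with fixed unit
    variances (for brevity); returns the mixing weight of component 1
    and the two component means."""
    mu = [min(x), max(x)]  # spread the initial means across the data
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for xi in x:
            p0 = (1 - w) * math.exp(-0.5 * (xi - mu[0]) ** 2)
            p1 = w * math.exp(-0.5 * (xi - mu[1]) ** 2)
            r.append(p1 / (p0 + p1))
        # M-step: re-estimate mixing weight and means from responsibilities
        w = sum(r) / len(x)
        mu[0] = sum((1 - ri) * xi for ri, xi in zip(r, x)) / sum(1 - ri for ri in r)
        mu[1] = sum(ri * xi for ri, xi in zip(r, x)) / sum(r)
    return w, mu
```

The paper's modification interleaves a saliency estimate per frame into this loop so that redundant frames contribute less to the M-step; the plain updates above are the baseline it modifies.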
Planning Inmarsat's second generation of spacecraft
NASA Astrophysics Data System (ADS)
Williams, W. P.
1982-09-01
Studies for the next generation of the Inmarsat service are outlined, covering traffic forecasting, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting will require future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc.), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.
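The Erlang formula mentioned above converts offered traffic into a blocking probability, which is how traffic demand becomes a required channel count. A minimal sketch using the standard Erlang B recursion; the traffic loads and grade-of-service target in the test are illustrative, not Inmarsat figures.

```python
def erlang_b(traffic, channels):
    """Erlang B blocking probability for `traffic` erlangs offered to
    `channels` circuits, via the stable recursion
    B(0) = 1, B(m) = A*B(m-1) / (m + A*B(m-1))."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (traffic * b) / (m + traffic * b)
    return b

def channels_needed(traffic, grade_of_service):
    """Smallest channel count keeping blocking at or below the target
    grade of service (e.g. 0.01 for 1% blocking)."""
    n = 1
    while erlang_b(traffic, n) > grade_of_service:
        n += 1
    return n
```

For example, 10 erlangs of peak-hour traffic at a 1% grade of service requires 18 channels under this formula, which is the kind of capacity figure the abstract describes deriving per ocean region.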
ECG signal analysis through hidden Markov models.
Andreão, Rodrigo V; Dorizzi, Bernadette; Boudy, Jérôme
2006-08-01
This paper presents an original hidden Markov model (HMM) approach for online beat segmentation and classification of electrocardiograms. The HMM framework was chosen for its ability to perform beat detection, segmentation, and classification, making it highly suitable to the electrocardiogram (ECG) problem. Our approach addresses a large panel of topics, some of them never studied before in other HMM-related works: waveform modeling, multichannel beat segmentation and classification, and unsupervised adaptation to the patient's ECG. The performance was evaluated on the two-channel QT database in terms of waveform segmentation precision, beat detection, and classification. Our waveform segmentation results compare favorably to other systems in the literature. We also obtained high beat detection performance, with a sensitivity of 99.79% and a positive predictivity of 99.96%, using a test set of 59 recordings. Moreover, premature ventricular contraction beats were detected using an original classification strategy. The results obtained validate our approach for real world application.
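The decoding step such an HMM approach relies on is Viterbi search: given per-sample emission log-likelihoods for each waveform state, find the most likely state path. This is a generic sketch, not the paper's trained models; the two-state setup (isoelectric baseline vs. a QRS-like deflection) and all probabilities in the test are toy values.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely hidden-state path for an observation sequence.
    log_emit maps each state to a function returning the observation
    log-likelihood; log_trans[p][s] is the log transition probability."""
    # initialization
    v = [{s: log_start[s] + log_emit[s](obs[0]) for s in states}]
    back = []
    # forward pass: best score and predecessor for each state at each step
    for o in obs[1:]:
        col, bp = {}, {}
        for s in states:
            prev, score = max(
                ((p, v[-1][p] + log_trans[p][s]) for p in states),
                key=lambda t: t[1])
            col[s] = score + log_emit[s](o)
            bp[s] = prev
        v.append(col)
        back.append(bp)
    # backtrack from the best final state
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

Segment boundaries (e.g. QRS onset and offset) then fall wherever the decoded path changes state, which is the sense in which HMM decoding yields beat segmentation.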
Meta-shell Approach for Constructing Lightweight and High Resolution X-Ray Optics
NASA Technical Reports Server (NTRS)
McClelland, Ryan S.
2016-01-01
Lightweight and high resolution optics are needed for future space-based x-ray telescopes to achieve advances in high-energy astrophysics. Past missions such as Chandra and XMM-Newton have achieved excellent angular resolution using a full shell mirror approach. Other missions such as Suzaku and NuSTAR have achieved lightweight mirrors using a segmented approach. This paper describes a new approach, called meta-shells, which combines the fabrication advantages of segmented optics with the alignment advantages of full shell optics. Meta-shells are built by layering overlapping mirror segments onto a central structural shell. The resulting optic has the stiffness and rotational symmetry of a full shell, but with an order of magnitude greater collecting area. Several meta-shells so constructed can be integrated into a large x-ray mirror assembly by proven methods used for Chandra and XMM-Newton. The mirror segments are mounted to the meta-shell using a novel four point semi-kinematic mount. The four point mount deterministically locates the segment in its most performance sensitive degrees of freedom. Extensive analysis has been performed to demonstrate the feasibility of the four point mount and meta-shell approach. A mathematical model of a meta-shell constructed with mirror segments bonded at four points and subject to launch loads has been developed to determine the optimal design parameters, namely bond size, mirror segment span, and number of layers per meta-shell. The parameters of an example 1.3 m diameter mirror assembly are given including the predicted effective area. To verify the mathematical model and support opto-mechanical analysis, a detailed finite element model of a meta-shell was created. Finite element analysis predicts low gravity distortion and low thermal distortion. Recent results are discussed including Structural Thermal Optical Performance (STOP) analysis as well as vibration and shock testing of prototype meta-shells.
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.
2011-03-01
Landmark based statistical object modeling techniques, such as Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and subsequently by subdividing the shapes into connected boundary segments by a line determined by these points. A shape analysis method β is applied on each segment to determine a landmark on the segment. This point introduces more pairs of points, and the lines defined by these pairs are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby automatically keeping correspondence among generated points through the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines are left to be considered that indicate (as per method β) that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.
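With the distance-based choice of method β, the recursion resembles farthest-point subdivision of a polyline: place a landmark at the boundary point farthest from the current chord, then recurse on the two halves. A 2-D sketch under that assumption, on a single invented boundary (the real method runs synchronously across all training shapes to preserve correspondence):

```python
def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = ((by - ay) ** 2 + (bx - ax) ** 2) ** 0.5
    return num / den

def rbs_landmarks(boundary, i, j, min_dist=0.5, out=None):
    """Recursively place a landmark at the boundary point farthest from
    the chord (i, j); stop on a segment when no point is farther than
    min_dist. Returns sorted landmark indices."""
    if out is None:
        out = [i, j]  # the two initial corresponding points (method alpha)
    seg = range(i + 1, j)
    if not seg:
        return sorted(out)
    k = max(seg, key=lambda m: point_line_dist(boundary[m], boundary[i], boundary[j]))
    if point_line_dist(boundary[k], boundary[i], boundary[j]) < min_dist:
        return sorted(out)  # segment is flat enough: no landmark selected
    out.append(k)
    rbs_landmarks(boundary, i, k, min_dist, out)
    rbs_landmarks(boundary, k, j, min_dist, out)
    return sorted(out)
```

Because every shape is subdivided at the same recursion level, the k-th landmark generated on one shape corresponds to the k-th on every other shape, which is how homology is maintained without explicit matching.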
Safety analysis of urban arterials at the meso level.
Li, Jia; Wang, Xuesong
2017-11-01
Urban arterials form the main structure of street networks. They typically have multiple lanes, high traffic volume, and high crash frequency. Classical crash prediction models investigate the relationship between arterial characteristics and traffic safety by treating road segments and intersections as isolated units. This micro-level analysis does not work when examining urban arterial crashes because signal spacing is typically short for urban arterials, and there are interactions between intersections and road segments that classical models do not accommodate. Signal spacing also has safety effects on both intersections and road segments that classical models cannot fully account for because they allocate crashes separately to intersections and road segments. In addition, classical models do not consider the impact on arterial safety of the immediately surrounding street network pattern. This study proposes a new modeling methodology that will offer an integrated treatment of intersections and road segments by combining signalized intersections and their adjacent road segments into a single unit based on road geometric design characteristics and operational conditions. These are called meso-level units because they offer an analytical approach between micro and macro. The safety effects of signal spacing and street network pattern were estimated for this study based on 118 meso-level units obtained from 21 urban arterials in Shanghai, and were examined using CAR (conditional auto regressive) models that corrected for spatial correlation among the units within individual arterials. Results showed shorter arterial signal spacing was associated with higher total and PDO (property damage only) crashes, while arterials with a greater number of parallel roads were associated with lower total, PDO, and injury crashes. The findings from this study can be used in the traffic safety planning, design, and management of urban arterials. Copyright © 2017 Elsevier Ltd. 
Liu, Hon-Man; Chen, Shan-Kai; Chen, Ya-Fang; Lee, Chung-Wei; Yeh, Lee-Ren
2016-01-01
Purpose: To assess the inter-session reproducibility of automatic segmented MRI-derived measures by FreeSurfer in a group of subjects with normal-appearing MR images. Materials and Methods: After retrospectively reviewing a brain MRI database from our institute consisting of 14,758 adults, those subjects who had repeat scans and had no history of neurodegenerative disorders were selected for morphometry analysis using FreeSurfer. A total of 34 subjects were grouped by MRI scanner model. After automatic segmentation using FreeSurfer, label-wise comparison (involving area, thickness, and volume) was performed on all segmented results. An intraclass correlation coefficient was used to estimate the agreement between sessions. The Wilcoxon signed rank test was used to assess the population mean rank differences across sessions. Mean-difference analysis was used to evaluate the difference intervals across scanners. Absolute percent difference was used to estimate the reproducibility errors across the MRI models. The Kruskal-Wallis test was used to determine the across-scanner effect. Results: The agreement in segmentation results for area, volume, and thickness measurements of all segmented anatomical labels was generally higher in the Signa Excite and Verio models when compared with the Sonata and TrioTim models. There were significant rank differences found across sessions in some labels of different measures. Smaller difference intervals in global volume measurements were noted on images acquired by the Signa Excite and Verio models. For some brain regions, significant MRI model effects were observed on certain segmentation results. Conclusions: Short-term scan-rescan reliability of automatic brain MRI morphometry is feasible in the clinical setting.
However, since repeatability of software performance is contingent on the reproducibility of the scanner performance, the scanner performance must be calibrated before conducting such studies or before using such software for retrospective reviewing. PMID:26812647
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi; Gu, Haozhong; Zhou, Xu
2000-08-01
The use of a special type of smart material, known as segmented constrained layer (SCL) damping, is investigated for improved rotor aeromechanical stability. The rotor blade load-carrying member is modeled using a composite box beam with arbitrary wall thickness. The SCLs are bonded to the upper and lower surfaces of the box beam to provide passive damping. A finite-element model based on a hybrid displacement theory is used to accurately capture the transverse shear effects in the composite primary structure and the viscoelastic and the piezoelectric layers within the SCL. Detailed numerical studies are presented to assess the influence of the number of actuators and their locations for improved aeromechanical stability. Ground and air resonance analysis models are implemented in the rotor blade built around the composite box beam with segmented SCLs. A classic ground resonance model and an air resonance model are used in the rotor-body coupled stability analysis. The Pitt dynamic inflow model is used in the air resonance analysis under hover condition. Results indicate that the surface bonded SCLs significantly increase rotor lead-lag regressive modal damping in the coupled rotor-body system.
Cohn, Wendy F; Lyman, Jason; Broshek, Donna K; Guterbock, Thomas M; Hartman, David; Kinzie, Mable; Mick, David; Pannone, Aaron; Sturz, Vanessa; Schubart, Jane; Garson, Arthur T
2018-01-01
Purpose: To develop a model, based on market segmentation, to improve the quality and efficiency of health promotion materials and programs. Design: Market segmentation to create segments (groups) based on a cross-sectional questionnaire measuring individual characteristics and preferences for health information; educational and delivery recommendations were developed for each group. Setting: General population of adults in Virginia. Participants: Random sample of 1201 Virginia residents; respondents are representative of the general population with the exception of older age. Measures: Multiple factors known to impact health promotion, including health status, health system utilization, health literacy, Internet use, learning styles, and preferences. Analysis: Cluster analysis and discriminant analysis to create and validate segments; common-sized means to compare factors across segments. Results: Educational and delivery recommendations were developed and matched to the 8 distinct segments. For example, the "health challenged and hard to reach" are older, lower literacy, and not likely to seek out health information. Their educational and delivery recommendations include a sixth-grade reading level, delivery through a provider, and using a "push" strategy. Conclusion: This model addresses a need to improve the efficiency and quality of health promotion efforts in an era of personalized medicine. It demonstrates that there are distinct groups with clearly defined educational and delivery recommendations. Health promotion professionals can consider Tailored Educational Approaches for Consumer Health to develop and deliver tailored materials to encourage behavior change.
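The cluster-analysis step of such segmentation can be sketched with a plain k-means on two illustrative questionnaire-derived features (say, a literacy score and an internet-use score). The features, records, and deterministic seeding are invented for the example; the study's actual analysis used more measures and a separate discriminant validation.

```python
def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Plain k-means: assign each record to its nearest centroid, then
    move each centroid to the mean of its members; repeat.
    Returns (centroids, assignments)."""
    centroids = points[:k]  # deterministic seed: the first k records
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, assign
```

Each resulting cluster is then profiled (mean literacy, preferred channels, and so on) to write the segment's educational and delivery recommendations.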
Wang, Lei; Zhang, Huimao; He, Kan; Chang, Yan; Yang, Xiaodong
2015-01-01
Active contour models are of great importance for image segmentation and can extract smooth and closed boundary contours of the desired objects with promising results. However, they cannot work well in the presence of intensity inhomogeneity. Hence, a novel region-based active contour model is proposed by taking image intensities and 'vesselness values' from local phase-based vesselness enhancement into account simultaneously to define a novel multi-feature Gaussian distribution fitting energy in this paper. This energy is then incorporated into a level set formulation with a regularization term for accurate segmentations. Experimental results based on publicly available STructured Analysis of the Retina (STARE) demonstrate our model is more accurate than some existing typical methods and can successfully segment most small vessels with varying width.
Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.
2005-01-01
The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution, and a comparison of results to pre-test predictions are discussed.
Sensitivity analysis for high-contrast missions with segmented telescopes
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn
2017-09-01
Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes operate in a more benign environment than ground-based telescopes, remaining vibrations and resonant modes of the segments can still deteriorate the contrast. In this communication, we develop and validate the analytical model, and compare its outputs to images produced by end-to-end simulations.
Cunningham, Charles E; Zipursky, Robert B; Christensen, Bruce K; Bieling, Peter J; Madsen, Victoria; Rimas, Heather; Mielko, Stephanie; Wilson, Fiona; Furimsky, Ivana; Jeffs, Lisa; Munn, Catharine
2017-01-01
We modeled design factors influencing the intent to use a university mental health service. Between November 2012 and October 2014, 909 undergraduates participated. Using a discrete choice experiment, participants chose between hypothetical campus mental health services. Latent class analysis identified three segments. A Psychological/Psychiatric Service segment (45.5%) was most likely to contact campus health services delivered by psychologists or psychiatrists. An Alternative Service segment (39.3%) preferred to talk to peer-counselors who had experienced mental health problems. A Hesitant segment (15.2%) reported greater distress but seemed less intent on seeking help. They preferred services delivered by psychologists or psychiatrists. Simulations predicted that, rather than waiting for standard counseling, the Alternative Service segment would prefer immediate access to E-Mental health. The Usual Care and Hesitant segments would wait 6 months for standard counseling. E-Mental Health options could engage students who may not wait for standard services.
Analysis and testing of a soft actuation system for segmented reflector articulation and isolation
NASA Technical Reports Server (NTRS)
Jandura, Louise; Agronin, Michael L.
1991-01-01
Segmented reflectors have been proposed for space-based applications such as optical communication and large-diameter telescopes. An actuation system for mirrors in a space-based segmented mirror array has been developed as part of the National Aeronautics and Space Administration-sponsored Precision Segmented Reflector program. The actuation system, called the Articulated Panel Module (APM), articulates a mirror panel in 3 degrees of freedom in the submicron regime, isolates the panel from structural motion, and simplifies space assembly of the mirrors to the reflector backup truss. A breadboard of the APM has been built and is described. Three-axis modeling, analysis, and testing of the breadboard are discussed.
Optomechanical design software for segmented mirrors
NASA Astrophysics Data System (ADS)
Marrero, Juan
2016-08-01
The software package presented in this paper, still under development, was created to help analyze the influence of the many parameters involved in the design of a large segmented mirror telescope. In summary, it is a set of tools that were added to a common framework as they were needed. Great emphasis has been placed on the graphical presentation, as scientific visualization nowadays cannot be conceived without a helpful 3D environment showing the analyzed system as close to reality as possible. Use of third-party software packages is limited to ANSYS, which needs to be available on the system only if FEM results are required. Among the various functionalities of the software, the following are worth mentioning: automatic 3D model construction of a segmented mirror from a set of parameters, geometric ray tracing, automatic 3D model construction of a telescope structure around the defined mirrors from a set of parameters, segmented mirror human access assessment, analysis of integration tolerances, assessment of segment collisions, structural deformation under gravity and thermal variation, mirror support system analysis including warping harness mechanisms, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The model is designed to enable decision makers to compare the economics of geothermal projects with the economics of alternative energy systems at an early stage in the decision process. The geothermal engineering and economic feasibility computer model (GEEF) is written in FORTRAN IV and can be run on a mainframe or a mini-computer system. An abbreviated version of the model is being developed for use with a programmable desk calculator. The GEEF model has two main segments, namely (i) the engineering design/cost segment and (ii) the economic analysis segment. In the engineering segment, the model determines the numbers of production and injection wells, the heat exchanger design, the operating parameters for the system, the requirement for a supplementary system (to augment the working fluid temperature if the resource temperature is not sufficiently high), and the fluid flow rates. The model can handle single-stage systems as well as two-stage cascaded systems in which the second stage may involve a space heating application after a process heat application in the first stage.
Improved brain tumor segmentation by utilizing tumor growth model in longitudinal brain MRI
NASA Astrophysics Data System (ADS)
Pei, Linmin; Reza, Syed M. S.; Li, Wei; Davatzikos, Christos; Iftekharuddin, Khan M.
2017-03-01
In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns that are generated from tumor growth modeling. To model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling obtains the cell density distribution that potentially indicates the predicted tissue locations in the brain over time. The density patterns are then treated as novel features, along with other texture features (such as fractal and multifractal Brownian motion (mBm) features) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method with about one hundred longitudinal MRI scans from five patients obtained from the public BRATS 2015 data set, validated against the ground truth. The results show significant improvement of complete tumor segmentation using ANOVA analysis for five patients in longitudinal MR images.
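As a rough illustration of the growth model described above: the paper solves the reaction-diffusion equation with a Lattice-Boltzmann method, but the sketch below instead uses a simple explicit finite-difference scheme for the 1-D Fisher-KPP form dc/dt = D d²c/dx² + ρ c(1 − c). All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def grow_tumor_density(steps=200, n=100, D=0.1, rho=0.05, dt=0.1, dx=1.0):
    """Evolve a 1-D cell density c via the reaction-diffusion equation
    dc/dt = D * d2c/dx2 + rho * c * (1 - c), seeded at the grid centre.
    Explicit finite differences with periodic boundaries; illustrative only."""
    c = np.zeros(n)
    c[n // 2] = 1.0  # initial tumor seed
    for _ in range(steps):
        lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (D * lap + rho * c * (1 - c))
    return c
```

The resulting density profile spreads outward from the seed over time; in the paper's pipeline, such per-voxel densities would serve as an extra feature channel alongside the texture and intensity features.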
Comparison of results of experimental research with numerical calculations of a model one-sided seal
NASA Astrophysics Data System (ADS)
Joachimiak, Damian; Krzyślak, Piotr
2015-06-01
This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different wear levels. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained during the measurements and calculated numerically in a model seal segment at different levels of wear.
Analysis of TMT primary mirror control-structure interaction
NASA Astrophysics Data System (ADS)
MacMynowski, Douglas G.; Thompson, Peter M.; Sirota, Mark J.
2008-07-01
The primary mirror control system (M1CS) keeps the 492 segments of the Thirty Meter Telescope primary mirror aligned in the presence of disturbances. A global position control loop uses feedback from inter-segment edge sensors to three actuators behind each segment that control segment piston, tip and tilt. If soft force actuators are used (e.g. voice-coil), then in addition to the global position loop there will be a local servo loop to provide stiffness. While the M1 control system at Keck compensates only for slow disturbances such as gravity and thermal variations, the M1CS for TMT will need to provide some compensation for higher frequency wind disturbances in order to meet stringent error budget targets. An analysis of expected high-wavenumber wind forces on M1 suggests that a 1Hz control bandwidth is required for the global feedback of segment edge-sensor-based position information in order to minimize high spatial frequency segment response for both seeing-limited and adaptive optics performance. A much higher bandwidth is required from the local servo loop to provide adequate stiffness to wind or acoustic disturbances. A related paper presents the control designs for the local actuator servo loops. The disturbance rejection requirements would not be difficult to achieve for a single segment, but the structural coupling between segments mounted on a flexible mirror cell results in control-structure interaction (CSI) that limits the achievable bandwidth. Using a combination of simplified modeling to build intuition and the full telescope finite element model for verification, we present designs and analysis for both the local servo loop and global loop demonstrating sufficient bandwidth and resulting wind-disturbance rejection despite the presence of CSI.
NASA Astrophysics Data System (ADS)
Jeong, Sinwoo; Cho, Jae Yong; Sung, Tae Hyun; Yoo, Hong Hee
2017-03-01
Conventional vibration-based piezoelectric energy harvesters (PEHs) have advantages including the ubiquity of their energy source and their ease of manufacturing. However, they have a critical disadvantage as well: they can produce a reasonable amount of power only if the excitation frequency is concentrated near a natural frequency of the PEH. Because the excitation frequency is often spread and/or variable, it is very difficult to successfully design a conventional PEH. In this paper, we propose a new cantilevered PEH whose design includes an attached mass and a segmented piezoelectric layer. By choosing a proper size and location for the attached mass, the gap between the first and second natural frequencies of the PEH can be decreased in order to broaden the effective excitation frequency range and thus to allow reasonable power generation. In particular, the output power performance improves significantly around the second natural frequency of the PEH, since the voltage cancellation effect can be made very weak by segmenting the piezoelectric layer at an appropriate location. To investigate the power performance of the new PEH, a reduced-order electromechanical analysis model is proposed herein and the accuracy of this model is validated experimentally. The effects of variable load resistance and piezoelectric layer segmentation location upon the power performance of the new PEH are investigated by means of the reduced-order analysis model.
A system for the analysis of foot and ankle kinematics during gait.
Kidder, S M; Abuzzahab, F S; Harris, G F; Johnson, J E
1996-03-01
A five-camera Vicon (Oxford Metrics, Oxford, England) motion analysis system was used to acquire foot and ankle motion data. Static resolution and accuracy were computed as 0.86 +/- 0.13 mm and 98.9%, while dynamic resolution and accuracy were 0.1 +/- 0.89 mm and 99.4% (sagittal plane). Spectral analysis revealed high-frequency noise and the need for a filter (6 Hz Butterworth low-pass) as used in similar clinical situations. A four-segment rigid body model of the foot and ankle was developed. The four rigid body foot model segments were 1) tibia and fibula, 2) calcaneus, talus, and navicular, 3) cuneiforms, cuboid, and metatarsals, and 4) hallux. The Euler method for describing relative foot and ankle segment orientation was utilized in order to maintain accuracy and ease of clinical application. Kinematic data from a single test subject are presented.
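The Euler-angle description of relative segment orientation mentioned above can be sketched as follows. This assumes a Z-Y-X (yaw-pitch-roll) rotation sequence, which is one common convention and not necessarily the one used in the paper:

```python
import numpy as np

def euler_zyx(R):
    """Extract Z-Y-X (yaw-pitch-roll) Euler angles, in radians, from a 3x3
    rotation matrix describing one body segment relative to another.
    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll), away from gimbal lock."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return yaw, pitch, roll
```

Given the rotation matrix between, say, the tibia/fibula and calcaneus segments, this decomposition yields clinically interpretable angles for each anatomical plane.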
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
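The coarse-level step rests on short-term features of the audio signal. The sketch below computes two standard ones, frame energy and zero-crossing rate, and applies a toy rule set; the thresholds and rules are illustrative assumptions, far simpler than the morphological/statistical analysis in the paper:

```python
import numpy as np

def short_term_features(signal, frame_len=256):
    """Frame-wise short-term energy and zero-crossing rate, two features
    commonly used for coarse audio classification."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return energy, zcr

def coarse_label(energy, zcr, e_sil=1e-4, z_speech=0.1):
    """Toy rule set (illustrative thresholds): silence has near-zero energy;
    noisy, speech-like signals show a high zero-crossing rate; the rest is
    left as music/environmental sound."""
    if energy.mean() < e_sil:
        return "silence"
    if zcr.mean() > z_speech:
        return "speech"
    return "music/environmental"
```

A real system would feed many such per-frame features into a trained classifier rather than fixed thresholds; this only shows where the features come from.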
Segmentation and determination of joint space width in foot radiographs
NASA Astrophysics Data System (ADS)
Schenk, O.; de Muinck Keizer, D. M.; Bernelot Moens, H. J.; Slump, C. H.
2016-03-01
Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are time-consuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution aims at foot radiographs. Starting from an earlier proposed automated segmentation method, we have developed a novel model-based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model compiles ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We have performed segmentation experiments using 24 foot radiographs, randomly selected from a large database from the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations were correctly determined. Segmentation was successful in only 14%. To improve results, a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75%; mean and standard deviation are 2.30 +/- 0.36 mm. This is a first step towards automated determination of progression of RA and therapy response in feet using radiographs.
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS's Scientific Manuscript database
A new machine vision for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
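The excessive-green conversion and statistical threshold computation mentioned above can be sketched as follows. The abstract does not specify which statistical method is used; Otsu's method is one common choice and is assumed here purely for illustration:

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2G - R - B, the excessive-green index used to highlight
    vegetation in an RGB image (float array, any channel range)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)           # class-0 (below threshold) probability
    w1 = 1 - w0                 # class-1 probability
    mu = np.cumsum(p * centers) # class-0 cumulative mean * w0
    mu_t = mu[-1]               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]
```

Thresholding the ExG image then yields a binary vegetation mask, to which the median filtering and adaptive adjustment described in the abstract would be applied.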
Tan, Weng Chun; Mat Isa, Nor Ashidi
2016-01-01
In human sperm motility analysis, sperm segmentation plays an important role in determining the locations of multiple sperm. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffers from sensitivity to parameter selection; thus, the ICM network is optimised using particle swarm optimization, where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods, achieving rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperm. The proposed algorithm is expected to be applied in analysing sperm motility because of its robustness and capability.
Chain-Wise Generalization of Road Networks Using Model Selection
NASA Astrophysics Data System (ADS)
Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.
2017-05-01
Streets are essential entities of urban terrain and their automatized extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image, representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are the so-called chains which better match to the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using
NASA Astrophysics Data System (ADS)
Panu, U. S.; Ng, W.; Rasmussen, P. F.
2009-12-01
The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., the Markov chain and the Alternating Renewal Process (ARP)) of the precipitation occurrence process generally assume the existence of short-term temporal dependency between neighboring states while implying the long-term independency (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by a fixed-length memory (e.g., the order of a Markov chain model) or by the reigning states in segments (e.g., persistency of homogeneous states within the dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required for the preservation of the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate the biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrences consisting of wet or dry states. Such flexibility provides a unique advantage over traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely the alternating renewal process using the Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences.
Comparisons involved three guiding principles, namely (i) the ability of the models to preserve the short-term temporal dependency in data through the concepts of autocorrelation, average mutual information, and the Hurst exponent, (ii) the ability of the models to preserve the persistency within homogeneous dry/wet weather states through analysis of dry/wet-spell lengths between the observed and generated data, and (iii) the ability to assess the goodness-of-fit of the models through likelihood estimates (i.e., AIC and BIC). Thirty years of observed daily precipitation records from 10 Canadian meteorological stations were utilized for comparative analyses of the three models. In general, the Markov chain model performed well. The remaining models were found to be competitive with one another depending upon the scope and purpose of the comparison. Although the Markov chain model has a certain advantage in the generation of daily precipitation occurrences, the structural flexibility offered by the Dictionary approach in modeling the varied segment lengths of heterogeneous weather states provides a distinct and powerful advantage in the generation of precipitation sequences.
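The second-order Markov chain model compared above conditions each day's wet/dry state on the previous two days. A minimal sketch of fitting and generating such a chain (states coded 0 = dry, 1 = wet; not the authors' implementation):

```python
import numpy as np

def fit_second_order(seq):
    """Estimate P(wet | previous two states) from a 0/1 (dry/wet) sequence
    by counting triples of consecutive states."""
    counts = np.zeros((2, 2, 2))
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[a, b, c] += 1
    totals = counts.sum(axis=2, keepdims=True)
    # Unseen history pairs default to P(wet) = 0 via the max() guard.
    return counts[..., 1] / np.maximum(totals[..., 0], 1)

def generate(p_wet, n, start=(0, 0), rng=None):
    """Generate n daily occurrences from the fitted second-order chain."""
    rng = rng or np.random.default_rng(0)
    out = list(start)
    for _ in range(n):
        out.append(int(rng.random() < p_wet[out[-2], out[-1]]))
    return out[2:]
```

The Dictionary approach generalizes this by letting the conditioning "words" have variable length instead of a fixed two-day memory.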
Figure-Ground Segmentation Using Factor Graphs
Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr
2009-01-01
Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting tumours efficiently and accurately, with less computational time, from benign and malignant tumour images, especially for smaller tumour regions in computed tomography (CT) images. Region-based segmentation of tumours from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumours from CT images using combined grey and texture features, with new edge features, and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracy is evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure, the Dice metric. From the analysis and performance measures such as segmentation accuracy and the Dice metric, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
Kilinç, Yeliz; Erkmen, Erkan; Kurt, Ahmet
2016-01-01
In this study, the biomechanical behavior of different fixation methods used to fix the mandibular anterior segment following various amounts of superior repositioning was evaluated using Finite Element Analysis (FEA). Three-dimensional finite element models representing 3 and 5 mm superior repositioning were generated. The gap between the segments was assumed to be filled by a block bone allograft designed to be in perfect contact with the mandible and the segmented bone. Six different finite element models were created, covering 2 distinct mobilization rates and 3 different fixation configurations: double right L (DRL), double left L (DLL), or double I (DI) miniplates with monocortical screws. A comparative evaluation was made under vertical, horizontal and oblique loads. The von Mises and principal maximum stress (Pmax) values were calculated by a finite element solver programme. The first part of our ongoing Finite Element Analysis research addressed the mechanical behavior of the same fixation configurations in non-grafted models. In comparison with the findings of the first part of the study, it was concluded that bone graft offers superior mechanical stability without any limitation of mobilization and with less stress on the fixative appliances as well as on the bone.
Mathematical models used in segmentation and fractal methods of 2-D ultrasound images
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Moraru, Luminita; Bibicu, Dorin
2012-11-01
Mathematical models are widely used in biomedical computing. Data extracted from images using mathematical techniques are the "pillar" supporting scientific progress in experimental, clinical, biomedical, and behavioural research. This article deals with the representation of 2-D images and highlights the mathematical support for the segmentation operation and fractal analysis in ultrasound images. A large number of mathematical techniques are suitable for application during the image processing stage. The addressed topics cover edge-based segmentation, more precisely gradient-based edge detection and the active contour model, and region-based segmentation, namely the Otsu method. Another interesting mathematical approach consists of analyzing the images using the Box Counting Method (BCM) to compute the fractal dimension. The results of the paper provide explicit samples produced by various combinations of methods.
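The Box Counting Method mentioned above estimates a fractal dimension by counting how many boxes of side s are needed to cover the structure and fitting log N(s) against log(1/s). A minimal sketch for a binary (e.g. edge) image, with illustrative box sizes:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary image: count
    occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Tile the image into s x s boxes and count those containing pixels.
        boxes = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields dimension ≈ 2 and a straight line ≈ 1; the interesting cases are tissue boundaries in ultrasound, whose dimension falls in between.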
Design oriented structural analysis
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1994-01-01
Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.
Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edges. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), the surface distance error (SDE) and the SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively.
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method needs only two pretreatment imaging data sets as prior knowledge, is independent of patient gender and treatment position, and allows local manual adaptation of the segmentation.
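The Bland-Altman error analysis mentioned above compares paired volume measurements via the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD). A minimal sketch in Python; the bladder volumes below are invented illustration values, not the study's data:

```python
from math import sqrt

def bland_altman(a, b):
    """Return (bias, lower, upper) 95% limits of agreement for paired lists."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

manual = [210.0, 180.5, 250.2, 199.0, 230.4]   # manual delineation volumes (cm^3)
auto = [205.1, 184.0, 247.9, 202.3, 228.8]     # automatic segmentation volumes
bias, lo, hi = bland_altman(manual, auto)
```

If most differences fall inside (lo, hi), the two measurement methods are considered to agree within the stated limits.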
Haga, Yutaka; Dominique, Vincent J; Du, Shao Jun
2009-10-01
To characterize the process of vertebral segmentation and disc formation in living animals, we analyzed tiggy-winkle hedgehog (twhh):green fluorescent protein (gfp) and sonic hedgehog (shh):gfp transgenic zebrafish models that display notochord-specific GFP expression. We found that they showed distinct patterns of expression in the intervertebral discs of late-stage fish larvae and adult zebrafish. A segmented pattern of GFP expression was detected in the intervertebral disc of twhh:gfp transgenic fish. In contrast, little GFP expression was found in the intervertebral disc of shh:gfp transgenic fish. Treating twhh:gfp transgenic zebrafish larvae with exogenous retinoic acid (RA), a factor teratogenic to normal development, resulted in disruption of notochord segmentation and the formation of oversized vertebrae. Histological analysis revealed that the oversized vertebrae are likely due to vertebral fusion. These studies demonstrate that the twhh:gfp transgenic zebrafish is a useful model for studying vertebral segmentation and disc formation, and moreover, that RA signaling may play a role in this process.
A wavelet-based Bayesian framework for 3D object segmentation in microscopy
NASA Astrophysics Data System (ADS)
Pan, Kangyu; Corrigan, David; Hillebrand, Jens; Ramaswami, Mani; Kokaram, Anil
2012-03-01
In confocal microscopy, target objects are labeled with fluorescent markers in the living specimen and usually appear with irregular brightness in the observed images. Moreover, because out-of-focus objects are present in the images, the segmentation of 3-D objects in the stack of image slices captured at different depth levels of the specimen still relies heavily on manual analysis. In this paper, a novel Bayesian model is proposed for segmenting 3-D synaptic objects from a given image stack. To address the irregular-brightness and out-of-focus problems, the segmentation model employs a likelihood using the luminance-invariant 'wavelet features' of image objects in the dual-tree complex wavelet domain, as well as a likelihood based on the vertical intensity profile of the image stack in 3-D. Furthermore, a smoothness 'frame' prior based on the a priori knowledge of the connections of the synapses is introduced to the model to enhance the connectivity of the synapses. As a result, our model can successfully segment the in-focus target synaptic object from a 3-D image stack with irregular brightness.
Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.
Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A
2017-12-01
Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
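The contrast between crisp and partial memberships can be illustrated with a toy sketch (this is not the PM-LDA estimator itself): each visual word carries a membership vector over topics that sums to one, and a crisp segmentation falls out as the special case of keeping only the dominant topic. The topic scores below are invented.

```python
def soft_memberships(scores):
    """Normalize nonnegative topic scores into memberships that sum to 1."""
    out = []
    for row in scores:
        s = sum(row)
        out.append([v / s for v in row])
    return out

def crisp_labels(memberships):
    """Collapse soft memberships to a crisp label: the dominant topic index."""
    return [max(range(len(row)), key=row.__getitem__) for row in memberships]

# three visual words scored against two topics (e.g., "sky" vs "sand")
scores = [[9.0, 1.0], [5.0, 5.0], [2.0, 8.0]]
soft = soft_memberships(scores)
hard = crisp_labels(soft)
```

The middle word is a genuine mixture (0.5/0.5); forcing it into a crisp label discards exactly the information a partial-membership model retains.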
NASA Technical Reports Server (NTRS)
Orr, R. S.
1984-01-01
Tracking and data acquisition system (TDAS) requirements, TDAS architectural goals, enhanced TDAS subsystems, constellation and networking options, TDAS spacecraft options, crosslink implementation, baseline TDAS space segment architecture, and threat model development/security analysis are addressed.
Modeling and analysis of the TF30-P-3 compressor system with inlet pressure distortion
NASA Technical Reports Server (NTRS)
Mazzawy, R. S.; Banks, G. A.
1976-01-01
Circumferential inlet distortion testing of a TF30-P-3 afterburning turbofan engine was conducted at NASA-Lewis Research Center. Pratt and Whitney Aircraft analyzed the data using its multiple segment parallel compressor model and classical compressor theory. Distortion attenuation analysis resulted in a detailed flow field calculation with good agreement between multiple segment model predictions and the test data. Sensitivity of the engine stall line to circumferential inlet distortion was calculated on the basis of parallel compressor theory to be more severe than indicated by the data. However, the calculated stall site location was in agreement with high response instrumentation measurements.
Modeling and analysis of passive dynamic bipedal walking with segmented feet and compliant joints
NASA Astrophysics Data System (ADS)
Huang, Yan; Wang, Qi-Ning; Gao, Yue; Xie, Guang-Ming
2012-10-01
Passive dynamic walking has been developed as a possible explanation for the efficiency of the human gait. This paper presents a passive dynamic walking model with segmented feet, which brings the bipedal walking gait closer to a natural human-like gait. The proposed model extends the simplest walking model with the addition of flat feet and torsional-spring-based compliance at the ankle and toe joints, to achieve stable walking on a slope driven by gravity. The push-off phase includes foot rotations around the toe joint and around the toe tip, which shows a great resemblance to normal human walking. This paper investigates the effects of the segmented foot structure on bipedal walking in simulations. The model achieves satisfactory walking results on even and uneven slopes.
Model Uncertainty and Test of a Segmented Mirror Telescope
2014-03-01
Optical Telescope project; EOM: equation of motion; FCA: fine control actuator; FCD: Face-Centered Cubic Design; FEA: finite element analysis; FEM: finite... housed in a dark tent to isolate the telescope from stray light, air currents, or dust and other debris. However, the closed volume is prone to... is composed of six hexagonal segments that each have six coarse control actuators (CCA) for segment phasing control, three fine control actuators
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally, which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semi-local and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
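Stripped of the localization machinery, the shape-prior idea reduces to PCA over training shapes represented as flattened signed distance values. A toy sketch under that assumption, using plain power iteration for the leading mode (a deterministic basis-vector start is used; a random start would be safer for data whose leading mode happens to be orthogonal to it):

```python
def pca_first_mode(shapes, iters=200):
    """Mean shape and leading PCA mode of row vectors, via power iteration."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[j] for s in shapes) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in shapes]
    v = [1.0] + [0.0] * (d - 1)   # deterministic start along the first axis
    for _ in range(iters):
        # apply the covariance C = X^T X / n to v without forming C
        proj = [sum(c[j] * v[j] for j in range(d)) for c in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# three tiny "shapes" (signed distance values at 3 sample points)
shapes = [[1.0, 0.0, -1.0], [2.0, 0.0, -2.0], [3.0, 0.0, -3.0]]
mean, mode = pca_first_mode(shapes)
```

New shapes are then expressed as the mean plus a small number of mode coefficients, which is what constrains the evolving curve to plausible shapes.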
NASA Astrophysics Data System (ADS)
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various types of information (gradient, intensity distributions, and regional-property terms) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multiresolution algorithm, using Gaussian-filtered and wavelet-analysis scaled versions of the image, is proposed that extends the expectation maximization (EM) algorithm. It is found to be less sensitive to noise and to yield more accurate image segmentations than traditional EM. Moreover, the algorithm has been applied to 20 CT datasets of the human brain and compared with other works. The segmentation results show the advantages of the proposed work, which achieves more promising results, and the results have been reviewed by medical doctors.
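The EM building block that the paper extends can be sketched for the simplest case: a two-component 1-D Gaussian mixture over pixel intensities, alternating soft assignment (E-step) and weighted parameter updates (M-step). The intensity values are invented, not CT data:

```python
from math import exp, pi, sqrt

def em_gmm(x, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to data x by EM."""
    m = [min(x), max(x)]          # component means, spread-out initialization
    v = [1.0, 1.0]                # component variances
    w = [0.5, 0.5]                # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        r = []
        for xi in x:
            p = [w[k] * exp(-(xi - m[k]) ** 2 / (2 * v[k])) / sqrt(2 * pi * v[k])
                 for k in (0, 1)]
            s = p[0] + p[1]
            r.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted updates of means, variances, weights
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            m[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            v[k] = max(sum(ri[k] * (xi - m[k]) ** 2
                           for ri, xi in zip(r, x)) / nk, 1e-6)
            w[k] = nk / len(x)
    return m, v, w

intensities = [10.0, 11.0, 9.5, 30.0, 31.0, 29.5]   # two tissue classes
means, variances, weights = em_gmm(intensities)
```

The multiresolution extension described in the abstract runs this kind of update over Gaussian- and wavelet-smoothed versions of the image, which is what damps the sensitivity to noise.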
Range image segmentation using Zernike moment-based generalized edge detector
NASA Technical Reports Server (NTRS)
Ghosal, S.; Mehrotra, R.
1992-01-01
The authors proposed a novel Zernike moment-based generalized step edge detection method that can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all of these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels, daily and hourly; the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and that the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
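Poisson-lognormal crash-frequency models relate the expected crash count to covariates through a log link. A minimal sketch of that mean function, ignoring the lognormal random effect; the coefficients are invented for illustration, not estimates from the study:

```python
from math import exp, log

def expected_crashes(volume, seg_length, beta):
    """Poisson mean under a log link:
    lambda = exp(b0 + b1*ln(volume) + b2*ln(segment length)).
    Coefficients are illustrative only."""
    b0, b1, b2 = beta
    return exp(b0 + b1 * log(volume) + b2 * log(seg_length))

# with b1 = b2 = 1 and b0 = 0 the mean is simply volume * length,
# an easy sanity check on the log-link form
lam = expected_crashes(10.0, 2.0, (0.0, 1.0, 1.0))
```

The logarithm-of-volume and logarithm-of-segment-length covariates named in the abstract enter exactly this way, which is why their coefficients read as elasticities of the expected crash frequency.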
Habas, Piotr A.; Kim, Kio; Corbett-Detig, James M.; Rousseau, Francois; Glenn, Orit A.; Barkovich, A. James; Studholme, Colin
2010-01-01
Modeling and analysis of MR images of the developing human brain is a challenge due to rapid changes in brain morphology and morphometry. We present an approach to the construction of a spatiotemporal atlas of the fetal brain with temporal models of MR intensity, tissue probability and shape changes. This spatiotemporal model is created from a set of reconstructed MR images of fetal subjects with different gestational ages. Groupwise registration of manual segmentations and voxelwise nonlinear modeling allow us to capture the appearance, disappearance and spatial variation of brain structures over time. Applying this model to atlas-based segmentation, we generate age-specific MR templates and tissue probability maps and use them to initialize automatic tissue delineation in new MR images. The choice of model parameters and the final performance are evaluated using clinical MR scans of young fetuses with gestational ages ranging from 20.57 to 24.71 weeks. Experimental results indicate that quadratic temporal models can correctly capture growth-related changes in the fetal brain anatomy and provide improvement in accuracy of atlas-based tissue segmentation. PMID:20600970
Model-based segmentation of hand radiographs
NASA Astrophysics Data System (ADS)
Weiler, Frank; Vogelsang, Frank
1998-06-01
An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of skeletal maturity with an appropriate database of reference bones, similar to the atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated in a complex hierarchical image model that is used for the image analysis task.
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit (ITK) and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied outperforms the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
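Two of the overlap metrics commonly used when validating an automatic segmentation against a manual reference, the Dice and Jaccard coefficients, can be sketched on binary masks (the pixel labels below are toy values):

```python
def dice(a, b):
    """Dice similarity coefficient of two binary masks (flat 0/1 lists)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union

manual = [1, 1, 1, 0, 0, 0]
automatic = [1, 1, 0, 1, 0, 0]
```

Both metrics range from 0 (no overlap) to 1 (identical masks); Dice weights the intersection more heavily, which is why it is the more forgiving of the two on the same pair of masks.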
NASA Astrophysics Data System (ADS)
Bennett, S. E. K.; DuRoss, C. B.; Reitman, N. G.; Devore, J. R.; Hiscock, A.; Gold, R. D.; Briggs, R. W.; Personius, S. F.
2014-12-01
Paleoseismic data near fault segment boundaries constrain the extent of past surface ruptures and the persistence of rupture termination at segment boundaries. Paleoseismic evidence for large (M≥7.0) earthquakes on the central Holocene-active fault segments of the 350-km-long Wasatch fault zone (WFZ) generally supports single-segment ruptures but also permits multi-segment rupture scenarios. The extent and frequency of ruptures that span segment boundaries remains poorly known, adding uncertainty to seismic hazard models for this populated region of Utah. To address these uncertainties we conducted four paleoseismic investigations near the Salt Lake City-Provo and Provo-Nephi segment boundaries of the WFZ. We examined an exposure of the WFZ at Maple Canyon (Woodland Hills, UT) and excavated the Flat Canyon trench (Salem, UT), 7 and 11 km, respectively, from the southern tip of the Provo segment. We document evidence for at least five earthquakes at Maple Canyon and four to seven earthquakes that post-date mid-Holocene fan deposits at Flat Canyon. These earthquake chronologies will be compared to seven earthquakes observed in previous trenches on the northern Nephi segment to assess rupture correlation across the Provo-Nephi segment boundary. To assess rupture correlation across the Salt Lake City-Provo segment boundary we excavated the Alpine trench (Alpine, UT), 1 km from the northern tip of the Provo segment, and the Corner Canyon trench (Draper, UT) 1 km from the southern tip of the Salt Lake City segment. We document evidence for six earthquakes at both sites. Ongoing geochronologic analysis (14C, optically stimulated luminescence) will constrain earthquake chronologies and help identify through-going ruptures across these segment boundaries. Analysis of new high-resolution (0.5m) airborne LiDAR along the entire WFZ will quantify latest Quaternary displacements and slip rates and document spatial and temporal slip patterns near fault segment boundaries.
Pre-operative segmentation of neck CT datasets for the planning of neck dissections
NASA Astrophysics Data System (ADS)
Cordes, Jeanette; Dornheim, Jana; Preim, Bernhard; Hertel, Ilka; Strauss, Gero
2006-03-01
For the pre-operative segmentation of CT neck datasets, we developed the software assistant NeckVision. The relevant anatomical structures for neck dissection planning can be segmented, and the resulting patient-specific 3D models are visualized afterwards in another software system for intervention planning. As a first step, we examined the appropriateness of elementary segmentation techniques based on gray values and contour information to extract the structures in the neck region from CT data. Region growing, interactive watershed transformation and live-wire are employed for segmentation of the different target structures. It is also examined which of the segmentation tasks can be automated. Based on this analysis, the software assistant NeckVision was developed to optimally support the workflow of image analysis for clinicians. The usability of NeckVision was tested in a first evaluation with four otorhinolaryngologists from the University Hospital of Leipzig, four computer scientists from the University of Magdeburg, and two laymen in both fields.
Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei
2018-06-01
Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance, such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and then combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets of 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results using two experts' manual reference segmentations. For both the nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD).
The proposed semiautomatic segmentation algorithm showed fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
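The spherical-coordinate correspondence can be illustrated in a simplified form: each surface point is expressed as (r, theta, phi) about the gland centroid, so points from different subjects can be matched by angular position. A toy sketch with invented points, not prostate data:

```python
from math import acos, atan2, sqrt

def to_spherical(points):
    """Express 3-D points as (r, theta, phi) about their centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    out = []
    for x, y, z in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        r = sqrt(dx * dx + dy * dy + dz * dz)
        theta = acos(dz / r) if r > 0 else 0.0   # polar angle from +z
        phi = atan2(dy, dx)                      # azimuth in the xy-plane
        out.append((r, theta, phi))
    return out

points = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
sph = to_spherical(points)
```

Binning surface points by (theta, phi) gives every subject the same angular grid, which is what makes a point distribution model across subjects possible.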
Structural constraints in the packaging of bluetongue virus genomic segments
Burkhardt, Christiane; Sung, Po-Yu; Celma, Cristina C.
2014-01-01
The mechanism used by bluetongue virus (BTV) to ensure the sorting and packaging of its 10 genomic segments is still poorly understood. In this study, we investigated the packaging constraints for two BTV genomic segments from two different serotypes. Segment 4 (S4) of BTV serotype 9 was mutated sequentially and packaging of mutant ssRNAs was investigated by two newly developed RNA packaging assay systems, one in vivo and the other in vitro. Modelling of the mutated ssRNA followed by biochemical data analysis suggested that a conformational motif formed by interaction of the 5′ and 3′ ends of the molecule was necessary and sufficient for packaging. A similar structural signal was also identified in S8 of BTV serotype 1. Furthermore, the same conformational analysis of secondary structures for positive-sense ssRNAs was used to generate a chimeric segment that maintained the putative packaging motif but contained unrelated internal sequences. This chimeric segment was packaged successfully, confirming that the motif identified directs the correct packaging of the segment. PMID:24980574
Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella
2003-03-01
The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the best essence chosen was used in the revised commercial bubble bath. Afterwards, the effect of changing the amount of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor one. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed us to identify the best formulations for the two segments of the market.
Modeling the relaxation of internal DNA segments during genome mapping in nanochannels.
Jain, Aashish; Sheats, Julian; Reifenberger, Jeffrey G; Cao, Han; Dorfman, Kevin D
2016-09-01
We have developed a multi-scale model describing the dynamics of internal segments of DNA in nanochannels used for genome mapping. In addition to the channel geometry, the model takes as its inputs the DNA properties in free solution (persistence length, effective width, molecular weight, and segmental hydrodynamic radius) and buffer properties (temperature and viscosity). Using pruned-enriched Rosenbluth simulations of a discrete wormlike chain model with circa 10 base pair resolution and a numerical solution for the hydrodynamic interactions in confinement, we convert these experimentally available inputs into the necessary parameters for a one-dimensional, Rouse-like model of the confined chain. The resulting coarse-grained model resolves the DNA at a length scale of approximately 6 kilobase pairs in the absence of any global hairpin folds, and is readily studied using a normal-mode analysis or Brownian dynamics simulations. The Rouse-like model successfully reproduces both the trends and order of magnitude of the relaxation time of the distance between labeled segments of DNA obtained in experiments. The model also provides insights that are not readily accessible from experiments, such as the role of the molecular weight of the DNA and location of the labeled segments that impact the statistical models used to construct genome maps from data acquired in nanochannels. The multi-scale approach used here, while focused towards a technologically relevant scenario, is readily adapted to other channel sizes and polymers.
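For reference, the relaxation spectrum of the standard free-draining discrete Rouse chain, a simpler setting than the paper's confined, hydrodynamically corrected model, has the closed form tau_p = zeta / (4 k sin^2(p*pi/(2N))) for modes p = 1..N-1, with bead friction zeta and spring constant k. A sketch under that textbook assumption:

```python
from math import pi, sin

def rouse_relaxation_times(N, zeta, k):
    """Relaxation times of a free-draining discrete Rouse chain of N beads:
    tau_p = zeta / (4 k sin^2(p*pi/(2N))), p = 1..N-1 (illustrative units)."""
    return [zeta / (4.0 * k * sin(p * pi / (2.0 * N)) ** 2)
            for p in range(1, N)]

taus = rouse_relaxation_times(N=50, zeta=1.0, k=1.0)
```

The longest time scales as N^2, which is the qualitative reason the relaxation of widely separated labeled segments is dominated by the low-index modes a normal-mode analysis isolates.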
1989-03-01
[Figure: Automated Photointerpretation Testbed (knowledge/inference engine, image database); Fig. 4.1.1-2, An Initial Segmentation of an Image]
...(MRF) theory provide a powerful alternative texture model and have resulted in intensive research activity in MRF model-based texture analysis... interpretation process. 5. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. 6. Object detection
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
Lee, Haofu; Nguyen, Alan; Hong, Christine; Hoang, Paul; Pham, John; Ting, Kang
2017-01-01
Introduction: The aims of this study were to evaluate the effects of rapid palatal expansion on the craniofacial skeleton of a patient with unilateral cleft lip and palate (UCLP) and to predict the points of force application for optimal expansion using a 3-dimensional finite element model. Methods: A 3-dimensional finite element model of the craniofacial complex with UCLP was generated from spiral computed tomographic scans with imaging software (Mimics, version 13.1; Materialise, Leuven, Belgium). This model was imported into the finite element solver (version 12.0; ANSYS, Canonsburg, Pa) to evaluate transverse expansion forces from rapid palatal expansion. Finite element analysis was performed with transverse expansion to achieve 5 mm of anterolateral expansion of the collapsed minor segment to simulate correction of the anterior crossbite in a patient with UCLP. Results: High-stress concentrations were observed at the body of the sphenoid, medial to the orbit, and at the inferior area of the zygomatic process of the maxilla. The craniofacial stress distribution was asymmetric, with higher stress levels on the cleft side. When forces were applied more anteriorly on the collapsed minor segment and more posteriorly on the major segment, there was greater expansion of the anterior region of the minor segment with minimal expansion of the major segment. Conclusions: The transverse expansion forces from rapid palatal expansion are distributed to the 3 maxillary buttresses. Finite element analysis is an appropriate tool to study and predict the points of force application for better controlled expansion in patients with UCLP. PMID:27476365
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern used during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimating the normal at each point, which limits the propagation of normal-estimation errors into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
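The normal-variation grouping step described above can be sketched in miniature. The following Python is an illustrative sketch, not the authors' implementation; the grid layout, the per-cell unit normals, and the 10° angle threshold are all assumptions. It grows regions across a gridded scan wherever neighbouring normals agree:

```python
from collections import deque
import math

def region_grow(normals, angle_thresh_deg=10.0):
    """Group grid cells into segments where adjacent normals differ by
    less than angle_thresh_deg -- a proxy for lying on one smooth surface."""
    rows, cols = len(normals), len(normals[0])
    labels = [[-1] * cols for _ in range(rows)]
    thresh = math.cos(math.radians(angle_thresh_deg))
    label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != -1:
                continue
            queue = deque([(r, c)])
            labels[r][c] = label
            while queue:
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and labels[ni][nj] == -1:
                        # dot product of unit normals = cosine of angle between them
                        dot = sum(a * b for a, b in zip(normals[i][j], normals[ni][nj]))
                        if dot >= thresh:
                            labels[ni][nj] = label
                            queue.append((ni, nj))
            label += 1
    return labels, label
```

Working on the scan grid directly, as here, is what lets the method avoid estimating a normal at every 3D point.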
Market segmentation strategy in internet market
NASA Astrophysics Data System (ADS)
Ren, Yawei; Yang, Deli; Diao, Xinjun
2010-04-01
This paper presents a model to describe the competitive dynamics of websites in the WWW market and analyzes the stability of the model, which comprises one powerful site and two small sites. One of the most important results that emerges from this simple model is that strong competition among websites does not necessarily lead to the demise of small websites in the WWW market. From the stability analysis of the model, we obtain a series of conditions under which small sites can gain competitive advantage by using a market segmentation strategy.
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy; Muramoto, Kyle M.
1990-01-01
Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models, with different element densities, set up for one cell of the orbiter wing. Also, a method for optimizing the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled to examine thermal analysis solution accuracy and the extent of the CPU time required. The results showed that the distributions of structural temperatures and thermal stresses obtained from this wing segment model were satisfactory and that the computation CPU time was at an acceptable level. The studies offered the hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is feasible if a CPU optimization technique is used.
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
Cerebrovascular plaque segmentation using object class uncertainty snake in MR images
NASA Astrophysics Data System (ADS)
Das, Bipul; Saha, Punam K.; Wolf, Ronald; Song, Hee Kwon; Wright, Alexander C.; Wehrli, Felix W.
2005-04-01
Atherosclerotic cerebrovascular disease leads to the formation of lipid-laden plaques that can form emboli when ruptured, causing blockage of cerebral vessels. The clinical manifestation of this event sequence is stroke, a leading cause of disability and death. In vivo MR imaging provides a detailed image of the vascular architecture of the carotid artery, making it suitable for analysis of morphological features. Assessing the status of the carotid arteries that supply blood to the brain is of primary interest to such investigations. Reproducible quantification of carotid artery dimensions in MR images is essential for plaque analysis. Manual segmentation, presently the only method in use, is time consuming and sensitive to inter- and intra-observer variability. This paper presents a deformable model for lumen and vessel wall segmentation of the carotid artery from MR images. The major challenges of carotid artery segmentation are (a) low signal-to-noise ratio, (b) background intensity inhomogeneity and (c) indistinct inner and/or outer vessel wall. We propose a new, effective object-class uncertainty based deformable model with additional features tailored toward this specific application. Object-class uncertainty optimally utilizes MR intensity characteristics of various anatomic entities, enabling the snake to avert leakage through fuzzy boundaries. To strengthen the deformable model for this application, further properties are attributed to it in the form of (1) fully arc-based deformation using a Gaussian model to maximally exploit vessel wall smoothness, (2) construction of a forbidden region for outer-wall segmentation to reduce interference by prominent lumen features and (3) arc-based landmarks for efficient user interaction. The algorithm has been tested on T1- and PD-weighted images. Measures of lumen area and vessel wall area are computed from segmented data of 10 patient MR images and their accuracy and reproducibility are examined.
These results correspond exceptionally well with manual segmentation completed by radiology experts. Reproducibility of the proposed method is estimated for both intra- and inter-operator studies.
Accident models for two-lane rural roads : segments and intersections
DOT National Transportation Integrated Search
1998-10-01
This report is a direct step toward the implementation of the Accident Analysis Module in the Interactive Highway Safety Design Model (IHSDM). The Accident Analysis Module is expected to estimate the safety of two-lane rural highway characteristics for ...
A mathematical analysis to address the 6 degree-of-freedom segmental power imbalance.
Ebrahimi, Anahid; Collins, John D; Kepple, Thomas M; Takahashi, Kota Z; Higginson, Jill S; Stanhope, Steven J
2018-01-03
Segmental power is used in human movement analyses to indicate the source and net rate of energy transfer between the rigid bodies of biomechanical models. Segmental power calculations are performed using segment endpoint dynamics (kinetic method). A theoretically equivalent method is to measure the rate of change in a segment's mechanical energy state (kinematic method). However, these two methods have not produced experimentally equivalent results for segments proximal to the foot, with the difference between methods deemed the "power imbalance." In a 6 degree-of-freedom model, segments move independently, resulting in relative segment endpoint displacement and non-equivalent segment endpoint velocities at a joint. In the kinetic method, a segment's distal end translational velocity may be defined either at the anatomical end of the segment or at the location of the joint center (defined here as the proximal end of the adjacent distal segment). Our mathematical derivations revealed that the power imbalance between the kinetic method using the anatomical definition and the kinematic method can be explained by power due to relative segment endpoint displacement. In this study, we tested this analytical prediction using experimental gait data from nine healthy subjects walking at a typical speed. The average absolute segmental power imbalance was reduced from 0.023-0.046 W/kg using the anatomical definition to ≤0.001 W/kg using the joint center definition in the kinetic method (a 95.56-98.39% reduction). Power due to relative segment endpoint displacement in segmental power analyses is substantial and should be considered in analyzing energetic flow into and between segments. Copyright © 2017 Elsevier Ltd. All rights reserved.
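The endpoint power terms at issue follow directly from rigid-body mechanics. A minimal sketch (2D scalars for brevity; all numbers and function names below are hypothetical illustrations, not the study's data or code) of the kinetic-method endpoint power and of the extra power introduced by relative segment endpoint displacement:

```python
def endpoint_power(force, velocity, moment, angular_velocity):
    """Rate of energy transfer at one segment endpoint:
    translational term F.v plus rotational term M*w (planar case)."""
    translational = sum(f * v for f, v in zip(force, velocity))
    return translational + moment * angular_velocity

def displacement_power(force, v_anatomical, v_joint_center):
    """Power attributable to relative segment endpoint displacement:
    the same endpoint force paired with two velocity definitions."""
    return sum(f * (va - vj) for f, va, vj in zip(force, v_anatomical, v_joint_center))
```

With a 10 N horizontal force and a 0.2 m/s difference between the anatomical and joint-center velocity definitions, the displacement power is 2 W, which is the kind of term the derivation identifies as the power imbalance.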
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity at bifurcations and in small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. Robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time prohibitive given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Immunohistologic analysis of spontaneous recurrent laryngeal nerve reinnervation in a rat model.
Rosko, Andrew J; Kupfer, Robbi A; Oh, Sang S; Haring, Catherine T; Feldman, Eva L; Hogikyan, Norman D
2018-03-01
After recurrent laryngeal nerve (RLN) injury, spontaneous reinnervation of the larynx occurs with input from multiple sources. The purpose of this study was to determine the timing and efficiency of reinnervation across a resected RLN segment in a rat model of RLN injury. Animal study. Twelve male 60-day-old Sprague Dawley rats underwent resection of a 5-mm segment of the right RLN. Rats were sacrificed at 1, 2, 4, and 12 weeks after nerve injury to harvest the larynx and trachea for immunohistologic analysis. The distal RLN segment was stained with neurofilament, and axons were counted and compared to the nonoperated side. Thyroarytenoid (TA) muscles were stained with alpha-bungarotoxin, synaptophysin, and neurofilament to identify intact neuromuscular junctions (NMJs). The number of intact NMJs on the denervated side was compared to the nonoperated side. Nerve fibers regenerated across the resected RLN gap into the distal recurrent laryngeal nerve to innervate the TA muscle. The number of nerve fibers in the distal nerve segment increased over time and reached the normal number by 12 weeks postdenervation. Axons formed intact neuromuscular junctions in the TA, with 48.8% ± 16.7% of the normal number of intact NMJs at 4 weeks and 88.3% ± 30.1% of the normal number by 12 weeks. Following resection of an RLN segment in a rat model, nerve fibers spontaneously regenerate through the distal segment of the transected nerve and form intact NMJs to reinnervate the TA muscle. Laryngoscope, 128:E117-E122, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
1991-12-01
Report outline (recovered fragment): 2.6.1 Multi-Shape Detection; 2.6.2 Line Segment Extraction and Re-Combination; 2.6.3 Planimetric Feature Extraction; 2.6.4 Line Segment Extraction From Statistical Texture Analysis; 2.6.5 Edge Following as Graph Search.
Pouch, Alison M.; Tian, Sijie; Takabe, Manabu; Wang, Hongzhi; Yuan, Jiefu; Cheung, Albert T.; Jackson, Benjamin M.; Gorman, Joseph H.; Gorman, Robert C.; Yushkevich, Paul A.
2015-01-01
3D echocardiographic (3DE) imaging is a useful tool for assessing the complex geometry of the aortic valve apparatus. Segmentation of this structure in 3DE images is a challenging task that benefits from shape-guided deformable modeling methods, which enable inter-subject statistical shape comparison. Prior work demonstrates the efficacy of using continuous medial representation (cm-rep) as a shape descriptor for valve leaflets. However, its application to the entire aortic valve apparatus is limited since the structure has a branching medial geometry that cannot be explicitly parameterized in the original cm-rep framework. In this work, we show that the aortic valve apparatus can be accurately segmented using a new branching medial modeling paradigm. The segmentation method achieves a mean boundary displacement of 0.6 ± 0.1 mm (approximately one voxel) relative to manual segmentation on 11 3DE images of normal open aortic valves. This study demonstrates a promising approach for quantitative 3DE analysis of aortic valve morphology. PMID:26247062
Statistical shape modeling of human cochlea: alignment and principal component analysis
NASA Astrophysics Data System (ADS)
Poznyakovskiy, Anton A.; Zahnert, Thomas; Fischer, Björn; Lasurashvili, Nikoloz; Kalaidzidis, Yannis; Mürbe, Dirk
2013-02-01
The modeling of the cochlear labyrinth in living subjects is hampered by the insufficient resolution of available clinical imaging methods, which is usually no finer than 125 μm. This is too crude to record the position of the basilar membrane and, as a result, to distinguish even the scala tympani from the other scalae. This problem can be avoided by means of atlas-based segmentation. Specimens can endure higher radiation loads and consequently provide better-resolved images. The resulting surface can be used as the seed for atlas-based segmentation. To serve this purpose, we have developed a statistical shape model (SSM) of the human scala tympani based on segmentations obtained from 10 μCT image stacks. After segmentation, we aligned the resulting surfaces using Procrustes alignment. This algorithm was slightly modified to accommodate individual models whose nodes do not necessarily correspond to salient features and vary in number between models. We established correspondence by mutual proximity between nodes. Rather than using the standard Euclidean norm, we applied an alternative logarithmic norm to improve outlier treatment. The minimization was done using the BFGS method. We also split the surface nodes along an octree to reduce computation cost. Subsequently, we performed principal component analysis of the training set with the Jacobi eigenvalue algorithm. We expect the resulting method to help acquire not only a better understanding of interindividual variations in cochlear anatomy, but also a step towards individual models for pre-operative diagnostics prior to cochlear implant insertion.
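The Procrustes-alignment and PCA steps of such a statistical shape model are standard and can be sketched with NumPy. This is an illustrative sketch assuming fixed landmark correspondence; the paper's proximity-based correspondence, logarithmic norm, and octree splitting are omitted:

```python
import numpy as np

def procrustes_align(ref, shape):
    """Align `shape` (n x d landmarks) to `ref` by removing translation
    and scale, then solving for the optimal rotation via SVD."""
    ref_c = ref - ref.mean(axis=0)
    sh_c = shape - shape.mean(axis=0)
    ref_c = ref_c / np.linalg.norm(ref_c)
    sh_c = sh_c / np.linalg.norm(sh_c)
    u, _, vt = np.linalg.svd(sh_c.T @ ref_c)   # orthogonal Procrustes solution
    return sh_c @ (u @ vt)

def shape_pca(shapes):
    """PCA of an aligned training set (k x n x d): mean shape plus
    eigenvalues/eigenvectors of the landmark covariance, largest first."""
    flat = shapes.reshape(len(shapes), -1)
    mean = flat.mean(axis=0)
    cov = np.cov(flat, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return mean, evals[order], evecs[:, order]
```

Rotating, scaling, and translating a shape and then aligning it recovers the normalized reference exactly, which is the invariance the SSM relies on.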
Point clouds segmentation as base for as-built BIM creation
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2015-08-01
In this paper, a three-step segmentation approach is proposed to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved by considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. The results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
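The first step, floor detection from the point distribution along Z, amounts to peak-picking on a height histogram. A minimal sketch, where the bin size and peak threshold are assumed values rather than the paper's:

```python
from collections import Counter

def find_floor_levels(z_values, bin_size=0.1, min_fraction=0.1):
    """Histogram point heights; bins holding more than min_fraction of
    all points are candidate horizontal structures (floors/ceilings)."""
    counts = Counter(round(z / bin_size) for z in z_values)
    total = len(z_values)
    peaks = [b * bin_size for b, c in counts.items() if c / total >= min_fraction]
    return sorted(peaks)
```

Dense horizontal surfaces concentrate many points in a narrow Z band, so they stand out sharply against wall and clutter points spread across heights.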
Radio Frequency Ablation Registration, Segmentation, and Fusion Tool
McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.
2008-01-01
The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion analysis based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either the frame difference or the segmentation approach alone. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area containing a moving object rather than searching the whole frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
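The frame-difference component embedded in the scheme can be illustrated compactly. This is a toy sketch, not the authors' code; the intensity threshold and the bounding-box stand-in for the DMM search window are assumptions:

```python
def frame_difference(prev, curr, thresh=25):
    """Binary motion mask: pixels whose intensity changed by more
    than thresh between consecutive frames."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def motion_window(mask):
    """Bounding box of moving pixels -- a crude stand-in for a search
    window restricted to the high-motion region."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rs = [r for r, _ in coords]
    cs = [c for _, c in coords]
    return (min(rs), min(cs), max(rs), max(cs))
```

Restricting subsequent segmentation to the returned window is what keeps per-frame cost low compared with processing the full image.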
Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P
2017-10-01
Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population have not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developed participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture session. The inclusion of joint centers into the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches which included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches with mean ICC of 0.97. The pelvis should be scaled using pelvic width and height and the thigh and shank segment should be scaled using the proximal and distal joint centers.
NASA Astrophysics Data System (ADS)
Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.
2016-03-01
In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role in simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest Regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features, which are aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits from the advantages associated with snakes and from the ones associated with machine learning-based regression alike. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).
Tumor propagation model using generalized hidden Markov model
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dustin
2017-02-01
Tumor tracking and progression analysis using medical images is a crucial task for physicians to provide accurate and efficient treatment plans, and monitor treatment response. Tumor progression is tracked by manual measurement of tumor growth performed by radiologists. Several methods have been proposed to automate these measurements with segmentation, but many current algorithms are confounded by attached organs and vessels. To address this problem, we present a new generalized tumor propagation model considering time-series prior images and local anatomical features using a Hierarchical Hidden Markov model (HMM) for tumor tracking. First, we apply the multi-atlas segmentation technique to identify organs/sub-organs using pre-labeled atlases. Second, we apply a semi-automatic direct 3D segmentation method to label the initial boundary between the lesion and neighboring structures. Third, we detect vessels in the ROI surrounding the lesion. Finally, we apply the propagation model with the labeled organs and vessels to accurately segment and measure the target lesion. The algorithm has been designed in a general way to be applicable to various body parts and modalities. In this paper, we evaluate the proposed algorithm on lung and lung nodule segmentation and tracking. We report the algorithm's performance by comparing the longest diameter and nodule volumes using the FDA lung Phantom data and a clinical dataset.
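The HMM machinery underlying such a propagation model can be illustrated with the classic forward algorithm on a discrete HMM. This is a textbook sketch only; the paper's hierarchical, generalized variant conditioned on anatomical features is far richer:

```python
def hmm_forward(init, trans, emit, observations):
    """Forward algorithm: total probability of an observation sequence
    under a discrete HMM. init[s] is the prior over states, trans[p][s]
    the transition probability, emit[s][o] the emission probability."""
    n = len(init)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [init[s] * emit[s][observations[0]] for s in range(n)]
    for obs in observations[1:]:
        alpha = [emit[s][obs] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return sum(alpha)
```

In a propagation setting, the hidden states would encode tissue labels (lesion, organ, vessel) and the observations local image evidence, with the forward pass scoring candidate label sequences along the boundary.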
Cortical bone fracture analysis using XFEM - case study.
Idkaidek, Ashraf; Jasiuk, Iwona
2017-04-01
We aim to achieve an accurate simulation of human cortical bone fracture using the extended finite element method within the commercial finite element software Abaqus. A two-dimensional unit cell model of cortical bone is built based on a microscopy image of the mid-diaphysis of the tibia of a 70-year-old human male donor. Each phase of this model, the interstitial bone, a cement line, and an osteon, is considered linear elastic and isotropic, with material properties obtained by nanoindentation, taken from the literature. The effects of the fracture analysis method (cohesive segment approach versus linear elastic fracture mechanics approach), finite element type, and boundary conditions (traction, displacement, and mixed) on cortical bone crack initiation and propagation are studied. In this study, cohesive segment damage evolution with a traction-separation law based on energy and displacement is used. In addition, the effects of increment size and mesh density on analysis results are investigated. We find that both the cohesive segment and linear elastic fracture mechanics approaches within the extended finite element method can effectively simulate cortical bone fracture. Mesh density and simulation increment size can influence analysis results when employing either approach, and using a finer mesh and/or smaller increment size does not always provide more accurate results. Both approaches provide close but not identical results, and crack propagation speed is found to be slower when using the cohesive segment approach. Also, using reduced integration elements along with the cohesive segment approach decreases crack propagation speed compared with using full integration elements. Copyright © 2016 John Wiley & Sons, Ltd.
Automatic segmentation of the puborectalis muscle in 3D transperineal ultrasound.
van den Noort, Frieda; Grob, Anique T M; Slump, Cornelis H; van der Vaart, Carl H; van Stralen, Marijn
2017-10-11
The introduction of 3D analysis of the puborectalis muscle for diagnostic purposes into daily practice is hindered by the need for appropriate training of observers. Automatic 3D segmentation of the puborectalis muscle in 3D transperineal ultrasound may aid its adoption in clinical practice. A manual 3D segmentation protocol was developed to segment the puborectalis muscle. Data from 20 women in their first trimester of pregnancy were used to validate the reproducibility of this protocol. For automatic segmentation, active appearance models of the puborectalis muscle were developed. Those models were trained using manual segmentation data from 50 women. The performance of both manual and automatic segmentation was analyzed by measuring the overlap and distance between segmentations. Also, the intraclass correlation coefficients (ICC) and their 95% confidence intervals were determined for mean echogenicity and volume of the puborectalis muscle. The ICC values for mean echogenicity (0.968-0.991) and volume (0.626-0.910) are good to very good for both automatic and manual segmentation. The overlap and distance results for manual segmentation are as expected, showing a mismatch of only a few pixels (2-3) on average and a reasonable overlap. Based on overlap and distance, 5 mismatches in automatic segmentation were detected, resulting in an automatic segmentation success rate of 90%. In conclusion, this study presents reliable manual and automatic 3D segmentation of the puborectalis muscle. This will facilitate future investigation of the puborectalis muscle. It also allows for reliable measurement of clinically potentially valuable parameters such as mean echogenicity. This article is protected by copyright. All rights reserved.
An Approach for Reducing the Error Rate in Automated Lung Segmentation
Gill, Gurman; Beichel, Reinhard R.
2016-01-01
Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
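The Dice coefficient used throughout this evaluation is straightforward to compute. A minimal sketch over flat binary masks:

```python
def dice(a, b):
    """Dice overlap between two binary masks given as flat 0/1 lists:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))
```

A failure-rate criterion like the paper's then reduces to counting cases where this value falls below a chosen cutoff such as 0.97.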
Computer-aided pulmonary image analysis in small animal models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.
Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors' system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology and invokes a machine-learning-based abnormal imaging pattern detection system. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average Dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method using the publicly available EXACT'09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research on pulmonary diseases.
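The expected-lung-volume check in the first stage is a regression-plus-threshold test. An illustrative sketch, where the linear form, the tolerance, and all numbers are assumptions rather than the authors' fitted model:

```python
def fit_line(xs, ys):
    """Least-squares slope/intercept, e.g. expected lung volume as a
    linear function of approximated rib cage volume."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def severe_pathology(rib_volume, segmented_volume, slope, intercept, tol=0.2):
    """Flag a scan when the initial segmentation falls well short of
    the volume the regression predicts from the rib cage."""
    expected = slope * rib_volume + intercept
    return (expected - segmented_volume) / expected > tol
```

In the framework, such a flag is what triggers the machine-learning pattern detector for severely pathological lungs.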
Bashir, Usman; Azad, Gurdip; Siddique, Muhammad Musib; Dhillon, Saana; Patel, Nikheel; Bassett, Paul; Landau, David; Goh, Vicky; Cook, Gary
2017-12-01
Measures of tumour heterogeneity derived from 18-fluoro-2-deoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans are increasingly reported as potential biomarkers of non-small cell lung cancer (NSCLC) for classification and prognostication. Several segmentation algorithms have been used to delineate tumours, but their effects on the reproducibility and predictive and prognostic capability of derived parameters have not been evaluated. The purpose of our study was to retrospectively compare various segmentation algorithms in terms of inter-observer reproducibility and prognostic capability of texture parameters derived from NSCLC 18F-FDG PET/CT images. Fifty-three NSCLC patients (mean age 65.8 years; 31 males) underwent pre-chemoradiotherapy 18F-FDG PET/CT scans. Three readers segmented tumours using freehand (FH), 40% of maximum intensity threshold (40P), and fuzzy locally adaptive Bayesian (FLAB) algorithms. The intraclass correlation coefficient (ICC) was used to measure the inter-observer variability of the texture features derived by the three segmentation algorithms. Univariate Cox regression was used on 12 commonly reported texture features to predict overall survival (OS) for each segmentation algorithm. Model quality was compared across segmentation algorithms using the Akaike information criterion (AIC). 40P was the most reproducible algorithm (median ICC 0.9; interquartile range [IQR] 0.85-0.92) compared with FLAB (median ICC 0.83; IQR 0.77-0.86) and FH (median ICC 0.77; IQR 0.7-0.85). On univariate Cox regression analysis, 2 of the 12 variables derived with 40P, i.e., first-order entropy and grey-level co-occurrence matrix (GLCM) entropy, were significantly associated with OS; FH and FLAB each yielded 1, i.e., first-order entropy. For each tested variable, survival models for all three segmentation algorithms were of similar quality, exhibiting comparable AIC values with overlapping 95% CIs.
Compared with both FLAB and FH, segmentation with 40P yields superior inter-observer reproducibility of texture features. Survival models generated by all three segmentation algorithms are of at least equivalent utility. Our findings suggest that a segmentation algorithm using a 40% of maximum threshold is acceptable for texture analysis of 18F-FDG PET in NSCLC.
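Inter-observer reproducibility above is quantified with the intraclass correlation coefficient. A minimal NumPy sketch of the two-way random-effects, absolute-agreement, single-rater form ICC(2,1) (one plausible variant; the abstract does not specify which ICC was used):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: array of shape (n_targets, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-target MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater MS
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented texture values from three readers for five tumours
demo = np.array([[4.1, 4.3, 4.0],
                 [3.2, 3.1, 3.3],
                 [5.0, 5.2, 5.1],
                 [2.0, 2.2, 2.1],
                 [4.6, 4.5, 4.7]])
icc = icc_2_1(demo)  # high agreement → close to 1
```

Because this form penalizes systematic offsets between readers, it reflects absolute agreement rather than mere consistency.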
A new model for the determination of limb segment mass in children.
Kuemmerle-Deschner, J B; Hansmann, S; Rapp, H; Dannecker, G E
2007-04-01
The knowledge of limb segment masses is critical for the calculation of joint torques. Several methods for segment mass estimation have been described in the literature. They are either inaccurate or not applicable to the limb segments of children. Therefore, we developed a new cylinder brick model (CBM) to estimate segment mass in children. The aim of this study was to compare CBM and a model based on a polynomial regression equation (PRE) to volume measurement obtained by the water displacement method (WDM). We examined forearms, hands, lower legs, and feet of 121 children using CBM, PRE, and WDM. The differences between CBM and WDM or PRE and WDM were calculated and compared using a Bland-Altman plot of differences. Absolute limb segment mass measured by WDM ranged from 0.16+/-0.04 kg for hands in girls 5-6 years old, up to 2.72+/-1.03 kg for legs in girls 11-12 years old. The differences of normalised segment masses ranged from 0.0002+/-0.0021 to 0.0011+/-0.0036 for CBM-WDM and from 0.0023+/-0.0041 to 0.0127+/-0.036 for PRE-WDM (values are mean+/-2 S.D.). The CBM showed better agreement with WDM than PRE for all limb segments in girls and boys. CBM is accurate and superior to PRE for the estimation of individual limb segment mass of children. Therefore, CBM is a practical and useful tool for the analysis of kinetic parameters and the calculation of resulting forces to assess joint functionality in children.
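Agreement between CBM, PRE, and WDM above is assessed with a Bland-Altman plot of differences. A hedged sketch of the underlying computation, the bias and 95% limits of agreement (the segment-mass values below are invented for illustration):

```python
import numpy as np

def bland_altman_limits(x, y):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)                 # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented segment masses (kg) for six children, two measurement methods
cbm = np.array([0.45, 0.62, 1.10, 0.38, 0.95, 1.30])
wdm = np.array([0.44, 0.65, 1.05, 0.40, 0.93, 1.33])
bias, lo, hi = bland_altman_limits(cbm, wdm)
```

A method agrees well with the reference when the bias is near zero and the limits of agreement are narrow, which is the pattern reported for CBM versus WDM.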
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Frainier, Richard; Colombano, Silvano; Hazelton, Lyman; Szolovits, Peter
1993-01-01
This paper describes portions of a novel system called MARIKA (Model Analysis and Revision of Implicit Key Assumptions) to automatically revise a model of the normal human orientation system. The revision is based on analysis of discrepancies between experimental results and computer simulations. The discrepancies are calculated from qualitative analysis of quantitative simulations. The experimental and simulated time series are first discretized in time segments. Each segment is then approximated by linear combinations of simple shapes. The domain theory and knowledge are represented as a constraint network. Incompatibilities detected during constraint propagation within the network yield both parameter and structural model alterations. Interestingly, MARIKA diagnosed a data set from the Massachusetts Eye and Ear Infirmary Vestibular Laboratory as abnormal though the data was tagged as normal. Published results from other laboratories confirmed the finding. These encouraging results could lead to a useful clinical vestibular tool and to a scientific discovery system for space vestibular adaptation.
Novel methods for parameter-based analysis of myocardial tissue in MR images
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.
2007-03-01
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment model proposed by the American Heart Association (AHA). As this simplification entails a considerable loss of information, our purpose is to provide methods for a more accurate analysis of topological and functional tissue features. To achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion-corrected perfusion sequence, vector images containing the semi-quantitative parameters of the voxel enhancement curves are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For data exploration we propose several modes: inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, inspection of regions segmented in parameter space by user-defined threshold intervals, and topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
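The abstract derives vector images of semi-quantitative parameters from voxel enhancement curves. A sketch of typical such parameters, peak enhancement, time-to-peak, and maximum upslope (the paper's exact parameter set is not specified, so this is an assumption):

```python
import numpy as np

def perfusion_parameters(curve, dt=1.0, baseline_pts=3):
    """Typical semi-quantitative parameters of a voxel enhancement curve."""
    c = np.asarray(curve, float)
    enh = c - c[:baseline_pts].mean()   # subtract pre-contrast baseline
    peak = enh.max()                    # peak enhancement
    ttp = enh.argmax() * dt             # time-to-peak
    upslope = np.diff(enh).max() / dt   # steepest rise between samples
    return peak, ttp, upslope

# Toy enhancement curve sampled once per heartbeat
peak, ttp, upslope = perfusion_parameters([0, 0, 0, 2, 5, 9, 8, 7])
```

Stacking these per-voxel values yields the parameter vector images that the exploration modes operate on.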
Prostate segmentation in MR images using discriminant boundary features.
Yang, Meijuan; Li, Xuelong; Turkbey, Baris; Choyke, Peter L; Yan, Pingkun
2013-02-01
Segmentation of the prostate in magnetic resonance (MR) images is increasingly needed to assist diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale invariant feature transform (SIFT) has been employed to capture the information of the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not specified with the location of the point of interest. To address this, discriminant analysis from machine learning is introduced to measure the distinctiveness of the learned SIFT features for each landmark directly and to make the scale and variance adaptive to the locations. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out by incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images are conducted to verify the effectiveness of the proposed algorithms.
NASA Astrophysics Data System (ADS)
Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane
2018-05-01
Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation has become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used for objective comparison of results. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
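The HMRF formulation reduces segmentation to minimising an energy with a data term plus a pairwise smoothness term, which BFGS can handle once labels are relaxed to continuous values. A toy 1-D sketch (the class means, smoothness weight, and relaxation are invented for illustration; SciPy's generic BFGS, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D "image": two noisy intensity regions
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.2, 0.05, 30), rng.normal(0.8, 0.05, 30)])
mu = np.array([0.2, 0.8])   # assumed class means
beta = 2.0                  # assumed smoothness weight

def energy(x):
    # Data term: relaxed label x in [0, 1] interpolates the two class means
    data = ((y - (mu[0] + x * (mu[1] - mu[0]))) ** 2).sum()
    # MRF-style pairwise penalty on neighbouring labels
    smooth = beta * (np.diff(x) ** 2).sum()
    return data + smooth

res = minimize(energy, x0=np.full(y.size, 0.5), method='BFGS')
labels = (res.x > 0.5).astype(int)   # threshold the relaxed label field
```

The thresholded field recovers the two homogeneous regions; the real method works on 2-D/3-D neighbourhoods and a probabilistic HMRF energy.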
Segmentation and clustering as complementary sources of information
NASA Astrophysics Data System (ADS)
Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.
2007-03-01
This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It identifies coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although the segmentation is less efficient than clustering, it does provide other information because it incorporates textural similarity as well as homogeneity. In addition it can be useful in determining various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
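Constrained clustering via change-point detection can be sketched with dynamic programming over within-segment squared error plus a per-segment penalty, a simple stand-in for the minimum message length criterion used in the paper:

```python
import numpy as np

def segment(y, penalty):
    """Optimal change-point segmentation minimizing within-segment squared
    error plus a fixed per-segment penalty (a crude stand-in for MML)."""
    y = np.asarray(y, float)
    n = len(y)
    # Prefix sums give each candidate segment's cost in O(1)
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y * y)])
    def sse(i, j):   # squared error of y[i:j] around its mean
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m
    best = np.full(n + 1, np.inf); best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + sse(i, j) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i
    # Backtrack the change-points
    cps, j = [], n
    while j > 0:
        i = prev[j]
        if i > 0:
            cps.append(int(i))
        j = i
    return sorted(cps)

# Two constant regimes → one change-point at index 10
y = np.array([0.0] * 10 + [5.0] * 10)
change_points = segment(y, penalty=1.0)  # → [10]
```

Each resulting segment is a cluster of adjacent samples, exactly the spatially coherent clustering the paper contrasts with unconstrained clustering.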
Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura
2016-01-01
The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms, one using an active contour model and one using a classification technique, and serves as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results across several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments for each dataset.
Cunningham, Charles E.; Walker, John R.; Eastwood, John D.; Westra, Henny; Rimas, Heather; Chen, Yvonne; Marcus, Madalyn; Swinson, Richard P.; Bracken, Keyna
2013-01-01
Although most young adults with mood and anxiety disorders do not seek treatment, those who are better informed about mental health problems are more likely to use services. The authors used conjoint analysis to model strategies for providing information about anxiety and depression to young adults. Participants (N = 1,035) completed 17 choice tasks presenting combinations of 15 four-level attributes of a mental health information strategy. Latent class analysis yielded 3 segments. The virtual segment (28.7%) preferred working independently on the Internet to obtain information recommended by young adults who had experienced anxiety or depression. Self-assessment options and links to service providers were more important to this segment. Conventional participants (30.1%) preferred books or pamphlets recommended by a doctor, endorsed by mental health professionals, and used with a doctor's support. They would devote more time to information acquisition but were less likely to use Internet social networking options. Brief sources of information were more important to the low interest segment (41.2%). All segments preferred information about alternative ways to reduce anxiety or depression rather than psychological approaches or medication. Maximizing the use of information requires active and passive approaches delivered through old-media (e.g. books) and new-media (e.g., Internet) channels. PMID:24266450
Lee, Hyunkwang; Troschel, Fabian M; Tajmir, Shahein; Fuchs, Georg; Mario, Julia; Fintelmann, Florian J; Do, Synho
2017-08-01
Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an "eyeball test" to assess whether patients will tolerate major surgery or chemotherapy, "eyeballing" is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of the morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization of an ImageNet pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolutions in order to better understand the effects of the parameters on the model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D datasets.
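The experiment varies window level (WL), window width (WW), and bit resolution. A sketch of the standard CT windowing transform these parameters control (illustrative, not the authors' preprocessing code):

```python
import numpy as np

def apply_window(hu, level, width, bits=8):
    """Map CT Hounsfield units through a window (level/width) to an
    integer intensity range, e.g. 0-255 for 8-bit input to a network."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = (np.clip(hu, lo, hi) - lo) / (hi - lo)   # clip then normalize to [0, 1]
    return np.round(out * (2 ** bits - 1)).astype(int)

# A soft-tissue window (assumed WL=40, WW=400) applied to three HU values
out = apply_window(np.array([-1000.0, 40.0, 3000.0]), level=40, width=400)
```

Narrowing the window concentrates the available grey levels on the tissue of interest, which is why WL/WW choices can change segmentation performance.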
Samuels, David C.; Boys, Richard J.; Henderson, Daniel A.; Chinnery, Patrick F.
2003-01-01
We applied a hidden Markov model segmentation method to the human mitochondrial genome to identify patterns in the sequence, to compare these patterns to the gene structure of mtDNA and to see whether these patterns reveal additional characteristics important for our understanding of genome evolution, structure and function. Our analysis identified three segmentation categories based upon the sequence transition probabilities. Category 2 segments corresponded to the tRNA and rRNA genes, with a greater strand-symmetry in these segments. Category 1 and 3 segments covered the protein-coding genes and almost all of the non-coding D-loop. Compared to category 1, the mtDNA segments assigned to category 3 had much lower guanine abundance. A comparison to two independent databases of mitochondrial mutations and polymorphisms showed that the high substitution rate of guanine in human mtDNA is largest in the category 3 segments. Analysis of synonymous mutations showed the same pattern. This suggests that this heterogeneity in the mutation rate is partly independent of respiratory chain function and is a direct property of the genome sequence itself. This has important implications for our understanding of mtDNA evolution and its use as a ‘molecular clock’ to determine the rate of population and species divergence. PMID:14530452
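Hidden Markov model segmentation assigns each sequence position its most likely hidden category, typically via the Viterbi algorithm. A minimal log-space sketch with an invented two-state model (the paper's actual states and parameters differ; NumPy assumed):

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden state path for an observation sequence (log-space).
    start: (S,) initial probs; trans: (S,S); emit: (S, n_symbols)."""
    logp = np.log(start * emit[:, obs[0]])   # scores after first observation
    back = []
    for o in obs[1:]:
        # scores[i, j] = best path ending in i, then i->j, then emitting o from j
        scores = logp[:, None] + np.log(trans) + np.log(emit[:, o])[None, :]
        back.append(scores.argmax(axis=0))
        logp = scores.max(axis=0)
    # Backtrack from the best final state
    path = [int(logp.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Sticky two-state model with state-biased emissions
path = viterbi([0, 0, 0, 1, 1, 1],
               np.array([0.5, 0.5]),
               np.array([[0.9, 0.1], [0.1, 0.9]]),
               np.array([[0.9, 0.1], [0.1, 0.9]]))
```

Runs of the same decoded state form the segments that are then compared against the known mtDNA gene structure.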
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
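The Point Distribution Model used as the baseline above learns a mean shape and principal modes of variation by PCA over aligned landmarks. A hedged NumPy sketch of that baseline (synthetic shapes; not the spherical-wavelet representation itself):

```python
import numpy as np

def pdm(shapes, var_keep=0.95):
    """Point Distribution Model: mean shape plus principal modes of variation.
    shapes: (n_samples, 2*n_points) flattened, pre-aligned landmark sets."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # PCA via SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return mean, Vt[:k], var[:k]

def reconstruct(mean, modes, b):
    """Generate a shape from mode coefficients b."""
    return mean + b @ modes

# Three synthetic square shapes varying along a single landmark coordinate
base = np.array([0.0, 0, 1, 0, 1, 1, 0, 1])
mode_dir = np.zeros(8); mode_dir[4] = 1.0
shapes = np.stack([base + t * 0.1 * mode_dir for t in (-1.0, 0.0, 1.0)])
mean, modes, var = pdm(shapes)
```

Because this prior is global and single-scale, it tends to oversmooth fine local variations, which is the shortcoming the multiscale spherical-wavelet prior addresses.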
Size of the Dynamic Bead in Polymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agapov, Alexander L; Sokolov, Alexei P
2010-01-01
The presented analysis of neutron, mechanical, and MD simulation data available in the literature demonstrates that the dynamic bead size (the smallest subchain that still exhibits Rouse-like dynamics) in most polymers is significantly larger than the traditionally defined Kuhn segment. Moreover, our analysis emphasizes that even the static bead size (e.g., chain statistics) disagrees with the Kuhn segment length. We demonstrate that the deficiency of the Kuhn segment definition stems from the assumption of a chain being completely extended inside a single bead. The analysis suggests that representation of a real polymer chain by the bead-and-spring model with a single parameter C cannot be correct. One needs more parameters to correctly reflect the details of the chain structure in the bead-and-spring model.
Jouve, R; Puddu, P E; Langlet, F; Lanti, M; Guillen, J C; Rolland, P H; Serradimigni, A
1988-01-01
Multivariate analysis of survival using Cox's proportional hazards model demonstrates that several clinically measurable covariates are determinants of life-threatening arrhythmias following left circumflex coronary artery occlusion-reperfusion in 107 dogs. These are heart rate, ST segment elevation and mean aortic pressure immediately (3 min) following occlusion, and the presence of early (0-10 min) post-occlusion sustained ventricular tachycardia. The risk of occlusion-reperfusion ventricular fibrillation was determined according to Cox's solution based on ST segment elevation, thus enabling quantification of the role of cicletanine. Since cicletanine-treated dogs had reduced mean ST segment elevation at 3 min post-occlusion, a lower incidence of early post-occlusion (0-10 min) sustained ventricular tachycardia, and increased endogenous production of prostacyclin, and the latter was inversely correlated with the level of ST segment elevation, it is concluded that these favourable effects on the ischaemic myocardium contributed to the improved outcome in these experiments. These effects, obtained in spite of a hypotensive action in the experimental setting, might be regarded as desirable, and it is therefore suggested that they be further investigated by pharmacodynamic studies in human subjects.
Wörz, Stefan; Schenk, Jens-Peter; Alrajab, Abdulsattar; von Tengg-Kobligk, Hendrik; Rohr, Karl; Arnold, Raoul
2016-10-17
Coarctation of the aorta is one of the most common congenital heart diseases. Despite different treatment opportunities, the long-term outcome after surgical or interventional therapy is diverse. Serial morphologic follow-up of vessel growth is necessary, because vessel growth cannot be predicted from the primary morphology or the therapeutic option. For the analysis of the long-term outcome after therapy of congenital diseases such as aortic coarctation, accurate 3D geometric analysis of the aorta from follow-up 3D medical image data such as magnetic resonance angiography (MRA) is important. However, for an objective, fast, and accurate 3D geometric analysis, an automatic approach for 3D segmentation and quantification of the aorta from pediatric images is required. We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model that requires only relatively few model parameters. Moreover, we include a novel adaptive background-masking scheme used for least-squares model fitting, we use a spatial normalization scheme to align the segmentation results from follow-up examinations, and we determine relevant 3D geometric parameters of the aortic arch. We have evaluated our proposed approach using different 3D synthetic images. Moreover, we have successfully applied the approach to follow-up pediatric 3D MRA image data, we have normalized the 3D segmentation results of follow-up images of individual patients, and we have combined the results of all patients. We also present a quantitative evaluation of our approach for four follow-up 3D MRA images of a patient, which confirms that our approach yields accurate 3D segmentation results. An experimental comparison with two previous approaches demonstrates that our approach yields superior results.
From the results, we found that our approach is well suited for quantifying the 3D geometry of the aortic arch from follow-up pediatric 3D MRA image data. In future work, this will enable investigation of the long-term outcome of different surgical and interventional therapies for aortic coarctation.
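The approach fits a parametric cylinder model to the vessel by least squares. As a much-reduced analogue, the algebraic (Kasa) least-squares circle fit below recovers a vessel cross-section's center and radius from boundary points (an illustration, not the authors' extended 3-D cylinder model):

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit (Kasa method).
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Exact points on a circle centered at (2, -1) with radius 3
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
cx, cy, r = fit_circle(2 + 3 * np.cos(t), -1 + 3 * np.sin(t))
```

Fitting such a model slice by slice along a centerline, with suitable background handling, is the spirit of the cylinder-based vessel quantification described above.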
NASA Technical Reports Server (NTRS)
Egolf, T. A.; Landgrebe, A. J.
1982-01-01
A user's manual is provided which includes the technical approach for the Prescribed Wake Rotor Inflow and Flow Field Prediction Analysis. The analysis is used to provide the rotor wake induced velocities at the rotor blades for use in blade airloads and response analyses and to provide induced velocities at arbitrary field points such as at a tail surface. This analysis calculates the distribution of rotor wake induced velocities based on a prescribed wake model. Section operating conditions are prescribed from blade motion and controls determined by a separate blade response analysis. The analysis represents each blade by a segmented lifting line, and the rotor wake by discrete segmented trailing vortex filaments. Blade loading and circulation distributions are calculated based on blade element strip theory including the local induced velocity predicted by the numerical integration of the Biot-Savart Law applied to the vortex wake model.
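The induced velocity of each discrete trailing vortex filament follows from the Biot-Savart law. A sketch for a single straight segment in the standard Katz-Plotkin form (illustrative, not the NASA analysis code):

```python
import numpy as np

def vortex_segment_velocity(p, a, b, gamma):
    """Velocity induced at point p by a straight vortex filament segment
    from a to b with circulation gamma (Biot-Savart law)."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < 1e-12:      # point lies on the filament axis: no induced velocity
        return np.zeros(3)
    r0 = b - a
    k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return gamma / (4 * np.pi) * k / denom * cross

# Long segment along z approximates an infinite vortex line:
# at distance 1, the induced speed tends to gamma / (2*pi)
v = vortex_segment_velocity(np.array([1.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, -1000.0]),
                            np.array([0.0, 0.0, 1000.0]), 1.0)
```

Summing this contribution over every wake filament segment, as the analysis does by numerical integration, gives the induced velocity at a blade station or arbitrary field point.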
CFD Analysis of Coolant Flow in VVER-440 Fuel Assemblies with the Code ANSYS CFX 10.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Sandor; Legradi, Gabor; Aszodi, Attila
2006-07-01
From the aspect of planning the power upgrading of nuclear reactors, including the VVER-440 type, it is essential to understand the flow field in the fuel assembly. For this purpose we have developed models of the fuel assembly of the VVER-440 reactor using the ANSYS CFX 10.0 CFD code. At first, a 240 mm long part of a 60-degree segment of the fuel pin bundle was modelled. Implementing this model, a sensitivity study on the appropriate meshing was performed. Based on the development of the above described model, further models were developed: a 960 mm long part of a 60-degree segment and a full-length part (2420 mm) of the fuel pin bundle segment. The calculations were run using constant coolant properties and several turbulence models. The impacts of choosing different turbulence models were investigated. The results of the above-mentioned investigations are presented in this paper. (authors)
Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images.
Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Mulder, Harriët W; Ren, Ben; Kirişli, Hortense A; Metz, Coert; van Burken, Gerard; van Stralen, Marijn; Pluim, Josien P W; van der Steen, Antonius F W; van Walsum, Theo; Bosch, Johannes G
2015-06-01
Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE segmentation is still a challenging task due to the complex anatomy with multiple cavities, the limited TEE field of view, and typical ultrasound artifacts. We propose to segment all cavities within the TEE view with a multi-cavity active shape model (ASM) in conjunction with a tissue/blood classification based on a gamma mixture model (GMM). 3-D TEE image data of twenty patients were acquired with a Philips X7-2t matrix TEE probe. Tissue probability maps were estimated by a two-class (blood/tissue) GMM. A statistical shape model containing the left ventricle, right ventricle, left atrium, right atrium, and aorta was derived from computed tomography angiography (CTA) segmentations by principal component analysis. ASMs of the whole heart and individual cavities were generated and consecutively fitted to tissue probability maps. First, an average whole-heart model was aligned with the 3-D TEE based on three manually indicated anatomical landmarks. Second, pose and shape of the whole-heart ASM were fitted by a weighted update scheme excluding parts outside of the image sector. Third, pose and shape of ASM for individual heart cavities were initialized by the previous whole heart ASM and updated in a regularized manner to fit the tissue probability maps. The ASM segmentations were validated against manual outlines by two observers and CTA derived segmentations. Dice coefficients and point-to-surface distances were used to determine segmentation accuracy. ASM segmentations were successful in 19 of 20 cases. The median Dice coefficient for all successful segmentations versus the average observer ranged from 90% to 71% compared with an inter-observer range of 95% to 84%. 
The agreement against the CTA segmentations was slightly lower with a median Dice coefficient between 85% and 57%. In this work, we successfully showed the accuracy and robustness of the proposed multi-cavity segmentation scheme. This is a promising development for intraoperative procedure guidance, e.g., in cardiac electrophysiology.
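As a rough illustration of the tissue/blood classification step, the sketch below fits a two-class intensity mixture by EM and returns per-sample tissue probabilities. Note that the paper uses a gamma mixture model; a Gaussian mixture is substituted here for brevity, and the data are synthetic:

```python
import numpy as np

def fit_two_class_mixture(x, n_iter=50):
    """EM fit of a two-component 1-D Gaussian mixture (a stand-in for the
    paper's gamma mixture), returning per-sample tissue probabilities."""
    x = np.asarray(x, dtype=float)
    # crude initialisation: anchor the components at the intensity extremes
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.stack([pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: update weights, means and standard deviations
        nk = resp.sum(axis=1)
        pi = nk / nk.sum()
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
    return resp[1]  # probability of the brighter (tissue) class

# toy intensities: dark blood pool vs bright tissue
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(20, 3, 500), rng.normal(80, 5, 500)])
p_tissue = fit_two_class_mixture(x)
print(p_tissue[:500].mean().round(2), p_tissue[500:].mean().round(2))
```

The resulting probability map is what the active shape models are fitted to in the pipeline described above.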
Lee, Dong Yeon; Seo, Sang Gyo; Kim, Eo Jin; Kim, Sung Ju; Lee, Kyoung Min; Farber, Daniel C; Chung, Chin Youb; Choi, In Ho
2015-01-01
Radiographic examination is a widely used evaluation method in the orthopedic clinic. However, conventional radiography alone does not reflect the dynamic changes between foot and ankle segments during gait. Multiple 3-dimensional multisegment foot models (3D MFMs) have been introduced to evaluate intersegmental motion of the foot. In this study, we evaluated the correlation between static radiographic indices and intersegmental foot motion indices. One hundred twenty-five females were tested. Static radiographs of full-leg and anteroposterior (AP) and lateral foot views were performed. For hindfoot evaluation, we measured the AP tibiotalar angle (TiTA), talar tilt (TT), calcaneal pitch, lateral tibiocalcaneal angle, and lateral talocalcaneal angle. For the midfoot segment, naviculocuboid overlap and talonavicular coverage angle were calculated. AP and lateral talo-first metatarsal angles and metatarsal stacking angle (MSA) were measured to assess the forefoot. Hallux valgus angle (HVA) and hallux interphalangeal angle were measured. In gait analysis by 3D MFM, intersegmental angle (ISA) measurements of each segment (hallux, forefoot, hindfoot, arch) were recorded. ISAs at midstance phase were most highly correlated with radiography. Significant correlations were observed between ISA measurements using MFM and static radiographic measurements in the same segment. In the hindfoot, coronal plane ISA was correlated with AP TiTA (P < .001) and TT (P = .018). In the hallux, HVA was strongly correlated with transverse ISA of the hallux (P < .001). The segmental foot motion indices at midstance phase during gait measured by 3D MFM gait analysis were correlated with the conventional radiographic indices. The observed correlation between MFM measurements at midstance phase during gait and static radiographic measurements supports the fundamental basis for the use of MFM in analysis of dynamic motion of foot segments during gait. © The Author(s) 2014.
Large data series: Modeling the usual to identify the unusual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downing, D.J.; Fedorov, V.V.; Lawkins, W.F.
"Standard" approaches such as regression analysis, Fourier analysis, the Box-Jenkins procedure, etc., which handle a data series as a whole, are not useful for very large data sets for at least two reasons. First, even with the computer hardware available today, including parallel processors and storage devices, there are no effective means for manipulating and analyzing gigabyte, or larger, data files. Second, in general it cannot be assumed that a very large data set is "stable" by the usual measures, like homogeneity, stationarity, and ergodicity, that standard analysis techniques require. Both reasons dictate the necessity of using "local" data analysis methods, whereby the data is segmented and ordered, where order leads to a sense of "neighbor," and then analyzed segment by segment. The idea of local data analysis is central to the study reported here.
NASA Astrophysics Data System (ADS)
Habas, Piotr A.; Kim, Kio; Chandramohan, Dharshan; Rousseau, Francois; Glenn, Orit A.; Studholme, Colin
2009-02-01
Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in the form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.
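The DSC figures quoted above are twice the overlap between two masks divided by the sum of their sizes; a minimal sketch with a toy pair of masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.array([[1, 1, 0], [0, 1, 0]])    # automatic segmentation
manual = np.array([[1, 0, 0], [0, 1, 1]])  # reference manual outline
print(dice(auto, manual))  # 2*2/(3+3) = 0.666...
```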
Contextually guided very-high-resolution imagery classification with semantic segments
NASA Astrophysics Data System (ADS)
Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.
2017-10-01
Contextual information, revealing relationships and dependencies between image objects, is among the most important sources of information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts, and then to assign semantic labels according to the properties of image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are often partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) in order to represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN) and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (i.e., the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).
2016-06-01
characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. To ensure consistency... methodology. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Figure 27 segments the MBSE MEASA... rounding has the potential to increase the correlation between columns of the experimental design matrix. The design methodology presented in Vieira
Understanding the optics to aid microscopy image segmentation.
Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei
2010-01-01
Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to study a microscope's optical properties to model its image formation process first, using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to the specimen's optical path length without phase contrast artifacts such as halo and shade-off. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.
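The restoration step described above can be caricatured as regularized least squares followed by thresholding. The sketch below keeps only the smoothness term (the paper also uses a sparseness regularization) and applies it to a toy 1-D blur; the operators, noise level, and regularization weight are invented for illustration:

```python
import numpy as np

def restore(H, g, L, lam=0.1):
    """Solve min_f ||H f - g||^2 + lam ||L f||^2 via the normal equations
    (the paper's sparseness term is omitted in this sketch)."""
    A = H.T @ H + lam * (L.T @ L)
    return np.linalg.solve(A, H.T @ g)

# toy 1-D "specimen" blurred by a small tridiagonal convolution matrix
n = 50
f_true = np.zeros(n)
f_true[20:30] = 1.0
H = np.eye(n)
for k in (1, -1):
    H += 0.5 * np.eye(n, k=k)
H /= H.sum(axis=1, keepdims=True)       # row-normalised blur operator
L = np.eye(n) - np.eye(n, k=1)          # first-difference smoothness operator
rng = np.random.default_rng(1)
g = H @ f_true + rng.normal(0, 0.01, n) # observed (blurred + noisy) signal
f_hat = restore(H, g, L, lam=0.05)
seg = f_hat > 0.5                       # simple thresholding, as in the paper
print(int(seg.sum()))
```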
Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.
Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C
2017-07-01
To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6% respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. 
© 2017 American Association of Physicists in Medicine.
Results of Large Area Crop Inventory Experiment (LACIE) drought analysis (South Dakota drought 1976)
NASA Technical Reports Server (NTRS)
Thompson, D. R.
1976-01-01
LACIE, using techniques developed from the southern Great Plains drought analysis, indicated the potential for drought damage in South Dakota. This potential was monitored and, as it became apparent that a drought was developing, LACIE implemented some of the procedures used in the southern Great Plains drought. The technical approach used in South Dakota involved the normal use of LACIE sample segments (5 x 6 nm) every 18 days. Full frame color transparencies (100 x 100 nm) were used on 9 day intervals to identify the drought area and to track it over time. The green index number (GIN) developed using the Kauth transformation was computed for all South Dakota segments and selected North Dakota segments. A scheme for classifying segments as drought affected or not affected was devised and tested on all available 1976 South Dakota data. Yield model simulations were run for all CRDs (Crop Reporting Districts) in South Dakota.
Validation of a dynamic linked segment model to calculate joint moments in lifting.
de Looze, M P; Kingma, I; Bussmann, J B; Toussaint, H M
1992-08-01
A two-dimensional dynamic linked segment model was constructed and applied to a lifting activity. Reactive forces and moments were calculated by an instantaneous approach involving the application of Newtonian mechanics to individual adjacent rigid segments in succession. The analysis started once at the feet and once at a hands/load segment. The model was validated by comparing predicted external forces and moments at the feet or at a hands/load segment to actual values, which were simultaneously measured (ground reaction force at the feet) or assumed to be zero (external moments at feet and hands/load and external forces, besides gravitation, at hands/load). In addition, results of both procedures, in terms of joint moments, including the moment at the intervertebral disc between the fifth lumbar and first sacral vertebra (L5-S1), were compared. A correlation of r = 0.88 between calculated and measured vertical ground reaction forces was found. The calculated external forces and moments at the hands showed only minor deviations from the expected zero level. The moments at L5-S1, calculated starting from feet compared to starting from hands/load, yielded a coefficient of correlation of r = 0.99. However, moments calculated from hands/load were 3.6% (averaged values) and 10.9% (peak values) higher. This difference is assumed to be due mainly to erroneous estimations of the positions of centres of gravity and joint rotation centres. The estimation of the location of the L5-S1 rotation axis can affect the results significantly. Despite the numerous studies estimating the load on the low back during lifting on the basis of linked segment models, only a few attempts to validate these models have been made. This study is concerned with the validity of the presented linked segment model. The results support the model's validity. Effects of several sources of error threatening the validity are discussed. Copyright © 1992. Published by Elsevier Ltd.
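The instantaneous Newton-Euler computation that such models apply segment by segment can be sketched for a single 2-D segment as follows; the mass, inertia, geometry, and the static check below are invented for illustration, not taken from the paper:

```python
import numpy as np

def joint_load_2d(m, I, a_com, alpha, r_prox, r_dist, F_dist, M_dist, g=9.81):
    """Newton-Euler load at the proximal joint of one rigid 2-D segment.
    r_prox/r_dist: vectors from the centre of mass to the two joints;
    F_dist/M_dist: force and moment the distal neighbour exerts on the segment."""
    grav = np.array([0.0, -m * g])
    # force balance: F_prox + F_dist + grav = m * a_com
    F_prox = m * np.asarray(a_com) - np.asarray(F_dist) - grav
    cross = lambda r, f: r[0] * f[1] - r[1] * f[0]   # scalar 2-D cross product
    # moment balance about the centre of mass
    M_prox = I * alpha - M_dist - cross(r_prox, F_prox) - cross(r_dist, F_dist)
    return F_prox, M_prox

# static check: a 2 kg segment held horizontally with no distal load;
# the proximal joint must carry the full weight and a 2.943 N m moment
F, M = joint_load_2d(m=2.0, I=0.05, a_com=[0, 0], alpha=0.0,
                     r_prox=[-0.15, 0.0], r_dist=[0.15, 0.0],
                     F_dist=[0, 0], M_dist=0.0)
print(F, M)
```

Chaining this call from the feet (or from the hands/load) across adjacent segments reproduces the two computation directions compared in the study.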
Analysis of swallowing sounds using hidden Markov models.
Aboofazeli, Mohammad; Moussavi, Zahra
2008-04-01
In recent years, acoustical analysis of the swallowing mechanism has received considerable attention due to its diagnostic potential. This paper presents a hidden Markov model (HMM) based method for swallowing sound segmentation and classification. Swallowing sound signals of 15 healthy and 11 dysphagic subjects were studied. The signals were divided into sequences of 25 ms segments, each of which was represented by seven features. The sequences of features were modeled by HMMs. Trained HMMs were used for segmentation of the swallowing sounds into three distinct phases, i.e., initial quiet period, initial discrete sounds (IDS) and bolus transit sounds (BTS). Among the seven features, accuracy of segmentation by the HMM based on the multi-scale product of wavelet coefficients was higher than that of the other HMMs, and the linear prediction coefficient (LPC)-based HMM showed the weakest performance. In addition, HMMs were used for classification of the swallowing sounds of healthy subjects and dysphagic patients. Classification accuracy of different HMM configurations was investigated. When we increased the number of states of the HMMs from 4 to 8, the classification error gradually decreased. In most cases, classification error for N=9 was higher than that for N=8. Among the seven features used, root mean square (RMS) and waveform fractal dimension (WFD) showed the best performance in the HMM-based classification of swallowing sounds. When the sequences of the features of the IDS segment were modeled separately, the accuracy reached up to 85.5%. As a second stage classification, a screening algorithm was used which correctly classified all the subjects but one healthy subject when RMS was used as the characteristic feature of the swallowing sounds and the number of states was set to N=8.
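Segmenting a sound sequence into the three phases can be illustrated with log-domain Viterbi decoding over a left-to-right HMM; the transition matrix, emission parameters, and feature values below are invented for the sketch, not taken from the paper:

```python
import numpy as np

def viterbi(obs, log_A, log_pi, means, var):
    """Most likely state path for 1-D Gaussian emissions (log-domain Viterbi)."""
    T, K = len(obs), len(means)
    log_b = -0.5 * ((obs[:, None] - means) ** 2 / var + np.log(2 * np.pi * var))
    delta = log_pi + log_b[0]
    psi = np.zeros((T, K), int)
    for t in range(1, T):
        scores = delta[:, None] + log_A     # scores[i, j]: come from i, land in j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_b[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):           # backtrack through the pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# left-to-right model: quiet period -> IDS -> BTS (all parameters illustrative)
log_A = np.log(np.array([[0.9, 0.1, 0.0],
                         [0.0, 0.9, 0.1],
                         [0.0, 0.0, 1.0]]) + 1e-12)
log_pi = np.log(np.array([1.0, 1e-12, 1e-12]))
obs = np.array([0.1, 0.2, 0.1, 2.0, 2.2, 1.9, 5.1, 4.8, 5.0])
path = viterbi(obs, log_A, log_pi, means=np.array([0.0, 2.0, 5.0]), var=0.25)
print(path)
```

The zero entries in the transition matrix are what make the decoded path monotone through the quiet/IDS/BTS phases.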
Identification of Alfalfa Leaf Diseases Using Image Recognition Technology
Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang
2016-01-01
Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767
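One building block of the segmentation comparison above, K_median clustering, can be sketched as an L1 analogue of k-means (assign each pixel to the nearest centre, then update each centre to the median of its members). The 1-D grey levels below are synthetic stand-ins for lesion vs healthy-tissue pixels:

```python
import numpy as np

def k_medians(x, k=2, n_iter=20, seed=0):
    """Plain 1-D K_median clustering: nearest-centre assignment, then each
    centre moves to the median of its members."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, size=k, replace=False).astype(float)
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - centres).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = np.median(x[labels == j])
    return labels, np.sort(centres)

# toy grey levels: bright healthy leaf tissue vs darker lesion pixels
x = np.concatenate([np.full(60, 200.0), np.full(40, 60.0)]) \
    + np.random.default_rng(1).normal(0, 5, 100)
labels, centres = k_medians(x, k=2)
print(centres)
```

In the paper this clustering is combined with linear discriminant analysis; the sketch shows only the clustering half.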
Brain segmentation and forebrain development in amniotes.
Puelles, L
2001-08-01
This essay contains a general introduction to the segmental paradigm postulated for interpreting morphological, cellular, and molecular data on the developing forebrain of vertebrates. The introduction examines the nature of the problem, indicating the role of topological analysis in conjunction with analysis of various developmental cell processes in the developing brain. Another section explains how morphological analysis in essence depends on assumptions (paradigms), which should be reasonable and well founded in other research, but must remain tentative until time reveals their necessary status as facts for evolving theories (or leads to their substitution by alternative assumptions). The chosen paradigm affects many aspects of the analysis, including the sectioning planes one wants to use and the meaning of what one sees in brain sections. Dorsoventral patterning is presented as the foundation for defining what is longitudinal, whereas less well-understood anteroposterior patterning results from transversal regionalization. The concept of neural segmentation is covered, first historically, and then step by step, explaining the prosomeric model in basic detail, stopping at the diencephalon, the extratelencephalic secondary prosencephalon, and the telencephalon. A new pallial model for telencephalic development and evolution is presented as well, updating the proposed homologies between the sauropsidian and mammalian telencephalon.
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge on the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of Ar/Aa from the stochastic models tended to be smaller than BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than for Nayak's theory. The results with the watershed segmentation were similar to those with the 8-point analysis. The impact of the Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
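The 8-point analysis mentioned above can be sketched directly: a grid point counts as a summit when it is higher than all eight of its neighbours. The height map below is a toy example:

```python
import numpy as np

def summits_8point(z):
    """8-point analysis: a grid point is a summit if it is higher than all
    eight of its neighbours (border points are excluded)."""
    c = z[1:-1, 1:-1]
    higher = np.ones_like(c, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                # shifted window holding the (di, dj) neighbour of every point
                higher &= c > z[1 + di:z.shape[0] - 1 + di,
                                1 + dj:z.shape[1] - 1 + dj]
    mask = np.zeros_like(z, dtype=bool)
    mask[1:-1, 1:-1] = higher
    return mask

z = np.array([[0, 0, 0, 0, 0],
              [0, 3, 0, 0, 0],
              [0, 0, 0, 5, 0],
              [0, 0, 0, 0, 0]], float)
print(np.argwhere(summits_8point(z)))
```

The summit heights and curvatures collected this way are the inputs the statistical contact models consume.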
Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie
2017-08-01
Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon by proposing a novel central pooling layer retaining much information at the voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling strategy to facilitate the model training, where training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset including 893 nodules and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average dice scores of 82.15% and 80.02% for the two datasets respectively. Moreover, we compared our results with the inter-radiologists consistency on the LIDC dataset, showing a difference in average dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.
Barba-J, Leiner; Escalante-Ramírez, Boris; Vallejo Venegas, Enrique; Arámbula Cosío, Fernando
2018-05-01
Analysis of cardiac images is a fundamental task to diagnose heart problems. Left ventricle (LV) is one of the most important heart structures used for cardiac evaluation. In this work, we propose a novel 3D hierarchical multiscale segmentation method based on a local active contour (AC) model and the Hermite transform (HT) for LV analysis in cardiac magnetic resonance (MR) and computed tomography (CT) volumes in short axis view. Features such as directional edges, texture, and intensities are analyzed using the multiscale HT space. A local AC model is configured using the HT coefficients and geometrical constraints. The endocardial and epicardial boundaries are used for evaluation. Segmentation of the endocardium is controlled using elliptical shape constraints. The final endocardial shape is used to define the geometrical constraints for segmentation of the epicardium. We follow the assumption that epicardial and endocardial shapes are similar in volumes with short axis view. An initialization scheme based on a fuzzy C-means algorithm and mathematical morphology was designed. The algorithm performance was evaluated using cardiac MR and CT volumes in short axis view demonstrating the feasibility of the proposed method.
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, which are usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity value) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
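The multiplicative part of the correction can be sketched by estimating the shading field with a crude low-pass filter and dividing it out. The separable box filter and the synthetic intensity ramp below are illustrative stand-ins for the paper's intra-class-variance minimization:

```python
import numpy as np

def box_blur(img, w):
    """Separable box filter used here as a crude low-pass shading estimator."""
    k = np.ones(w) / w
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, sm)

def correct_shading(img, w=15):
    """Divide out a smooth multiplicative shading field (additive noise is
    assumed already removed, as in the paper's denoising step)."""
    shading = box_blur(img, w)
    return img / np.maximum(shading, 1e-6)

# toy image: a flat scene multiplied by a linear shading ramp
scene = np.full((64, 64), 100.0)
ramp = np.linspace(0.5, 1.5, 64)[None, :] * np.ones((64, 1))
observed = scene * ramp
corrected = correct_shading(observed)
inner = (slice(8, -8), slice(8, -8))   # ignore filter border effects
print(corrected[inner].std() / corrected[inner].mean())
```

After division, the within-class intensity variation of the flat scene collapses, which is exactly the property the paper's iterative scheme exploits.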
Dynamic response of composite beams with induced-strain actuators
NASA Astrophysics Data System (ADS)
Chandra, Ramesh
1994-05-01
This paper presents an analytical-experimental study on the dynamic response of open-section composite beams with actuation by piezoelectric devices. The analysis includes the essential features of open-section composite beam modeling, such as constrained warping and transverse shear deformation. A general plate segment of the beam with and without piezoelectric ply is modeled using laminated plate theory, and the forces and displacement relations of this plate segment are then reduced to the force and displacement of the one-dimensional beam. The dynamic response of bending-torsion coupled composite beams excited by piezoelectric devices is predicted. In order to validate the analysis, Kevlar-epoxy and graphite-epoxy beams with surface mounted piezoceramic actuators are tested for their dynamic response. The response was measured using an accelerometer. Good correlation between analysis and experiment is achieved.
SEQassembly: A Practical Tools Program for Coding Sequences Splicing
NASA Astrophysics Data System (ADS)
Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming
A CDS (coding sequence) is a portion of an mRNA sequence, composed of a number of exon sequence segments. The construction of the CDS sequence is important for profound genetic analysis such as genotyping. A program in the MATLAB environment is presented, which can process batches of sample sequences into coding segments under the guidance of reference exon models, and splice the coding segments from the same sample source into a CDS according to the exon order in the queue file. This program is useful in transcriptional polymorphism detection and gene function study.
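The splicing step itself can be illustrated in a few lines (in Python rather than the paper's MATLAB environment); the sample name, exon indices, and sequences below are made up for the example:

```python
# Minimal illustration of the splicing step: exon segments from one sample
# are ordered by their exon index (as listed in a queue file) and
# concatenated into a CDS. All names and sequences are invented.
segments = {
    "sample1": {2: "TTGGC", 1: "ATGAA", 3: "TGA"},   # exon_index -> sequence
}

def splice_cds(exons):
    """Concatenate exon segments in ascending exon order."""
    return "".join(seq for _, seq in sorted(exons.items()))

cds = splice_cds(segments["sample1"])
print(cds)  # ATGAATTGGCTGA
```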
Open-source software platform for medical image segmentation applications
NASA Astrophysics Data System (ADS)
Namías, R.; D'Amato, J. P.; del Fresno, M.
2017-11-01
Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphic user interface (GUI). We present the object-oriented design and the general architecture which consist of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework for segmenting different real-case medical image scenarios on public available datasets including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.
Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models.
Martindale, Christine F; Hoenig, Florian; Strohrmann, Christina; Eskofier, Bjoern M
2017-10-13
Cyclic signals are an intrinsic part of daily life, such as human motion and heart activity. Their detailed analysis is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost: annotations are available only under limited laboratory conditions, or the data must be segmented manually under less restricted conditions. This paper presents a smart annotation method that reduces this labeling cost for sensor-based data and is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning on sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually annotated reference. The resulting model was also used to simultaneously segment and classify continuous, 'in the wild' data, demonstrating the applicability of using an hHMM, trained on limited data sections, to label a complete dataset. This technique achieved results comparable to its fully supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics, such as those employed to analyze the progress of movement disorders or running technique.
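The decoding step at the heart of any HMM-based segmenter is the Viterbi algorithm. A minimal sketch follows for a flat HMM (the paper's model is hierarchical, stacking such layers); all states and probabilities below are invented for illustration, not taken from the gait model:

```python
def viterbi(obs, states, start, trans, emit):
    """Most likely hidden-state sequence for a discrete-observation HMM,
    computed by dynamic programming with backpointers."""
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for s at time t
            prev = max(states, key=lambda p: V[t - 1][p] * trans[p][s])
            V[t][s] = V[t - 1][prev] * trans[prev][s] * emit[s][obs[t]]
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# toy two-state model: state "A" tends to emit "x", state "B" emits "y"
states = ["A", "B"]
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.3, "B": 0.7}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
path = viterbi(["x", "x", "y", "y"], states, start, trans, emit)
```

In a segmentation context, the decoded state sequence directly yields the cycle-phase labels per sample.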
An application of a vulnerability index to oil spill modeling in the Gulf of Mexico
LaBelle, R.P.; Rainey, Gail; Lanfear, K.J.
1982-01-01
An analysis was made of the relative impact to the shoreline of the Gulf of Mexico from proposed Federal Outer Continental Shelf oil and gas leasing activity. An oil spill trajectory model was coupled with a land segment vulnerability characterization to predict the risks to the shoreline. Such a technique allows spatial and temporal variability in oil spill sensitivity to be represented and combined with the likelihood of oil spill contact to specific coastal segments in the study area. Predicted relative impact was greatest along the coastlines of Louisiana, Mississippi, and Alabama. Useful information is provided for environmental impact analysis, as well as oil spill response planning.
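The coupling of spill-contact likelihood with segment vulnerability amounts to a per-segment weighted risk score. A minimal sketch, with all index values invented for illustration (not from the study):

```python
def segment_risk(contact_prob, vulnerability):
    """Relative impact score for each coastal land segment: likelihood of
    oil-spill contact weighted by the segment's vulnerability index.
    (Both inputs here are illustrative, not the study's actual values.)"""
    return {seg: contact_prob[seg] * vulnerability[seg] for seg in contact_prob}

# hypothetical contact probabilities and vulnerability indices per segment
contact = {"LA": 0.30, "MS": 0.22, "AL": 0.18, "TX": 0.05}
vuln    = {"LA": 0.9,  "MS": 0.8,  "AL": 0.8,  "TX": 0.4}
risk = segment_risk(contact, vuln)
highest = max(risk, key=risk.get)
```

Varying the trajectory-model season or launch point changes `contact`, which is how spatial and temporal variability enters the combined score.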
Toffanin, T; Nifosì, F; Follador, H; Passamani, A; Zonta, F; Ferri, G; Scanarini, M; Amistà, P; Pigato, G; Scaroni, C; Mantero, F; Carollo, C; Perini, G I
2011-01-01
Several preclinical studies have demonstrated neuronal effects of glucocorticoids on the hippocampus (HC), a limbic structure with anterior-posterior anatomical and functional segmentation. We propose a volumetric magnetic resonance imaging analysis of the hippocampus head (HH), body (HB) and tail (HT) using Cushing's disease (CD) as a model, to investigate whether there is a differential sensitivity to glucocorticoid neuronal damage in these segments. We found a significant difference in the HH bilaterally 12 months after trans-sphenoidal surgical selective resection of the adrenocorticotropic hormone (ACTH)-secreting pituitary micro-adenomas. This pre-post surgery difference could contribute to a better understanding of the pathophysiology of CD as an in vivo model for stress-related hypercortisolemic neuropsychiatric disorders. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
Chuang, Bo-I; Kuo, Li-Chieh; Yang, Tai-Hua; Su, Fong-Chin; Jou, I-Ming; Lin, Wei-Jr; Sun, Yung-Nien
2017-01-01
Trigger finger has become a prevalent disease that greatly affects occupational activity and daily life. Ultrasound imaging is commonly used for the clinical diagnosis of trigger finger severity. Due to image property variations, traditional methods cannot effectively segment the finger joint's tendon structure. In this study, an adaptive texture-based active shape model method is used for segmenting the tendon and synovial sheath. Adapted weights are applied in the segmentation process to adjust the contribution of energy terms depending on image characteristics at different positions. The pathology is then determined according to the wavelet and co-occurrence texture features of the segmented tendon area. In the experiments, the segmentation results have fewer errors, with respect to the ground truth, than contours drawn by regular users. The mean values of the absolute segmentation difference of the tendon and synovial sheath are 3.14 and 4.54 pixels, respectively. The average accuracy of pathological determination is 87.14%. The segmentation results are acceptable for both clear- and fuzzy-boundary cases across the 74 images, and according to the expert clinicians' opinions the symptom classifications of the 42 cases also provide a good reference for diagnosis. PMID:29077737
Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie
2017-03-01
Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all the cerebrovascular patterns, including arteries and capillaries, some filter-based methods are used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired by a light-sheet microscope. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of size 32x32 pixels from each acquired brain vessel image as the training set fed into the CNN for classification. This network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system was used for training the model. The experimental results demonstrated that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.
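The construction of the per-pixel training set, sliding a square window and pairing each patch with its center-pixel label, can be sketched in pure Python on a toy-sized image (the paper uses 32x32 patches on microscopy data):

```python
def extract_patches(image, patch_size):
    """Slide a square window over a 2D image (list of lists) and return
    (patch, center_pixel) pairs, as used to train a per-pixel classifier.
    Border pixels without a full surrounding patch are skipped for simplicity."""
    half = patch_size // 2
    h, w = len(image), len(image[0])
    pairs = []
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = [row[c - half:c + half] for row in image[r - half:r + half]]
            pairs.append((patch, image[r][c]))
    return pairs

# toy 6x6 "image" with synthetic intensities
img = [[(r * 6 + c) % 7 for c in range(6)] for r in range(6)]
pairs = extract_patches(img, 4)
```

In the real pipeline each patch would be fed to the CNN and the center label replaced by a vessel/background annotation.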
Kinematic foot types in youth with equinovarus secondary to hemiplegia.
Krzak, Joseph J; Corcos, Daniel M; Damiano, Diane L; Graf, Adam; Hedeker, Donald; Smith, Peter A; Harris, Gerald F
2015-02-01
Elevated kinematic variability of the foot and ankle segments exists during gait among individuals with equinovarus secondary to hemiplegic cerebral palsy (CP). Clinicians have previously addressed such variability by developing classification schemes to identify subgroups of individuals based on their kinematics. To identify kinematic subgroups among youth with equinovarus secondary to CP using 3-dimensional multi-segment foot and ankle kinematics during locomotion as inputs for principal component analysis (PCA), and K-means cluster analysis. In a single assessment session, multi-segment foot and ankle kinematics using the Milwaukee Foot Model (MFM) were collected in 24 children/adolescents with equinovarus and 20 typically developing children/adolescents. PCA was used as a data reduction technique on 40 variables. K-means cluster analysis was performed on the first six principal components (PCs), which accounted for 92% of the variance of the dataset. The PCs described the location and plane of involvement in the foot and ankle. Five distinct kinematic subgroups were identified using K-means clustering. Participants with equinovarus presented with variable involvement, ranging from primary hindfoot or forefoot deviations to deformity that included both segments in multiple planes. This study provides further evidence of the variability in foot characteristics associated with equinovarus secondary to hemiplegic CP. These findings would not have been detected using a single-segment foot model. The identification of multiple kinematic subgroups with unique foot and ankle characteristics has the potential to improve treatment, since similar patients within a subgroup are likely to benefit from the same intervention(s). Copyright © 2014 Elsevier B.V. All rights reserved.
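The clustering stage can be sketched as plain K-means, here applied to toy 2D points standing in for precomputed PC scores (the study used six PCs per participant; initialization and data below are illustrative):

```python
def kmeans(points, k, iters=100):
    """Plain K-means on a list of feature vectors (e.g. principal-component
    scores per participant). Initial centroids are the first k points, so
    the toy run below is deterministic."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            assign[i] = d.index(min(d))
        # update step: each centroid moves to the mean of its members
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

# toy 2D "PC score" data with two obvious groups
pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
labels, cents = kmeans(pts, 2)
```

In practice k would be chosen by inspecting cluster validity over several candidate values, as done when the five subgroups were identified.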
Automated analysis of brain activity for seizure detection in zebrafish models of epilepsy.
Hunyadi, Borbála; Siekierska, Aleksandra; Sourbron, Jo; Copmans, Daniëlle; de Witte, Peter A M
2017-08-01
Epilepsy is a chronic neurological condition, with over 30% of cases unresponsive to treatment. Zebrafish larvae show great potential to serve as an animal model of epilepsy in drug discovery. Thanks to their high fecundity and relatively low cost, they are amenable to high-throughput screening. However, the assessment of seizure occurrences in zebrafish larvae remains a bottleneck, as visual analysis is subjective and time-consuming. For the first time, we present an automated algorithm to detect epileptic discharges in single-channel local field potential (LFP) recordings in zebrafish. First, candidate seizure segments are selected based on their energy and length. Afterwards, discriminative features are extracted from each segment. Using a labeled dataset, a support vector machine (SVM) classifier is trained to learn an optimal feature mapping. Finally, this SVM classifier is used to detect seizure segments in new signals. We tested the proposed algorithm both in a chemically-induced seizure model and a genetic epilepsy model. In both cases, the algorithm delivered similar results to visual analysis and found a significant difference in number of seizures between the epileptic and control groups. Direct comparison with multichannel techniques or methods developed for different animal models is not feasible. Nevertheless, a literature review shows that our algorithm outperforms state-of-the-art techniques in terms of accuracy, precision and specificity, while maintaining a reasonable sensitivity. Our seizure detection system is a generic, time-saving and objective method to analyze zebrafish LFP, which can replace visual analysis and facilitate true high-throughput studies. Copyright © 2017 Elsevier B.V. All rights reserved.
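The first stage, selecting candidate segments by energy and length, can be sketched as a threshold on squared amplitude with a minimum-run-length filter; the signal and thresholds below are illustrative, not the paper's values:

```python
def candidate_segments(signal, energy_thresh, min_len):
    """Select candidate seizure segments from a single-channel signal:
    contiguous runs of samples whose squared amplitude (energy) exceeds a
    threshold, kept only if at least min_len samples long."""
    segments, start = [], None
    for i, x in enumerate(signal):
        if x * x >= energy_thresh:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(signal) - start >= min_len:
        segments.append((start, len(signal)))
    return segments

# toy LFP trace: two sustained high-energy bursts and one isolated spike
sig = [0.1, 0.2, 2.0, 2.5, 1.9, 0.1, 3.0, 0.2, 2.2, 2.4, 2.1, 2.3, 0.0]
cands = candidate_segments(sig, 1.0, 3)
```

Each selected `(start, end)` window would then be passed to feature extraction and the SVM classifier.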
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
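The Cantor pairing function mentioned above is a standard construction that maps a pair of non-negative integers (for instance a pixel's row and column) to a unique single integer, which makes a compact node key when simplifying the graph. A minimal sketch:

```python
import math

def cantor_pair(a, b):
    """Cantor pairing function: bijection from pairs of non-negative
    integers to a single non-negative integer."""
    return (a + b) * (a + b + 1) // 2 + b

def cantor_unpair(z):
    """Inverse of the Cantor pairing function, recovered in closed form."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # diagonal index
    t = w * (w + 1) // 2                   # first value on that diagonal
    b = z - t
    return w - b, b

key = cantor_pair(3, 5)
```

Because the mapping is a bijection, no two pixel coordinates collide, and the original coordinates remain recoverable from the key.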
Atlas-Guided Segmentation of Vervet Monkey Brain MRI
Fedorov, Andriy; Li, Xiaoxing; Pohl, Kilian M; Bouix, Sylvain; Styner, Martin; Addicott, Merideth; Wyatt, Chris; Daunais, James B; Wells, William M; Kikinis, Ron
2011-01-01
The vervet monkey is an important nonhuman primate model that allows the study of isolated environmental factors in a controlled environment. Analysis of monkey MRI often suffers from lower quality images compared with human MRI because clinical equipment is typically used to image the smaller monkey brain and higher spatial resolution is required. This, together with the anatomical differences of the monkey brains, complicates the use of neuroimage analysis pipelines tuned for human MRI analysis. In this paper we developed an open source image analysis framework based on the tools available within the 3D Slicer software to support a biological study that investigates the effect of chronic ethanol exposure on brain morphometry in a longitudinally followed population of male vervets. We first developed a computerized atlas of vervet monkey brain MRI, which was used to encode the typical appearance of the individual brain structures in MRI and their spatial distribution. The atlas was then used as a spatial prior during automatic segmentation to process two longitudinal scans per subject. Our evaluation confirms the consistency and reliability of the automatic segmentation. The comparison of atlas construction strategies reveals that the use of a population-specific atlas leads to improved accuracy of the segmentation for subcortical brain structures. The contribution of this work is twofold. First, we describe an image processing workflow specifically tuned towards the analysis of vervet MRI that consists solely of the open source software tools. Second, we develop a digital atlas of vervet monkey brain MRIs to enable similar studies that rely on the vervet model. PMID:22253661
A two-stage method for microcalcification cluster segmentation in mammography by deformable models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.
Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) utilizing tenfold cross-validation methodology. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreements (median and [25%, 75%] quartile range) were substantial with respect to the distance metrics HDIST_cluster (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST_cluster (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), while moderate with respect to AOM_cluster (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method outperformed (0.80 ± 0.04) statistically significantly (Mann-Whitney U-test, p < 0.05) the B-spline active rays segmentation method (0.69 ± 0.04), suggesting the significance of the proposed semiautomated method. Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters offered by deformable models, which could be utilized in MC cluster quantitative image analysis.
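The two agreement measures used above, the Hausdorff distance between contours and the area overlap measure between masks, are standard and can be sketched directly; the toy inputs below are illustrative:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two contours, each a list of
    (x, y) points: the largest distance from any point on one contour to
    its nearest point on the other."""
    def directed(P, Q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in Q) for px, py in P)
    return max(directed(A, B), directed(B, A))

def area_overlap(mask_a, mask_b):
    """Area overlap measure (intersection over union) for two binary
    masks given as sets of pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0

d = hausdorff([(0, 0), (1, 0)], [(0, 1), (1, 1)])
aom = area_overlap({(0, 0), (0, 1), (1, 1)}, {(0, 1), (1, 1), (1, 0)})
```

Both metrics are computed per cluster in the study and then summarized by medians and quartile ranges.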
Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl
2016-08-01
The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.
Approach for scene reconstruction from the analysis of a triplet of still images
NASA Astrophysics Data System (ADS)
Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle
1997-03-01
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a big challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection step segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
Quantitative analysis of multiple sclerosis: a feasibility study
NASA Astrophysics Data System (ADS)
Li, Lihong; Li, Xiang; Wei, Xinzhou; Sturm, Deborah; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Multiple Sclerosis (MS) is an inflammatory and demyelinating disorder of the central nervous system with a presumed immune-mediated etiology. For treatment of MS, the measurements of white matter (WM), gray matter (GM), and cerebral spinal fluid (CSF) are often used in conjunction with clinical evaluation to provide a more objective measure of MS burden. In this paper, we apply a new unifying automatic mixture-based algorithm for segmentation of brain tissues to quantitatively analyze MS. The method takes into account the following effects that commonly appear in MR imaging: 1) The MR data is modeled as a stochastic process with an inherent inhomogeneity effect of smoothly varying intensity; 2) A new partial volume (PV) model is built in establishing the maximum a posterior (MAP) segmentation scheme; 3) Noise artifacts are minimized by a priori Markov random field (MRF) penalty indicating neighborhood correlation from tissue mixture. The volumes of brain tissues (WM, GM) and CSF are extracted from the mixture-based segmentation. Experimental results of feasibility studies on quantitative analysis of MS are presented.
Implementation of an interactive liver surgery planning system
NASA Astrophysics Data System (ADS)
Wang, Luyao; Liu, Jingjing; Yuan, Rong; Gu, Shuguo; Yu, Long; Li, Zhitao; Li, Yanzhao; Li, Zhen; Xie, Qingguo; Hu, Daoyu
2011-03-01
Liver tumor, one of the most widespread diseases, has a very high mortality in China. To improve the success rates of liver surgeries and the quality of life of such patients, we implement an interactive liver surgery planning system based on contrast-enhanced liver CT images. The system consists of five modules: pre-processing, segmentation, modeling, quantitative analysis and surgery simulation. The Graph Cuts method is utilized to automatically segment the liver, based on the anatomical prior knowledge that the liver is the biggest organ and has an almost homogeneous gray value. The system supports users in building patient-specific liver segment and sub-segment models using interactive portal vein branch labeling, and in performing anatomical resection simulation. It also provides several tools to simulate atypical resection, including a resection plane, sphere and curved surface. To match actual surgical resections well and simulate the process flexibly, we extend our work to develop a virtual scalpel model and simulate the scalpel movement in the hepatic tissue using multi-plane continuous resection. In addition, the quantitative analysis module makes it possible to assess the risk of a liver surgery. The preliminary results show that the system has the potential to offer an accurate 3D delineation of the liver anatomy, as well as the tumors' location in relation to vessels, and to facilitate liver resection surgeries. Furthermore, we are testing the system in a full-scale clinical trial.
Oregon Cascades Play Fairway Analysis: Raster Datasets and Models
Adam Brandt
2015-11-15
This submission includes maps of the spatial distribution of basaltic and felsic rocks in the Oregon Cascades. It also includes a final Play Fairway Analysis (PFA) model, with the heat and permeability composite risk segments (CRS) supplied separately. Metadata for each raster dataset can be found within the zip files and in the TIF images.
Timp, Sheila; Karssemeijer, Nico
2004-05-01
Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique, based on dynamic programming, to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee that the resulting contours are closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69; for the other two methods it was 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system, two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristic analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region-based evaluation the area Az under the receiver operating characteristic curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between the other methods were not significant.
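The dynamic-programming idea, finding a minimum-cost boundary path through a cost image, can be sketched as follows. This is a generic sketch of the technique, not the authors' exact formulation; the cost values are illustrative:

```python
def dp_boundary(cost, max_jump=1):
    """Minimum-cost left-to-right path through a 2D cost image: one row
    index per column, with row changes limited to max_jump for smoothness.
    Returns the list of row indices, one per column."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * cols for _ in range(rows)]   # accumulated cost
    back = [[0] * cols for _ in range(rows)]    # backpointers
    for r in range(rows):
        acc[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for pr in range(max(0, r - max_jump), min(rows, r + max_jump + 1)):
                cand = acc[pr][c - 1] + cost[r][c]
                if cand < acc[r][c]:
                    acc[r][c] = cand
                    back[r][c] = pr
    # backtrack from the cheapest end point
    r = min(range(rows), key=lambda q: acc[q][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

# toy cost image: low costs mark the boundary to follow
cost = [[9, 9, 1, 9],
        [1, 9, 9, 1],
        [9, 2, 9, 9]]
path = dp_boundary(cost)
```

For a mass contour, the cost image is typically derived from edge strength around the lesion in polar coordinates, and closing the contour requires the extra step the authors propose.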
A statistical shape model of the human second cervical vertebra.
Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon
2015-07-01
Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which includes the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open source software for image analysis and scientific visualization), with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
Lee, Noah; Laine, Andrew F; Smith, R Theodore
2007-01-01
Fundus auto-fluorescence (FAF) images with hypo-fluorescence indicate geographic atrophy (GA) of the retinal pigment epithelium (RPE) in age-related macular degeneration (AMD). Manual quantification of GA is time consuming and prone to inter- and intra-observer variability. Automatic quantification is important for determining disease progression and facilitating clinical diagnosis of AMD. In this paper we describe a hybrid segmentation method for GA quantification that identifies hypo-fluorescent GA regions among other interfering retinal vessel structures. First, we employ background illumination correction exploiting a non-linear adaptive smoothing operator. Then, we use the level set framework to perform segmentation of hypo-fluorescent areas. Finally, we present an energy function combining morphological scale-space analysis with a geometric model-based approach to perform segmentation refinement of false-positive hypo-fluorescent areas due to interfering retinal structures. The clinically apparent areas of hypo-fluorescence were drawn by an expert grader and compared on a pixel-by-pixel basis to our segmentation results. The mean sensitivity and specificity from the ROC analysis were 0.89 and 0.98, respectively.
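The pixel-by-pixel comparison against the expert grading reduces to counting true/false positives and negatives over aligned masks; a minimal sketch with toy masks:

```python
def pixel_sens_spec(pred, truth):
    """Pixel-by-pixel sensitivity and specificity of a segmentation
    against an expert grading, both given as flat lists of 0/1 labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    sens = tp / (tp + fn) if tp + fn else 1.0
    spec = tn / (tn + fp) if tn + fp else 1.0
    return sens, spec

# toy flattened masks: 1 = hypo-fluorescent GA pixel, 0 = background
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = pixel_sens_spec(pred, truth)
```

Sweeping the segmentation threshold and recording (sensitivity, 1 - specificity) pairs traces out the ROC curve summarized in the abstract.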
Stress Analysis of Bolted, Segmented Cylindrical Shells Exhibiting Flange Mating-Surface Waviness
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.
2009-01-01
Bolted, segmented cylindrical shells are a common structural component in many engineering systems, especially aerospace launch vehicles. Segmented shells are often needed due to limitations of manufacturing capabilities or transportation issues related to very long, large-diameter cylindrical shells. These cylindrical shells typically have a flange or ring welded to opposite ends so that shell segments can be mated together and bolted to form a larger structural system. As the diameter of these shells increases, maintaining strict fabrication tolerances for the flanges to be flat and parallel on a welded structure is an extreme challenge. Local fit-up stresses develop in the structure due to flange mating-surface mismatch (flange waviness). These local stresses need to be considered when predicting a critical initial flaw size. Flange waviness is one contributor to the fit-up stress state. The present paper describes the modeling and analysis effort to simulate fit-up stresses due to flange waviness in a typical bolted, segmented cylindrical shell. Results from parametric studies are presented for various flange mating-surface waviness distributions and amplitudes.
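One simple way to parameterize a waviness distribution for such a study is an idealized sinusoid around the flange circumference; the amplitude, wave count and bolt count below are illustrative assumptions, not values from the paper:

```python
import math

def waviness_gap(amplitude, n_waves, n_bolts):
    """Gap between two mating flange surfaces at each bolt location for an
    idealized sinusoidal waviness distribution w(theta) = A*sin(n*theta).
    All parameters are illustrative, not from the reported analyses."""
    gaps = []
    for k in range(n_bolts):
        theta = 2.0 * math.pi * k / n_bolts
        gaps.append(amplitude * math.sin(n_waves * theta))
    return gaps

# hypothetical flange: 0.010-unit amplitude, 2 waves, 8 bolts
gaps = waviness_gap(amplitude=0.010, n_waves=2, n_bolts=8)
peak = max(abs(g) for g in gaps)
```

In a fit-up analysis the per-bolt gaps would become enforced displacements in the shell finite-element model, from which the local stresses follow.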
Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.
Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M
2018-06-01
This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.
Blood vessel segmentation algorithms - Review of methods, datasets and evaluation metrics.
Moccia, Sara; De Momi, Elena; El Hadji, Sara; Mattos, Leonardo S
2018-05-01
Blood vessel segmentation is a topic of high interest in medical image analysis, since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice, and an appropriate choice of segmentation algorithm is mandatory to deal with the characteristics of the adopted imaging technique (e.g. resolution, noise and vessel contrast). This paper aims at reviewing the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we investigated in depth the most novel blood vessel segmentation approaches, including machine learning, deformable model, and tracking-based approaches. This paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables are presented reporting the imaging technique used, the anatomical region, and the performance measures employed. Benefits and disadvantages of each method are highlighted. Despite the constant progress and efforts in the field, several issues still need to be overcome. A relevant limitation is the segmentation of pathological vessels. Unfortunately, no consistent research effort has yet been devoted to this issue. Research is needed because some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which instead require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent an important obstacle to high-quality enhancement.
This is particularly true for optical imaging, where image quality is usually lower in terms of noise and contrast with respect to magnetic resonance and computed tomography angiography. No single segmentation approach is suitable for all anatomical regions or imaging modalities; thus, the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable method can be chosen for a specific task. Copyright © 2018 Elsevier B.V. All rights reserved.
Correction tool for Active Shape Model based lumbar muscle segmentation.
Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio
2015-08-01
In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. These tools must therefore provide fast corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.
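The Dice coefficient behind the reported 0.92±0.03 is the standard overlap measure between two binary masks; a minimal sketch (names are illustrative):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # two empty masks are conventionally treated as a perfect match
    return 2.0 * np.sum(a & b) / denom if denom else 1.0
```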
Interactive lesion segmentation with shape priors from offline and online learning.
Shepherd, Tony; Prince, Simon J D; Alexander, Daniel C
2012-09-01
In medical image segmentation, tumors and other lesions demand the highest levels of accuracy but still call for the highest levels of manual delineation. One factor holding back automatic segmentation is the exemption of pathological regions from shape modelling techniques that rely on high-level shape information not offered by lesions. This paper introduces two new statistical shape models (SSMs) that combine radial shape parameterization with machine learning techniques from the field of nonlinear time series analysis. We then develop two dynamic contour models (DCMs) using the new SSMs as shape priors for tumor and lesion segmentation. From training data, the SSMs learn the lower level shape information of boundary fluctuations, which we prove to be nevertheless highly discriminant. One of the new DCMs also uses online learning to refine the shape prior for the lesion of interest based on user interactions. Classification experiments reveal superior sensitivity and specificity of the new shape priors over those previously used to constrain DCMs. User trials with the new interactive algorithms show that the shape priors are directly responsible for improvements in accuracy and reductions in user demand.
Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila
2018-03-27
Purpose To analyze how automatic segmentation translates, in accuracy and precision, to morphology and relaxometry compared with manual segmentation, and whether it increases the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as by the automatic segmentations' ability to quantify, in a longitudinally repeatable way, relaxometry and morphology. Results The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments, and reaching 0.809 and 0.753 for the lateral and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterization and values that can be used in the monitoring and diagnosis of OA. © RSNA, 2018 Online supplemental material is available for this article.
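The reported agreement values (e.g. 0.8233 for T1ρ) are correlations between manual and automatic quantifications; computing a Pearson correlation of this kind is a one-liner with NumPy (the function name and arrays are illustrative):

```python
import numpy as np

def pearson_r(manual, automatic):
    """Pearson correlation between manual and automatic measurements."""
    return np.corrcoef(np.asarray(manual, float),
                       np.asarray(automatic, float))[0, 1]
```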
Duross, Christopher; Personius, Stephen; Olig, Susan S; Crone, Anthony J.; Hylland, Michael D.; Lund, William R; Schwartz, David P.
2017-01-01
The Wasatch fault zone (WFZ)—Utah’s longest and most active normal fault—forms a prominent eastern boundary to the Basin and Range Province in northern Utah. To support a Wasatch Front regional earthquake forecast, we synthesized paleoseismic data to define the timing and displacements of late Holocene surface-faulting earthquakes on the central five segments of the WFZ. Our analysis yields revised histories of large (M ~7) surface-faulting earthquakes on the segments, as well as estimates of earthquake recurrence and vertical slip rate. We constrain the timing of four to six earthquakes on each of the central segments, which together yield a history of at least 24 surface-faulting earthquakes since ~6 ka. Using earthquake data for each segment, inter-event recurrence intervals range from about 0.6 to 2.5 kyr, with a mean of 1.2 kyr. Mean recurrence, based on closed seismic intervals, is ~1.1–1.3 kyr per segment, which, when combined with mean vertical displacements per segment of 1.7–2.6 m, yields mean vertical slip rates of 1.3–2.0 mm/yr per segment. These data refine the late Holocene behavior of the central WFZ; however, a significant source of uncertainty is whether the structural complexities that define the segments of the WFZ act as hard barriers to ruptures propagating along the fault. Thus, we evaluate fault rupture models including both single-segment and multi-segment ruptures, and define 3–17-km-wide spatial uncertainties in the segment boundaries. These alternative rupture models and segment-boundary zones honor the WFZ paleoseismic data, take into account the spatial and temporal limitations of paleoseismic data, and allow for complex ruptures such as partial-segment and spillover ruptures. Our data and analyses improve our understanding of the complexities of normal-faulting earthquake behavior and provide geological inputs for regional earthquake-probability and seismic hazard assessments.
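The slip-rate arithmetic in the abstract is a simple unit conversion: metres of displacement per thousand years of recurrence is numerically identical to mm/yr. A sketch using the reported per-segment means (the values come from the abstract; the function is illustrative):

```python
def slip_rate_mm_per_yr(displacement_m, recurrence_kyr):
    # (d m * 1000 mm/m) / (t kyr * 1000 yr/kyr) = d / t,
    # so the ratio in m/kyr is numerically equal to mm/yr
    return displacement_m / recurrence_kyr

# bracketing the reported per-segment means (1.7-2.6 m, ~1.1-1.3 kyr)
low = slip_rate_mm_per_yr(1.7, 1.3)    # ~1.3 mm/yr
high = slip_rate_mm_per_yr(2.6, 1.3)   # 2.0 mm/yr
```

These bracket the 1.3 to 2.0 mm/yr range quoted in the text.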
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by drawing only a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework uses object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework uses shape change statistics generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure minimizes an energy function that consists of two terms: an external contour-match term and an internal shape-change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (a Dice segmentation accuracy increase of 10%) with very sparse contours (only 10%), which is promising for greatly decreasing the work expected from the user.
Van den Herrewegen, Inge; Cuppens, Kris; Broeckx, Mario; Barisch-Fritz, Bettina; Vander Sloten, Jos; Leardini, Alberto; Peeraer, Louis
2014-08-22
Multi-segmental foot kinematics have been analyzed by means of optical marker-sets or inertial sensors, but never by markerless dynamic 3D scanning (D3DScanning). The use of D3DScans implies a radically different approach to the construction of the multi-segment foot model: the foot anatomy is identified via the surface shape instead of distinct landmark points. We propose a 4-segment foot model consisting of the shank (Sha), calcaneus (Cal), metatarsus (Met) and hallux (Hal). These segments are manually selected on a static scan. To track the segments in the dynamic scan, the segments of the static scan are matched to each frame of the dynamic scan using the iterative closest point (ICP) fitting algorithm. Joint rotations are calculated between Sha-Cal, Cal-Met, and Met-Hal. Due to the lower quality of scans at heel strike and toe off, the first and last 10% of the stance phase are excluded. The application of the method to 5 healthy subjects, 6 trials each, shows good repeatability (intra-subject standard deviations between 1° and 2.5°) for the Sha-Cal and Cal-Met joints, and inferior results for the Met-Hal joint (>3°). The repeatability appears to be subject-dependent. For validation, a qualitative comparison was made with joint kinematics from a corresponding established marker-based multi-segment foot model, which showed very consistent patterns of rotation. The ease of subject preparation, together with the effective and easy-to-interpret visual output, makes the present technique very attractive for functional analysis of the foot, enhancing usability in clinical practice. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
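The ICP step that matches the static-scan segments onto each dynamic frame can be sketched with a brute-force nearest-neighbour search plus the SVD-based (Kabsch) rigid fit. This is a generic point-to-point ICP, not the authors' implementation:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with R @ src[i] + t ~= dst[i]."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Align point cloud src onto dst; returns the moved copy of src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force closest point in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = rigid_fit(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Production systems replace the O(N²) matching with a k-d tree and add outlier rejection, but the alternation of "match closest points, then solve the rigid fit" is the core of the method.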
Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.
Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin
2016-01-21
An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and each contribution to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ± 40°). This finding offers plenty of opportunities for various measurement requirements in addition to achieving an optimized structure in practical design.
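Sensitivity (pF/°) and non-linearity (% of full scale) of such a sensor follow from a straight-line fit of capacitance against tilt angle. A sketch with synthetic data (the 0.129 pF/° slope is taken from the abstract; the offset and sample points are made up):

```python
import numpy as np

def sensitivity_and_nonlinearity(angle_deg, cap_pF):
    """Least-squares sensitivity (pF/deg) and non-linearity in % of full scale."""
    angle = np.asarray(angle_deg, float)
    cap = np.asarray(cap_pF, float)
    slope, intercept = np.polyfit(angle, cap, 1)
    residual = cap - (slope * angle + intercept)
    full_scale = cap.max() - cap.min()
    return slope, 100.0 * np.abs(residual).max() / full_scale

angles = np.linspace(-40, 40, 17)      # the ±40° full-scale range
caps = 0.129 * angles + 4.7            # hypothetical ideal linear response
sens, nonlin = sensitivity_and_nonlinearity(angles, caps)
```

With measured data, `nonlin` would be compared against the <0.4% FS figure reported for the optimized electrode-gap position.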
Aging and perceived event structure as a function of modality
Magliano, Joseph; Kopp, Kristopher; McNerney, M. Windy; Radvansky, Gabriel A.; Zacks, Jeffrey M.
2012-01-01
The majority of research on situation model processing in older adults has focused on narrative texts. Much of this research has shown that many important aspects of constructing a situation model for a text are preserved and may even improve with age. However, narratives need not be text-based, and little is known as to whether these findings generalize to visually based narratives. The present study assessed the impact of story modality on event segmentation, which is a basic component of event comprehension. Older and younger adults viewed picture stories or read text versions of them and segmented them into events. There was comparable alignment between the segmentation judgments and a theoretically guided analysis of shifts in situational features across modalities for both populations. These results suggest that situation models provide older adults with a stable basis for event comprehension across different modalities of experience. PMID:22182344
Tierney, Áine P; Callanan, Anthony; McGloughlin, Timothy M
2012-02-01
To investigate the use of regional variations in the mechanical properties of abdominal aortic aneurysms (AAA) in finite element (FE) modeling of AAA rupture risk, which has heretofore assumed homogeneous mechanical tissue properties. Electrocardiogram-gated computed tomography scans from 3 male patients with known infrarenal AAA were used to characterize the behavior of the aneurysm in 4 different segments (posterior, anterior, and left and right lateral) at maximum diameter and above the infrarenal aorta. The elasticity of the aneurysm (circumferential cyclic strain, compliance, and the Hudetz incremental modulus) was calculated for each segment and the aneurysm as a whole. The FE analysis inclusive of prestress (pre-existing tensile stress) produced a detailed stress pattern on each of the aneurysm models under pressure loading. The 4 largest areas of stress in each region were considered in conjunction with the local regional properties of the segment to define a specific regional prestress rupture index (RPRI). In terms of elasticity, there were average reductions of 68% in circumferential cyclic strain and 63% in compliance, with a >5-fold increase in incremental modulus, between the healthy and the aneurysmal aorta for each patient. There were also regional variations in all elastic properties in each individual patient. The average difference in total stress inclusive of prestress was 59%, 67%, and 15%, respectively, for the 3 patients. Comparing the strain from FE models with the CT scans revealed an average difference in strain of 1.55% for the segmented models and 3.61% for the homogeneous models, which suggests that the segmented models more accurately reflect in vivo behavior. RPRI values were calculated for each segment for all patients. A greater understanding of the local material properties and their use in FE models is essential for greater accuracy in rupture prediction. 
Quantifying the regional behavior will yield insight into the changes in patient-specific aneurysms and increase understanding about the progression of aneurysmal disease.
Bondy, Matthew; Altenhof, William; Chen, Xilin; Snowdon, Anne; Vrkljan, Brenda
2014-01-01
A finite element/multi-body model of a newborn infant has been developed by researchers at the University of Windsor. The geometry of this model is derived from a Nita newborn hospital training mannequin. It consists of 17 parts: eight upper and lower limb segments, the torso, head, and a seven-segment neck with seven translational and eight rotational joints. Anthropometry is consistent with hospital growth charts, measurements requested from health professionals and data from the open literature. The biomechanical properties of the model (i.e. joint stiffnesses) are implementations of data identified in the open literature. The model has been validated with respect to studies of the biomechanics of shaken baby syndrome, infant falls and the Q0 anthropomorphic testing device. A significant conclusion of this study is that the kinetics of the Q0 neck is not biofidelic. This model is currently used in an analysis of airway patency for infants in modern automotive child restraints.
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. 
These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.
Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman
2018-05-16
Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of these models, including between trials and between assessors, is mixed, and there are no between-model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models: duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hind foot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within- and between-therapist variability. Standard deviations were used to evaluate marker placement variability. Locally weighted regression smoothing with alpha-adjusted serial t-test analysis was used to assess kinematic similarities. All five models had similar variability; however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. P-value curves over the gait cycle were used to assess kinematic similarities; the duPont and Oxford models had the most similar kinematics. All models demonstrated similar marker placement variability. Lower variability was noted in the sagittal and coronal planes compared with rotation in the transverse plane, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.
Pondering the procephalon: the segmental origin of the labrum.
Haas, M S; Brown, S J; Beeman, R W
2001-02-01
With accumulating evidence for the appendicular nature of the labrum, the question of its actual segmental origin remains. Two existing insect head segmentation models, the linear and S-models, are reviewed, and a new model introduced. The L-/Bent-Y model proposes that the labrum is a fusion of the appendage endites of the intercalary segment and that the stomodeum is tightly integrated into this segment. This model appears to explain a wider variety of insect head segmentation phenomena. Embryological, histological, neurological and molecular evidence supporting the new model is reviewed.
Modeling 4D Pathological Changes by Leveraging Normative Models
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2016-01-01
With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets, to account for inherent correlations of subject-specific repeated scans, but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel joint segmentation-registration framework to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject’s image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semisupervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations.
However, the proposed methodology is generic in regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi
2000-06-01
Aeromechanical stability plays a critical role in helicopter design and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in air resonance analysis under hover condition. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface bonded SCLs for improved damping characteristics. Parameters such as stacking sequence of the composite laminates and placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that optimum blade design yields significant increase in rotor lead-lag regressive modal damping compared to the initial system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M; Woo, B; Kim, J
Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow-cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively; with the grow-cut method they ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficients of variation for especially important features, previously reported as predictive of patient survival, were: 3.4% with the deformable model and 7.4% with the grow-cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow-cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow-cut method for edge sharpness of tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
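The coefficient of variation used to compare the two tools is the sample standard deviation over the mean, expressed in percent; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) of repeated measurements of one imaging feature."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()
```

A lower CV across observers, as reported for the deformable-model tool, indicates more reproducible feature extraction.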
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including the Localized Region-based Active Contour Model (LRACM). There are many popular LRACMs, but each has strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content, and its application to brain tumor segmentation, is presented. Thereby, a framework is proposed to select one of three LRACMs: Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V), and Localized Active Contour Model with Background Intensity Compensation (LACM-BIC). Twelve visual features are extracted to select the method best suited to a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM for a specific image. Consequently, the selection framework achieves better accuracy than the three LRACMs used separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
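The supervised selection step, mapping an image's feature vector to one of the three LRACM, can be illustrated with a nearest-centroid classifier. The paper does not specify its classifier or features here, so everything below is a hypothetical stand-in:

```python
import numpy as np

METHODS = ("LGDF", "localized C-V", "LACM-BIC")

def fit_centroids(features, labels):
    """Mean feature vector per method class, from labelled training images."""
    X = np.asarray(features, float)
    y = np.asarray(labels)
    return np.stack([X[y == k].mean(axis=0) for k in range(len(METHODS))])

def select_method(centroids, feature_vec):
    """Pick the LRACM whose training centroid is closest to the image's features."""
    d = np.linalg.norm(centroids - np.asarray(feature_vec, float), axis=1)
    return METHODS[int(d.argmin())]
```

In practice each training image would carry the 12 visual features mentioned in the abstract and a label naming whichever of the three models segmented it best.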
Stolworthy, Dean K; Zirbel, Shannon A; Howell, Larry L; Samuels, Marina; Bowden, Anton E
2014-05-01
The soft tissues of the spine exhibit sensitivity to strain rate and temperature, yet current knowledge of spine biomechanics is derived from cadaveric testing conducted at room temperature at very slow, quasi-static rates. The primary objective of this study was to characterize the change in segmental flexibility of cadaveric lumbar spine segments with respect to multiple loading rates within the range of physiologic motion, using specimens at body or room temperature. The secondary objective was to develop a predictive model of spine flexibility across the voluntary range of loading rates. This in vitro study examines rate- and temperature-dependent viscoelasticity of the human lumbar cadaveric spine. Repeated flexibility tests were performed on 21 lumbar functional spinal units (FSUs) in flexion-extension using 11 distinct voluntary loading rates at body or room temperature. Furthermore, six lumbar FSUs were loaded in axial rotation, flexion-extension, and lateral bending at both body and room temperature via a stepwise, quasi-static loading protocol. All FSUs were also loaded in a continuous-speed control test at a rate of 1 deg/sec. The viscoelastic torque-rotation response for each spinal segment was recorded. A predictive model was developed to accurately estimate spine segment flexibility at any voluntary loading rate based on measured flexibility at a single loading rate. Stepwise loading exhibited the greatest segmental range of motion (ROM) in all loading directions. As loading rate increased, segmental ROM decreased, whereas segmental stiffness and hysteresis both increased; however, the neutral zone remained constant. Continuous-speed tests showed that segmental stiffness and hysteresis are dependent on ROM at voluntary loading rates in flexion-extension.
To predict the torque-rotation response at different loading rates, the model requires knowledge of the segmental flexibility at a single rate and specified temperature, and a scaling parameter. A Bland-Altman analysis showed high coefficients of determination for the predictive model. The present work demonstrates significant changes in spine segment flexibility as a result of loading rate and testing temperature. Loading rate effects can be accounted for using the predictive model, which accurately estimated ROM, neutral zone, stiffness, and hysteresis within the range of voluntary motion. Copyright © 2014 Elsevier Inc. All rights reserved.
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods that extend the conventional chain-like CG model to a CG model with more general topology, along with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Gao, Shan; van 't Klooster, Ronald; Kitslaar, Pieter H; Coolen, Bram F; van den Berg, Alexandra M; Smits, Loek P; Shahzad, Rahil; Shamonin, Denis P; de Koning, Patrick J H; Nederveen, Aart J; van der Geest, Rob J
2017-10-01
The quantification of vessel wall morphology and plaque burden requires vessel segmentation, which is generally performed by manual delineations. The purpose of our work is to develop and evaluate a new 3D model-based approach for carotid artery wall segmentation from dual-sequence MRI. The proposed method segments the lumen and outer wall surfaces including the bifurcation region by fitting a subdivision surface constructed hierarchical-tree model to the image data. In particular, a hybrid segmentation which combines deformable model fitting with boundary classification was applied to extract the lumen surface. The 3D model ensures the correct shape and topology of the carotid artery, while the boundary classification uses combined image information of 3D TOF-MRA and 3D BB-MRI to promote accurate delineation of the lumen boundaries. The proposed algorithm was validated on 25 subjects (48 arteries) including both healthy volunteers and atherosclerotic patients with 30% to 70% carotid stenosis. For both lumen and outer wall border detection, our result shows good agreement between manually and automatically determined contours, with contour-to-contour distance less than 1 pixel as well as Dice overlap greater than 0.87 at all different carotid artery sections. The presented 3D segmentation technique has demonstrated the capability of providing vessel wall delineation for 3D carotid MRI data with high accuracy and limited user interaction. This brings benefits to large-scale patient studies for assessing the effect of pharmacological treatment of atherosclerosis by reducing image analysis time and bias between human observers. © 2017 American Association of Physicists in Medicine.
Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu
2016-12-01
Femur segmentation can be an important tool in orthopedic surgical planning. However, to remove the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low-resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. To achieve this, an adaptation of the active shape model (ASM) technique, based on the statistical shape model (SSM) and local appearance model (LAM) of the femur with a novel initialization method, was used to drive the template mesh deformation to fit the femoral shape in the image in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high-resolution CT image group the average error is less than 1 mm. For the low-resolution image group the results are also accurate, with an average error less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Segmentation and Analysis of Stereophotometric Body Surface Data.
1982-04-01
each anterior superior iliac spine. Anthropometry: study of the physical dimensions of the human body. Articulated Total Body Model: computer... Appendix A (equations A.1 and A.3b) defines the segmenting plane, which passes through given points and has a normal vector with components (n1, n2, n3).
A kinematic model to assess spinal motion during walking.
Konz, Regina J; Fatone, Stefania; Stine, Rebecca L; Ganju, Aruna; Gard, Steven A; Ondra, Stephen L
2006-11-15
A 3-dimensional multi-segment kinematic spine model was developed for noninvasive analysis of spinal motion during walking. Preliminary data from able-bodied ambulators were collected and analyzed using the model. Neither the spine's role during walking nor the effect of surgical spinal stabilization on gait is fully understood. Typically, gait analysis models disregard the spine entirely or regard it as a single rigid structure. Data on regional spinal movements, in conjunction with lower limb data, associated with walking are scarce. KinTrak software (Motion Analysis Corp., Santa Rosa, CA) was used to create a biomechanical model for analysis of 3-dimensional regional spinal movements. Measuring known angles from a mechanical model and comparing them to the calculated angles validated the kinematic model. Spine motion data were collected from 10 able-bodied adults walking at 5 self-selected speeds. These results were compared to data reported in the literature. The uniaxial angles measured on the mechanical model were within 5 degrees of the calculated kinematic model angles, and the coupled angles were within 2 degrees. Regional spine kinematics from able-bodied subjects calculated with this model compared well to data reported by other authors. A multi-segment kinematic spine model has been developed and validated for analysis of spinal motion during walking. By understanding the spine's role during ambulation and the cause-and-effect relationship between spine motion and lower limb motion, preoperative planning may be augmented to restore normal alignment and balance with minimal negative effects on walking.
Zheng, Yalin; Kwong, Man Ting; MacCormick, Ian J. C.; Beare, Nicholas A. V.; Harding, Simon P.
2014-01-01
Capillary non-perfusion (CNP) in the retina is a characteristic feature used in the management of a wide range of retinal diseases. There is no well-established computational tool for assessing the extent of CNP. We propose a novel texture segmentation framework to address this problem. This framework comprises three major steps: pre-processing, unsupervised total variation texture segmentation, and supervised segmentation. It employs a state-of-the-art multiphase total variation texture segmentation model which is enhanced by new kernel-based region terms. The model can be applied to texture and intensity-based multiphase problems. A supervised segmentation step allows the framework to take expert knowledge into account; an AdaBoost classifier with weighted cost coefficients is chosen to tackle the imbalanced-data classification problem. To demonstrate its effectiveness, we applied this framework to 48 images from malarial retinopathy and 10 images from ischemic diabetic maculopathy. The performance of segmentation is satisfactory when compared to a reference standard of manual delineations: accuracy, sensitivity and specificity are 89.0%, 73.0%, and 90.8% respectively for the malarial retinopathy dataset and 80.8%, 70.6%, and 82.1% respectively for the diabetic maculopathy dataset. In terms of region-wise analysis, this method achieved an accuracy of 76.3% (45 out of 59 regions) for the malarial retinopathy dataset and 73.9% (17 out of 26 regions) for the diabetic maculopathy dataset. This comprehensive segmentation framework can quantify capillary non-perfusion in retinopathy from two distinct etiologies, and has the potential to be adopted for wider applications. PMID:24747681
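The pixel-wise accuracy, sensitivity and specificity figures quoted above follow directly from a confusion matrix; a minimal sketch with toy binary masks (not the paper's data):

```python
def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, sensitivity, specificity from two flattened binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    accuracy = (tp + tn) / len(pred)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```

For example, a prediction [1, 1, 0, 0] against ground truth [1, 0, 0, 0] yields accuracy 0.75, sensitivity 1.0, specificity 2/3.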
NASA Astrophysics Data System (ADS)
Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke
2016-04-01
Automated lineament analysis on remotely sensed data requires two general process steps: the identification of neighboring pixels showing high contrast, and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments, defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Relief roughness is any deviation from a flat plane that exceeds a roughness threshold. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the differently oriented straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance of the edge cells. Thus multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative signs of the roughness parameter represent concave relief structures such as valleys; positive signs represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter, highlighting maximum and minimum roughness peaks and representing the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences.
We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds to the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
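The roughness measure as described (elevation difference between a cell and the chord joining two opposite edge cells of its neighborhood, normalized by their horizontal distance) can be sketched compactly. The original tool is in R and uses many chord orientations; this illustrative Python re-statement restricts itself to the four principal directions:

```python
def multiscale_roughness(dem, radii, cell=1.0):
    """Per-cell (min, max) roughness over all radii and the 4 principal directions.

    Roughness = elevation of the center cell minus the midpoint of the chord
    joining two opposite edge cells, divided by the chord's horizontal length.
    Negative values flag concave (valley-like) cells, positive convex (ridge-like).
    dem is a list of equal-length rows; cell is the grid spacing.
    """
    rows, cols = len(dem), len(dem[0])
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = []
            for r in radii:
                for di, dj in dirs:
                    a = (i - r * di, j - r * dj)  # one edge cell
                    b = (i + r * di, j + r * dj)  # opposite edge cell
                    if all(0 <= p < rows for p in (a[0], b[0])) and \
                       all(0 <= q < cols for q in (a[1], b[1])):
                        chord_mid = (dem[a[0]][a[1]] + dem[b[0]][b[1]]) / 2.0
                        dist = 2.0 * r * cell * (2 ** 0.5 if di and dj else 1.0)
                        vals.append((dem[i][j] - chord_mid) / dist)
            out[i][j] = (min(vals), max(vals)) if vals else (0.0, 0.0)
    return out
```

For a 5x5 grid with a one-cell-high ridge along the middle row and radii=[1], the ridge cells get maximum roughness 0.5 (convex) across the ridge and 0.0 along it.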
Filtering and left ventricle segmentation of the fetal heart in ultrasound images
NASA Astrophysics Data System (ADS)
Vargas-Quintero, Lorena; Escalante-Ramírez, Boris
2013-11-01
In this paper, we propose filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise makes the analysis of ultrasound images difficult, filtering becomes a useful step in these types of applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and another based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation task. For the wavelet-based approach, a Bayesian estimator at the subband level is employed for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. Finally, we selected a method based on a deformable model, or "snake," to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.
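The assumed Rayleigh speckle model is easy to simulate; a sketch using inverse-transform sampling and a multiplicative noise model (the wavelet and Hermite filters themselves are not reproduced here):

```python
import math
import random

def rayleigh_sample(sigma, n, rng):
    """Rayleigh deviates via inverse-transform sampling:
    F^{-1}(u) = sigma * sqrt(-2 ln(1 - u)); mean is sigma * sqrt(pi/2)."""
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
            for _ in range(n)]

def add_speckle(image, sigma, rng):
    """Apply multiplicative Rayleigh speckle, the noise model assumed above,
    to an image given as a list of rows of pixel intensities."""
    return [[px * s for px, s in zip(row, rayleigh_sample(sigma, len(row), rng))]
            for row in image]
```

With sigma = 1 the sample mean of the simulated speckle converges to sqrt(pi/2) ≈ 1.2533, which is one way to sanity-check the generator.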
Analysis of gene expression levels in individual bacterial cells without image segmentation.
Kwak, In Hae; Son, Minjun; Hagen, Stephen J
2012-05-11
Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly. Copyright © 2012 Elsevier Inc. All rights reserved.
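The core idea above, fitting the relationship between phase-contrast and fluorescence pixel intensities, can be illustrated with a single-population least-squares fit; the paper's physical model is richer than this sketch, and the function name is illustrative:

```python
def fit_expression_level(phase, fluor):
    """Least-squares fit fluor ~ a*phase + b over paired pixel intensities.

    In this simplified, single-population sketch of the segmentation-free idea,
    the slope a serves as an estimate of the expression level relating cell
    material (phase contrast) to reporter signal (fluorescence).
    """
    n = len(phase)
    mx = sum(phase) / n
    my = sum(fluor) / n
    sxx = sum((x - mx) ** 2 for x in phase)
    sxy = sum((x - mx) * (y - my) for x, y in zip(phase, fluor))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

For pixels with phase values [0, 1, 2, 3] and fluorescence [1, 3, 5, 7], the fit recovers slope 2 and offset 1.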
Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T
2018-03-08
Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. We propose the Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression, in addition to mean structure, to model the probability that a lesion is contained in each voxel. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital, in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded an average Sørensen-Dice coefficient (DSC) of .57 and a partial AUC of .68 calculated with false positive rates up to 1%. This was superior to performance using OASIS and LesionTOADS. The proposed method also performed competitively in the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.
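The Sørensen-Dice coefficient (DSC) used for validation above reduces to a few lines; a minimal sketch over flattened binary masks:

```python
def dice(a, b):
    """Soerensen-Dice coefficient between two flattened binary masks:
    2|A intersect B| / (|A| + |B|); two empty masks count as perfect agreement."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0
```

Two masks that each mark two voxels but overlap in only one give DSC = 0.5.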
NASA Astrophysics Data System (ADS)
Zhang, Jun; Saha, Ashirbani; Zhu, Zhe; Mazurowski, Maciej A.
2018-02-01
Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active as well as a challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown high promise for deep learning-based methods in various segmentation problems. However, these methods usually face the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Also, previous methods cannot efficiently deal with the prevalent class-imbalance problem in tumor segmentation, where the number of voxels in tumor regions is much lower than that in the background area. To address these issues, in this study, we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCN). Our strategy is to first decompose the original difficult problem into several sub-problems and then solve these relatively simpler sub-problems in a hierarchical manner. To precisely identify locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on the nipples. Finally, based on both segmentation probability maps and the identified landmarks, we propose selecting biopsied tumors from all detected tumors via a tumor selection strategy using the pathology location. We validate our MHL method using data from 272 patients, achieving a mean Dice similarity coefficient (DSC) of 0.72 in breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the luminal A subtype when applied to the automatic segmentation and to a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
NASA Astrophysics Data System (ADS)
Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.
2013-04-01
In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.
Analysis of Activity Patterns and Performance in Polio Survivors
2006-10-01
Variables were inspected for asymmetry, long-tailedness and normality; when appropriate, transformations (e.g., a log function) were made. The link-segment model used the thighs and a combined pelvis-HAT segment for our analyses. The ankles were modeled as universal joints, the knees as revolutes, and the hips as spherical joints, with a pin at the CP over the entire stance phase. Stance-phase sagittal knee and frontal hip angles were analyzed.
Garde, Ainara; Dehkordi, Parastoo; Wensley, David; Ansermino, J Mark; Dumont, Guy A
2015-01-01
Obstructive sleep apnea (OSA) disrupts normal ventilation during sleep and can lead to serious health problems in children if left untreated. Polysomnography, the gold standard for OSA diagnosis, is resource intensive and requires a specialized laboratory. Thus, we proposed to use the Phone Oximeter™, a portable device integrating pulse oximetry with a smartphone, to detect OSA events. As a proportion of OSA events occur without oxygen desaturation (defined as SpO2 decreases ≥ 3%), we suggest combining SpO2 and pulse rate variability (PRV) analysis to identify all OSA events and provide a more detailed sleep analysis. We recruited 160 children and recorded pulse oximetry consisting of SpO2 and plethysmography (PPG) using the Phone Oximeter™, alongside standard polysomnography. A sleep technician visually scored all OSA events with and without oxygen desaturation from polysomnography. We divided pulse oximetry signals into 1-min signal segments and extracted several features from SpO2 and PPG analysis in the time and frequency domain. Segments with OSA, especially the ones with oxygen desaturation, presented greater SpO2 variability and modulation reflected in the spectral domain than segments without OSA. Segments with OSA also showed higher heart rate and sympathetic activity through the PRV analysis relative to segments without OSA. PRV analysis was more sensitive than SpO2 analysis for identification of OSA events without oxygen desaturation. Combining SpO2 and PRV analysis enhanced OSA event detection through a multiple logistic regression model. The area under the ROC curve increased from 81% to 87%. Thus, the Phone Oximeter™ might be useful to monitor sleep and identify OSA events with and without oxygen desaturation at home.
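The ≥3% desaturation criterion quoted above can be sketched as a simple event counter over an SpO2 series; the baseline-tracking logic here is illustrative, not the study's clinical scoring protocol:

```python
def count_desaturations(spo2, drop=3.0):
    """Count desaturation events in an SpO2 series (percent saturation).

    An event starts when SpO2 falls at least `drop` percentage points below the
    running pre-event baseline, and ends on recovery to within `drop` of it.
    A simplified sketch of the >=3% criterion; real scoring also uses timing.
    """
    events, baseline, in_event = 0, spo2[0], False
    for v in spo2[1:]:
        if not in_event:
            if baseline - v >= drop:
                in_event = True
                events += 1
            else:
                baseline = max(baseline, v)  # track the recovered baseline
        elif v >= baseline - drop:
            in_event = False
            baseline = v
    return events
```

The series [98, 98, 94, 93, 97, 98, 95, 97] contains two such excursions, so the counter returns 2.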
Optimized efficiency in InP nanowire solar cells with accurate 1D analysis
NASA Astrophysics Data System (ADS)
Chen, Yang; Kivisaari, Pyry; Pistol, Mats-Erik; Anttu, Nicklas
2018-01-01
Semiconductor nanowire arrays are a promising candidate for next generation solar cells due to enhanced absorption and reduced material consumption. However, to optimize their performance, time consuming three-dimensional (3D) opto-electronics modeling is usually performed. Here, we develop an accurate one-dimensional (1D) modeling method for the analysis. The 1D modeling is about 400 times faster than 3D modeling and allows direct application of concepts from planar pn-junctions on the analysis of nanowire solar cells. We show that the superposition principle can break down in InP nanowires due to strong surface recombination in the depletion region, giving rise to an IV-behavior similar to that with low shunt resistance. Importantly, we find that the open-circuit voltage of nanowire solar cells is typically limited by contact leakage. Therefore, to increase the efficiency, we have investigated the effect of high-bandgap GaP carrier-selective contact segments at the top and bottom of the InP nanowire and we find that GaP contact segments improve the solar cell efficiency. Next, we discuss the merit of p-i-n and p-n junction concepts in nanowire solar cells. With GaP carrier selective top and bottom contact segments in the InP nanowire array, we find that a p-n junction design is superior to a p-i-n junction design. We predict a best efficiency of 25% for a surface recombination velocity of 4500 cm s-1, corresponding to a non-radiative lifetime of 1 ns in p-n junction cells. The developed 1D model can be used for general modeling of axial p-n and p-i-n junctions in semiconductor nanowires. This includes also LED applications and we expect faster progress in device modeling using our method.
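The observation that depletion-region surface recombination mimics a low shunt resistance can be illustrated with a textbook single-diode IV sweep, the kind of planar pn-junction concept the 1D analysis builds on; all parameter values below are illustrative, not taken from the paper:

```python
import math

def iv_efficiency(jsc, j0, rsh, p_in=1000.0, vt=0.02585):
    """Sweep J(V) = jsc - j0*(exp(V/vt) - 1) - V/rsh and return
    (efficiency, V at maximum power).

    Units: current densities in A/m^2, rsh in ohm*m^2, p_in in W/m^2,
    vt the thermal voltage in V. A low rsh (standing in here for strong
    depletion-region recombination) degrades the fill factor and efficiency.
    """
    best_p, best_v = 0.0, 0.0
    v = 0.0
    while v < 1.0:
        j = jsc - j0 * (math.exp(v / vt) - 1.0) - v / rsh
        if j <= 0.0:
            break  # past open-circuit voltage
        if j * v > best_p:
            best_p, best_v = j * v, v
        v += 0.001
    return best_p / p_in, best_v
```

With jsc = 300 A/m^2 and j0 = 1e-7 A/m^2, a large shunt resistance gives an efficiency around 14%, while a shunt-dominated cell (tiny rsh) collapses to a few percent, qualitatively the IV degradation described in the abstract.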
Zhao, Y; Zhang, S; Sun, T; Wang, D; Lian, W; Tan, J; Zou, D; Zhao, Y
2013-09-01
To compare the stability of lengthened sacroiliac screw and standard sacroiliac screw for the treatment of unilateral vertical sacral fractures; to provide reference for clinical applications. A finite element model of Tile type C pelvic ring injury (unilateral Denis type II fracture of the sacrum) was produced. The unilateral sacral fractures were fixed with lengthened sacroiliac screw and sacroiliac screw in six different types of models respectively. The translation and angle displacement of the superior surface of the sacrum (in standing position on both feet) were measured and compared. The stability of one lengthened sacroiliac screw fixation in S1 or S2 segment is superior to that of one sacroiliac screw fixation in the same sacral segment. The stability of one lengthened sacroiliac screw fixation in S1 and S2 segments respectively is superior to that of one sacroiliac screw fixation in S1 and S2 segments respectively. The stability of one lengthened sacroiliac screw fixation in S1 and S2 segments respectively is superior to that of one lengthened sacroiliac screw fixation in S1 or S2 segment. The stability of one sacroiliac screw fixation in S1 and S2 segments respectively is markedly superior to that of one sacroiliac screw fixation in S1 or S2 segment. The vertical and rotational stability of lengthened sacroiliac screw fixation and sacroiliac screw fixation in S2 is superior to that of S1. In a finite element model of type C pelvic ring disruption, S1 and S2 lengthened sacroiliac screws should be utilized for the fixation as regularly as possible and the most stable fixation is the combination of the lengthened sacroiliac screws of S1 and S2 segments. Even if lengthened sacroiliac screws cannot be systematically used due to specific conditions, one sacroiliac screw fixation in S1 and S2 segments respectively is recommended. 
No matter which kind of sacroiliac screw is used, if only one screw can be implanted, fixation in the S2 segment is recommended over S1. Experimental study, Level III evidence. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA-board with a feature connector for connecting it to a super video windows framegrabber board, for which a 16-bit large slot must be available. In addition, a VGA-monitor (50 - 70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice.
An example of its practical application illustrates the technique.
Lim, Won Hee; Park, Eun Woo; Chae, Hwa Sung; Kwon, Soon Man; Jung, Hoi-In; Baek, Seung-Hak
2017-06-01
The purpose of this study was to compare the results of two- (2D) and three-dimensional (3D) measurements for the alveolar molding effect in patients with unilateral cleft lip and palate. The sample consisted of 23 unilateral cleft lip and palate infants treated with nasoalveolar molding (NAM) appliance. Dental models were fabricated at initial visit (T0; mean age, 23.5 days after birth) and after alveolar molding therapy (T1; mean duration, 83 days). For 3D measurement, virtual models were constructed using a laser scanner and 3D software. For 2D measurement, 1:1 ratio photograph images of dental models were scanned by a scanner. After setting of common reference points and lines for 2D and 3D measurements, 7 linear and 5 angular variables were measured at the T0 and T1 stages, respectively. Wilcoxon signed rank test and Bland-Altman analysis were performed for statistical analysis. The alveolar molding effect of the maxilla following NAM treatment was inward bending of the anterior part of greater segment, forward growth of the lesser segment, and decrease in the cleft gap in the greater segment and lesser segment. Two angular variables showed difference in statistical interpretation of the change by NAM treatment between 2D and 3D measurements (ΔACG-BG-PG and ΔACL-BL-PL). However, Bland-Altman analysis did not exhibit significant difference in the amounts of change in these variables between the 2 measurements. These results suggest that the data from 2D measurement could be reliably used in conjunction with that from 3D measurement.
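The Bland-Altman agreement analysis used in the study above can be sketched as follows; the paired measurements and the use of 1.96-SD limits of agreement are illustrative assumptions, not the study's data.

```python
# Bland-Altman agreement analysis between two measurement methods,
# as used to compare 2D and 3D model measurements.
# A minimal sketch; the measurement values are invented for illustration.
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (bias, lower, upper) limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired 2D and 3D measurements (mm) of the same variable
m2d = [10.1, 12.3, 9.8, 11.5, 10.9]
m3d = [10.0, 12.6, 9.5, 11.4, 11.1]
bias, lo, hi = bland_altman(m2d, m3d)
```

If zero lies between the limits and they are narrow relative to the measurement scale, the two methods can be used interchangeably, which is the study's criterion for 2D/3D agreement.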
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
An autoregressive, integrated, moving-average (ARIMA) model was used to forecast the demand for construction skills in Hong Kong. This model was based... The forecasting techniques surveyed include decision trees, ARIMA, rule-based forecasting, segmentation forecasting, regression analysis, simulation modeling, input-output models, LP and NLP, and Markovian models. Rule-based forecasting is suited to cases where results are needed as a set of easily interpretable rules.
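As a rough illustration of the ARIMA family named in these fragments, the sketch below fits the simplest autoregressive member, an AR(1) model, by closed-form least squares; the quarterly demand figures are invented and not the report's data.

```python
def fit_ar1(series):
    """Fit x[t] = c + phi * x[t-1] by ordinary least squares (closed form)."""
    x = series[:-1]          # lagged values
    y = series[1:]           # current values
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    return my - phi * mx, phi

def forecast(c, phi, last, steps):
    """Iterate the fitted recurrence to produce point forecasts."""
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Hypothetical quarterly demand for one construction skill (headcount)
demand = [100, 104, 107, 111, 113, 117, 120]
c, phi = fit_ar1(demand)
preds = forecast(c, phi, demand[-1], 2)
```

A full ARIMA model adds differencing and moving-average terms on top of this autoregressive core.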
Algorithmic structural segmentation of defective particle systems: a lithium-ion battery study.
Westhoff, D; Finegan, D P; Shearing, P R; Schmidt, V
2018-04-01
We describe a segmentation algorithm that is able to identify defects (cracks, holes and breakages) in particle systems. This information is used to segment image data into individual particles, where each particle and its defects are identified accordingly. We apply the method to particle systems that appear in Li-ion battery electrodes. First, the algorithm is validated using simulated data from a stochastic 3D microstructure model, where we have full information about defects. This allows us to quantify the accuracy of the segmentation result. Then we show that the algorithm can successfully be applied to tomographic image data from real battery anodes and cathodes, which are composed of particle systems with very different morphological properties. Finally, we show how the results of the segmentation algorithm can be used for structural analysis. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Texture segmentation by genetic programming.
Song, Andy; Ciesielski, Vic
2008-01-01
This paper describes a texture segmentation method using genetic programming (GP), one of the most powerful evolutionary computation algorithms. By choosing an appropriate representation, texture classifiers can be evolved without computing texture features. Because time-consuming feature extraction is avoided, the evolved classifiers enable the development of the proposed texture segmentation algorithm. This GP-based method can achieve a segmentation speed significantly higher than that of conventional methods, and it does not require a human expert to manually construct models for texture feature extraction. An analysis of the evolved classifiers shows that they are not arbitrary: certain textural regularities are captured by these classifiers to discriminate different textures. This study shows GP to be a feasible and powerful approach for texture classification and segmentation, which are generally considered complex vision tasks.
Optimizing Likelihood Models for Particle Trajectory Segmentation in Multi-State Systems.
Young, Dylan Christopher; Scrimgeour, Jan
2018-06-19
Particle tracking offers significant insight into the molecular mechanics that govern the behavior of living cells. The analysis of molecular trajectories that transition between different motive states, such as diffusive, driven and tethered modes, is of considerable importance, with even single trajectories containing significant amounts of information about a molecule's environment and its interactions with cellular structures. Hidden Markov models (HMM) have been widely adopted to perform the segmentation of such complex tracks. In this paper, we show that extensive analysis of hidden Markov model outputs using data derived from multi-state Brownian dynamics simulations can be used both for the optimization of the likelihood models used to describe the states of the system and for characterization of the technique's failure mechanisms. This analysis was made possible by the implementation of a parallelized adaptive direct search algorithm on an Nvidia graphics processing unit. This approach provides critical information for the visualization of HMM failure and the successful design of particle tracking experiments where trajectories contain multiple mobile states. © 2018 IOP Publishing Ltd.
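HMM segmentation of a track, as described above, is conventionally performed with the Viterbi algorithm. The two-state model below (tethered vs. driven, emitting coarse step-size symbols 'S' and 'L') and all its probabilities are illustrative assumptions, not the paper's likelihood models.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Return the most likely state sequence for an observation sequence."""
    # V[s] = best log-probability of any path ending in state s
    V = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []                              # back-pointers, one dict per step
    for o in obs[1:]:
        newV, ptr = {}, {}
        for s in states:
            prev, lp = max(((p, V[p] + log_trans[p][s]) for p in states),
                           key=lambda t: t[1])
            newV[s] = lp + log_emit[s][o]
            ptr[s] = prev
        V, back = newV, back + [ptr]
    # Trace back from the best final state
    last = max(V, key=V.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

lg = math.log
states = ["tethered", "driven"]
start = {"tethered": lg(0.5), "driven": lg(0.5)}
trans = {"tethered": {"tethered": lg(0.9), "driven": lg(0.1)},
         "driven":   {"tethered": lg(0.1), "driven": lg(0.9)}}
emit = {"tethered": {"S": lg(0.9), "L": lg(0.1)},
        "driven":   {"S": lg(0.2), "L": lg(0.8)}}
track = list("SSSLLLLSS")                  # hypothetical symbolized trajectory
labels = viterbi(track, states, start, trans, emit)
```

The sticky transition probabilities (0.9 self-transition) discourage spurious single-frame state switches, which is the usual reason HMMs outperform frame-by-frame classification of such tracks.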
Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A
2009-01-05
Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform density function that is gender-specific. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the more current models used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. Comparisons of the new model to the accepted models were accomplished by determining the error between the models' trunk inertial estimates and that from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy for the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as elderly or individuals from a distinct morphology (e.g. obese). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces needs to be assessed.
Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang
2014-01-01
Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. 
These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images. PMID:24989402
Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang
2014-07-01
Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. 
These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.
Kuwayama, Kenji; Nariai, Maika; Miyaguchi, Hajime; Iwata, Yuko T; Kanamori, Tatsuyuki; Tsujikawa, Kenji; Yamamuro, Tadashi; Segawa, Hiroki; Abe, Hiroko; Iwase, Hirotaro; Inoue, Hiroyuki
2018-07-01
Sleeping aids are often abused in the commission of drug-facilitated crimes. Generally, there is little evidence that a victim ingested a spiked drink unknowingly because the unconscious victim cannot report the situation to the police immediately after the crime occurred. Although conventional segmental hair analysis can estimate the number of months since a targeted drug was ingested, this analysis cannot determine the specific day of ingestion. We recently developed a method of micro-segmental hair analysis using internal temporal markers (ITMs) to estimate the day of drug ingestion. This method was based on volunteer ingestion of ITMs to determine a timescale within individual hair strands, by segmenting a single hair strand at 0.4-mm intervals, corresponding to daily hair growth. This study assessed the ability of this method to estimate the day of ingestion of an over-the-counter sleeping aid, diphenhydramine, which can be easily abused. To model unknowing ingestion of a diphenhydramine-spiked drink, each subject ingested a dose of diphenhydramine, followed by ingestion of two doses of the ITM, chlorpheniramine, 14 days apart. Several hair strands were collected from each subject's scalp several weeks after the second ITM ingestion. Diphenhydramine and ITM were detected at specific regions within individual hair strands. The day of diphenhydramine ingestion was estimated from the distances between the regions and the days of ITM ingestion. The error between estimated and actual ingestion day ranged from -0.1 to 1.9 days regardless of subjects and hair collection times. The total time required for micro-segmental analysis of 96 hair segments (hair length: 3.84 cm) was approximately 2 days and the cost was almost the same as in general drug analysis. This procedure may be applicable to the investigation of crimes facilitated by various drugs. Copyright © 2018 Elsevier B.V. All rights reserved.
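The day-of-ingestion estimate described above amounts to linear interpolation along the hair strand between the two ITM positions with known ingestion days. A minimal sketch: the positions are hypothetical, and the roughly 0.4 mm/day growth rate follows the abstract's segmentation interval.

```python
def estimate_ingestion_day(pos_drug, pos_itm1, pos_itm2, day_itm1, day_itm2):
    """
    Linearly interpolate the ingestion day of a drug from its detected
    position along the hair strand (mm from root) and the positions of two
    internal temporal markers (ITMs) with known ingestion days. Older hair
    lies farther from the root, so larger distances mean earlier days.
    """
    growth = (pos_itm2 - pos_itm1) / (day_itm2 - day_itm1)  # signed mm per day
    return day_itm1 + (pos_drug - pos_itm1) / growth

# Hypothetical detection positions (mm from root): ITM doses on day 0 and
# day 14, with the strand growing ~0.4 mm/day toward the root end.
day = estimate_ingestion_day(pos_drug=8.0, pos_itm1=10.0, pos_itm2=4.4,
                             day_itm1=0, day_itm2=14)
```

Here the drug peak sits 2.0 mm rootward of the day-0 marker, so at 0.4 mm/day the estimated ingestion day is day 5.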
Anatomy guided automated SPECT renal seed point estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar; Kumar, Sailendra
2010-04-01
Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, though the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point location of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much across patients. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed in the centroid coordinates of the organs, comparing the estimates from our approach with ground truth, are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, the ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
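The regression model that maps the bladder centroid to a kidney seed point can be sketched, for a single coordinate, as a least-squares line; the centroid values below are invented for illustration and are not the paper's data.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    return my - b * mx, b

# Hypothetical training data: bladder centroid x vs. left-kidney centroid x
# (pixels) from manually segmented studies; the numbers are made up.
bladder_x = [60, 62, 58, 65, 61]
kidney_x = [42, 45, 40, 48, 44]
a, b = fit_line(bladder_x, kidney_x)

def kidney_seed_x(bx):
    """Predict the left-kidney seed x-coordinate from a new bladder centroid."""
    return a + b * bx
```

In practice one such fit per coordinate and per organ turns the easy bladder segmentation into seed points for the harder kidney ROIs.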
DOT National Transportation Integrated Search
2008-12-01
Parapets placed on bridge deck surfaces, commonly known as barriers, are typically omitted from the structural analysis model for design or load rating. Barriers should not be considered primary structural members because they are designed to withstand...
Skeletal maturity determination from hand radiograph by model-based analysis
NASA Astrophysics Data System (ADS)
Vogelsang, Frank; Kohnen, Michael; Schneider, Hansgerd; Weiler, Frank; Kilbinger, Markus W.; Wein, Berthold B.; Guenther, Rolf W.
2000-06-01
Derived from a model-based segmentation algorithm for hand radiographs proposed in our former work, we now present a method to determine skeletal maturity by automated analysis of regions of interest (ROI). These ROIs, which include the epiphyseal and carpal bones most important for skeletal maturity determination, can be extracted from the radiograph by knowledge-based algorithms.
Physics-based deformable organisms for medical image analysis
NASA Astrophysics Data System (ADS)
Hamarneh, Ghassan; McIntosh, Chris
2005-04-01
Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°; and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
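Step (3), trace segment connection, can be sketched as a pairwise test combining an angle threshold on end directions with a distance threshold on endpoints. The toy 2D segments and the particular threshold values below (60° from the reported 50-70° range, and an arbitrary 0.9-unit distance) are illustrative assumptions.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2D direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def can_connect(seg_a, seg_b, angle_thresh=60.0, dist_thresh=0.9):
    """Decide whether two trace segments (lists of 2D points) should be
    joined: endpoints must be close and end directions nearly aligned."""
    tail, head = seg_a[-1], seg_b[0]
    dist = math.hypot(head[0] - tail[0], head[1] - tail[1])
    dir_a = (seg_a[-1][0] - seg_a[-2][0], seg_a[-1][1] - seg_a[-2][1])
    dir_b = (seg_b[1][0] - seg_b[0][0], seg_b[1][1] - seg_b[0][1])
    return dist <= dist_thresh and angle_between(dir_a, dir_b) <= angle_thresh

# Two nearly collinear trace segments separated by a small gap
a = [(0.0, 0.0), (1.0, 0.1)]
b = [(1.5, 0.15), (2.5, 0.25)]
```

The same test in 3D, applied over the triangulated surface mesh, would merge grown trace segments across gaps left by noise or occlusion.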
Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter
2012-09-01
Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, called TLM-Tracker, allows for flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.
Djoudi, Farid
2013-01-01
Two separate themes are presented in this paper. The first is a graphical modeling approach for human anatomical structures, namely the femur and the tibia. The second is a finite element analysis of stresses, displacements and deformations in prosthetic implants (the femoral implant and the polyethylene insert). The graphical modeling approach comes in two parts. The first is the segmentation of MRI-scanned images, retrieved in DICOM format, for edge detection. In the second part, 3D-CAD models are generated from the results of the segmentation stage. The finite element analysis is done by first extracting the prosthetic implants from the reconstructed 3D-CAD model and then analyzing these implants under objectively determined conditions such as forces, allowed displacements, the materials composing the implant, and the coefficient of friction. The objective of this work is to implement an interface for exchanging data between 2D MRI images obtained from the medical diagnosis of a patient and the 3D-CAD model used in various applications, such as extraction of the implants and stress analysis at the knee joint; it can also serve as an aid to surgery and predict the behavior of the prosthetic implants vis-à-vis the forces acting on the knee joints.
Probabilistic atlas and geometric variability estimation to drive tissue segmentation.
Xu, Hao; Thirion, Bertrand; Allassonnière, Stéphanie
2014-09-10
Computerized anatomical atlases play an important role in medical image analysis. While an atlas usually refers to a standard or mean image also called template, which presumably represents well a given population, it is not enough to characterize the observed population in detail. A template image should be learned jointly with the geometric variability of the shapes represented in the observations. These two quantities will in the sequel form the atlas of the corresponding population. The geometric variability is modeled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model based on dense deformable templates that represents several tissue types observed in medical images. Our atlas contains both an estimation of probability maps of each tissue (called class) and the deformation metric. We use a stochastic algorithm for the estimation of the probabilistic atlas given a dataset. This atlas is then used for atlas-based segmentation method to segment the new images. Experiments are shown on brain T1 MRI datasets. Copyright © 2014 John Wiley & Sons, Ltd.
de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano
2015-06-01
Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or neurons' clusters, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures enable us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neurons' ensemble into a complex network, and drives the formulation of a phenomenological model that is able to describe qualitatively the overall scenario observed during the culture growth. © 2014 International Society for Advancement of Cytometry.
Kim, Ho-Joong; Kang, Kyoung-Tak; Park, Sung-Cheol; Kwon, Oh-Hyo; Son, Juhyun; Chang, Bong-Soon; Lee, Choon-Ki; Yeom, Jin S; Lenke, Lawrence G
2017-05-01
There have been conflicting results on the surgical outcome of lumbar fusion surgery using two different techniques: robot-assisted pedicle screw fixation and the conventional freehand technique. In addition, there have been no studies about the biomechanical issues between both techniques. This study aimed to investigate the biomechanical properties in terms of stress at adjacent segments using the robot-assisted pedicle screw insertion technique (robot-assisted, minimally invasive posterior lumbar interbody fusion, Rom-PLIF) and the freehand technique (conventional, freehand, open approach, posterior lumbar interbody fusion, Cop-PLIF) for instrumented lumbar fusion surgery. This is an additional post-hoc analysis using patient-specific finite element (FE) models. The sample is composed of patients with degenerative lumbar disease. Intradiscal pressure and facet contact force are the outcome measures. Patients were randomly assigned to undergo an instrumented PLIF procedure using a Rom-PLIF (37 patients) or a Cop-PLIF (41), respectively. Five patients in each group were selected using a simple random sampling method after operation, and 10 preoperative and postoperative lumbar spines were modeled from preoperative high-resolution computed tomography of the 10 patients using the same method as for a validated lumbar spine model. Under four pure moments of 7.5 Nm, the changes in intradiscal pressure and facet joint contact force at the proximal adjacent segment following fusion surgery were analyzed and compared with preoperative states. The representativeness of the random samples was verified. Both groups showed significant increases in postoperative intradiscal pressure at the proximal adjacent segment under the four moments, compared with the preoperative state.
The Cop-PLIF models demonstrated significantly higher percent increments of intradiscal pressure at proximal adjacent segments under extension, lateral bending, and torsion moments than the Rom-PLIF models (p=.032, p=.008, and p=.016, respectively). Furthermore, the percent increment of facet contact force was significantly higher in the Cop-PLIF models under extension and torsion moments than in the Rom-PLIF models (p=.016 under both extension and torsion moments). The present study showed the clinical application of subject-specific FE analysis in the spine. Even though there was biomechanical superiority of the robot-assisted insertions in terms of alleviation of stress increments at adjacent segments after fusion, cautious interpretation is needed because of the small sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
Smith, Rachel A.; Greenberg, Marisa; Parrott, Roxanne L.
2014-01-01
With a growing interest in using genetic information to motivate young adults’ health behaviors, audience segmentation is needed for effective campaign design. Using latent class analysis, this study identifies segments based on young adults’ (N = 327) beliefs about genetic threats to their health and personal efficacy over genetic influences on their health. A four-class model was identified. The model indicators fit the risk perception attitude framework (Rimal & Real, 2003), but the covariates (e.g., current health behaviors) did not. In addition, opinion leader qualities covaried with one profile: those in this profile engaged in fewer preventative behaviors and more dangerous treatment options, and also liked to persuade others, making them a particularly salient group for campaign efforts. The implications for adult-onset disorders, like alpha-1 antitrypsin deficiency are discussed. PMID:24111749
Morphometrics and inertial properties in the body segments of chimpanzees (Pan troglodytes)
Schoonaert, Kirsten; D’Août, Kristiaan; Aerts, Peter
2007-01-01
Inertial characteristics and dimensions of the body and body segments form an integral part of a biomechanical analysis of motion. In primate studies, however, segment inertial parameters of non-human hominoids are scarce and often obtained using varying techniques. Therefore, the principal aim of this study was to expand the existing chimpanzee inertial property data set using a non-invasive measuring technique. We also considered age- and sex-related differences within our sample. By means of a geometric model based on Crompton et al. (1996; Am J Phys Anthropol 99, 547–570), we generated inertial properties using external segment length and diameter measurements of 53 anaesthetized chimpanzees (Pan troglodytes). We report absolute inertial parameters for immature and mature subjects and for males and females separately. Proportional data were computed to allow the comparison between age classes and sex classes. In addition, we calculated whole limb inertial properties and we discuss their potential biomechanical consequences. We found no significant differences between the age classes in the proportional data, except for hand and foot measures, where juveniles exhibit relatively longer and heavier distal segments than adults. Furthermore, most sex-related differences can be directly attributed to the higher absolute segment masses in male chimpanzees, resulting in higher moments of inertia. Additionally, males tend to have longer upper limbs than females. Using the proportional data, we also discuss the general inertial properties of the chimpanzee. The described segment inertial parameters of males and females, and of the two age classes, represent a valuable data set ready for use in a range of biomechanical locomotor models. These models offer great potential for improving our understanding of early hominin locomotor patterns. PMID:17451529
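As a rough illustration of how a geometric model yields inertial properties from external measurements, the sketch below treats a limb segment as a solid conical frustum. This is a generic simplification, not the authors' model: the frustum formulas are standard solid-geometry results, and the uniform density value is an assumed placeholder.

```python
import math

def frustum_properties(length, d_prox, d_dist, density=1000.0):
    """Mass, centre of mass (from the proximal end) and moment of inertia
    about the longitudinal axis of a limb segment modelled as a solid
    conical frustum. Lengths/diameters in m, density in kg/m^3 (assumed).
    """
    R, r = d_prox / 2.0, d_dist / 2.0
    volume = math.pi * length * (R * R + R * r + r * r) / 3.0
    mass = density * volume
    # centroid of a frustum, measured from the larger (proximal) face
    com = length * (R * R + 2 * R * r + 3 * r * r) / (4 * (R * R + R * r + r * r))
    if abs(R - r) < 1e-12:
        # cylinder limit: I = m R^2 / 2 about the symmetry axis
        i_axial = 0.5 * mass * R * R
    else:
        # moment of inertia of a frustum about its symmetry axis
        i_axial = (3.0 * mass / 10.0) * (R ** 5 - r ** 5) / (R ** 3 - r ** 3)
    return mass, com, i_axial
```

For a cylinder (equal diameters) this reduces to the familiar m·R²/2 about the long axis, and for a cone (zero distal diameter) the centre of mass sits a quarter of the length from the proximal face.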
NASA Astrophysics Data System (ADS)
Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.
2014-05-01
Many important Cultural Heritage sites have been studied over long periods of time by different researchers, with different technical equipment, methods, and intentions. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform bringing spatial and non-spatial databases together and providing visualization and analysis tools. The 3D components of the platform, in particular, use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. The schema is further implemented in a spatial database which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As a service for the delivery of the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and to deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).
Clayden, Jonathan D; Storkey, Amos J; Muñoz Maniega, Susana; Bastin, Mark E
2009-04-01
This work describes a reproducibility analysis of scalar water diffusion parameters, measured within white matter tracts segmented using a probabilistic shape modelling method. In common with previously reported neighbourhood tractography (NT) work, the technique optimises seed point placement for fibre tracking by matching the tracts generated using a number of candidate points against a reference tract, which is derived from a white matter atlas in the present study. No direct constraints are applied to the fibre tracking results. An Expectation-Maximisation algorithm is used to fully automate the procedure, and make dramatically more efficient use of data than earlier NT methods. Within-subject and between-subject variances for fractional anisotropy and mean diffusivity within the tracts are then separated using a random effects model. We find test-retest coefficients of variation (CVs) similar to those reported in another study using landmark-guided single seed points, and subject-to-subject CVs similar to those of a constraint-based multiple-ROI method. We conclude that our approach is at least as effective as other methods for tract segmentation using tractography, whilst also having some additional benefits, such as its provision of a goodness-of-match measure for each segmentation.
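The separation of within- and between-subject variance described above can be illustrated with a balanced one-way random-effects decomposition. This is a generic sketch, not the authors' code; the data layout (subjects × repeated scans) and the metric are assumed for illustration.

```python
import numpy as np

def variance_components(fa):
    """Within- and between-subject coefficients of variation for a tract
    metric (e.g. mean fractional anisotropy), from a balanced one-way
    random-effects ANOVA decomposition.

    fa: array of shape (n_subjects, n_scans), repeated measurements
    of the same tract metric per subject.
    """
    n_subj, n_scan = fa.shape
    grand = fa.mean()
    subj_means = fa.mean(axis=1)
    # mean squares of the one-way random-effects model
    ms_within = ((fa - subj_means[:, None]) ** 2).sum() / (n_subj * (n_scan - 1))
    ms_between = n_scan * ((subj_means - grand) ** 2).sum() / (n_subj - 1)
    var_within = ms_within
    # method-of-moments estimate; clipped at zero if negative
    var_between = max((ms_between - ms_within) / n_scan, 0.0)
    cv_within = np.sqrt(var_within) / grand
    cv_between = np.sqrt(var_between) / grand
    return cv_within, cv_between
```

The within-subject CV here plays the role of the test-retest CV reported in the abstract, and the between-subject CV that of the subject-to-subject CV.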
Segmentation of time series with long-range fractal correlations.
Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P
2012-06-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.
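The core idea above, judging a candidate change-point against correlated rather than i.i.d. surrogates, can be sketched as follows. This is a simplified illustration, not the authors' algorithm: spectral synthesis of power-law noise stands in for their fractional-noise reference, and a maximum two-sample t statistic stands in for their segmentation criterion.

```python
import numpy as np

def fractional_noise(n, beta, rng):
    """Spectral synthesis of noise with power spectrum S(f) ~ f**(-beta)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (x - x.mean()) / x.std()

def max_t_statistic(x):
    """Largest Welch t statistic over all candidate split points."""
    best = 0.0
    for i in range(20, len(x) - 20):  # enforce a minimum segment length
        a, b = x[:i], x[i:]
        s = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        best = max(best, abs(a.mean() - b.mean()) / s)
    return best

def significant_cut(x, beta, n_surr=200, alpha=0.05, seed=0):
    """Accept a cut only if its t statistic beats the (1 - alpha) quantile
    of maxima obtained from correlated (fractional-noise) surrogates,
    so heterogeneity induced purely by correlations is not flagged."""
    rng = np.random.default_rng(seed)
    t_obs = max_t_statistic(x)
    t_surr = [max_t_statistic(fractional_noise(len(x), beta, rng))
              for _ in range(n_surr)]
    return t_obs > np.quantile(t_surr, 1.0 - alpha)
```

Using a correlated reference raises the acceptance threshold relative to an i.i.d. reference, which is exactly what suppresses the oversegmentation the abstract describes.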
NASA Astrophysics Data System (ADS)
Kirschner, Matthias; Wesarg, Stefan
2011-03-01
Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability for fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-Nearest Neighbour (kNN)-classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
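The Distance From Feature Space concept referred to above measures how far a shape lies outside the learned PCA subspace. A minimal generic sketch (plain PCA residual norm, not the paper's full energy model):

```python
import numpy as np

def dffs(shapes_train, shape, n_modes):
    """Distance From Feature Space: norm of the residual of a shape
    vector after projection onto the first n_modes PCA components of
    the training set. Shapes are flattened landmark vectors."""
    mean = shapes_train.mean(axis=0)
    X = shapes_train - mean
    # PCA via SVD of the centred training shapes
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    P = vt[:n_modes]            # (n_modes, dim) principal axes
    d = shape - mean
    b = P @ d                   # in-subspace shape parameters
    residual = d - P.T @ b      # component outside the subspace
    return np.linalg.norm(residual)
```

A shape energy built on this quantity can penalize, rather than forbid, departures from the subspace, which is how over-restrictive hard constraints are relaxed.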
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using novel approach of image enhancement and coefficient of variation of multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods show that our methodology outperforms them with an increase of 4-9% in segmentation accuracy with maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
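The coefficient of variation over a multi-scale Gaussian stack, used above to separate cytoplasm from background, can be sketched as below. This is a minimal illustration of the idea only: the scale set is an assumption, and scipy's `gaussian_filter` stands in for whatever implementation the authors used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_cv(image, sigmas=(1, 2, 4, 8)):
    """Per-pixel coefficient of variation across a Gaussian scale-space.

    Pixels whose intensity varies strongly across smoothing scales
    (high CV) tend to lie in textured cytoplasm rather than flat
    background, so thresholding this map separates the two.
    """
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return std / (mean + 1e-12)  # small epsilon guards empty regions
```

A simple threshold on the returned map then gives a foreground mask, to which the outline-learning step of the paper could be applied.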
PMID:24267488
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, readily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an attractive alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
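The pairwise-constraint idea behind a Rand-based fusion energy can be illustrated by scoring how well one candidate label field satisfies the same-label/different-label votes of a set of segmentations. This is a brute-force sketch for tiny label fields, not the paper's optimized MRF formulation.

```python
import numpy as np
from itertools import combinations

def rand_agreement(candidate, segmentations):
    """Mean pairwise (Rand-style) agreement between a candidate label
    field and a set of segmentations of the same image.

    Every input segmentation votes, for each pixel pair, on whether the
    pair should share a label; the score is the fraction of votes the
    candidate satisfies. A fusion scheme of this kind seeks the label
    field maximizing this score (equivalently, minimizing the
    corresponding pairwise Gibbs energy).
    """
    c = candidate.ravel()
    votes = agree = 0
    for seg in segmentations:
        s = seg.ravel()
        for i, j in combinations(range(len(c)), 2):
            votes += 1
            if (c[i] == c[j]) == (s[i] == s[j]):
                agree += 1
    return agree / votes
```

Because only same/different relations are compared, the score is invariant to how either labeling names its regions.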
Hastings, Mary K; Woodburn, James; Mueller, Michael J; Strube, Michael J; Johnson, Jeffrey E; Beckert, Krista S; Stein, Michelle L; Sinacore, David R
2014-01-01
Diabetic foot deformity onset and progression may be associated with abnormal foot and ankle motion. The modified Oxford multi-segmental foot model allows kinematic assessment of inter-segmental foot motion. However, there are insufficient anatomical landmarks to accurately represent the alignment of the hindfoot and forefoot segments during model construction. This is most notable in the sagittal plane, which is referenced parallel to the floor: this allows comparison of inter-segmental excursion but does not capture the important sagittal hindfoot-to-forefoot deformity associated with diabetic foot disease, and can potentially underestimate true kinematic differences. The purpose of the study was to compare walking kinematics using local coordinate systems derived from the modified Oxford model and from a radiographically directed model which incorporated individual calcaneal and 1st metatarsal declination (pitch) angles for the hindfoot and forefoot. We studied twelve participants in each of the following groups: (1) diabetes mellitus, peripheral neuropathy, and medial column foot deformity (DMPN+), (2) DMPN without medial column deformity (DMPN-), and (3) age- and weight-matched controls. The modified Oxford model coordinate system did not identify differences between groups in the initial, peak, or final positions, or in the excursion, of the hindfoot relative to the shank or of the forefoot relative to the hindfoot (dorsiflexion/plantarflexion) during walking. The radiographic coordinate system identified the DMPN+ group as having initial, peak, and final positions of the forefoot relative to the hindfoot that were more dorsiflexed (lower arch phenotype) than in the DMPN- group (p<.05). Use of radiographic alignment in kinematic modeling of those with foot deformity reveals segmental motion occurring about an alignment indicative of a lower arch. Copyright © 2014 Elsevier B.V. All rights reserved.
Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor
2017-05-12
Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling for hyperstacks, 3D reconstruction and batch processing facilitates analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for, e.g., studies involving model membranes and surfactant-coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with environmentally sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, offering a reliable method for membrane segmentation and requiring no programming ability.
The Spectral Imaging Toolbox can be downloaded from https://uk.mathworks.com/matlabcentral/fileexchange/62617-spectral-imaging-toolbox .
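The GP parameter at the core of such analyses is a simple per-pixel ratio of two spectral channels. A minimal sketch follows (not the toolbox's code; the channel roles, the ordered channel near 440 nm and the disordered channel near 490 nm for Laurdan, are the conventional assignment and are assumed here):

```python
import numpy as np

def gp_map(i_blue, i_red, mask=None):
    """Generalized polarization map from two spectral channels.

    For Laurdan-type probes, i_blue is the ordered-phase channel
    (~440 nm) and i_red the disordered-phase channel (~490 nm);
    GP = (I_b - I_r) / (I_b + I_r) ranges from -1 (fluid, hydrated)
    to +1 (ordered, tightly packed). Zero-signal pixels become NaN.
    """
    i_blue = i_blue.astype(float)
    i_red = i_red.astype(float)
    total = i_blue + i_red
    gp = np.where(total > 0,
                  (i_blue - i_red) / np.where(total > 0, total, 1.0),
                  np.nan)
    if mask is not None:
        gp = np.where(mask, gp, np.nan)  # e.g. a membrane segmentation mask
    return gp
```

Passing a segmentation mask restricts the GP statistics to membrane pixels, mirroring the segmentation-then-GP workflow the abstract describes.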
Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.
2005-01-01
NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
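For the special case of pure translation, heading estimation from optic flow reduces to locating the FOE, and a linear least-squares method of the general kind the authors analyze can recover it from the constraint that each flow vector is radial about the FOE. A generic sketch (variable names and the noise-free radial-flow setup are assumptions for illustration):

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus-of-expansion estimate from a radial flow field.

    Under pure forward translation each flow vector points away from the
    FOE, so (p - foe) and v are parallel and their 2D cross product is
    zero: (x - fx) * vy - (y - fy) * vx = 0. Each flow vector therefore
    gives one linear equation in the FOE coordinates (fx, fy).
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    A = np.column_stack([v, -u])   # coefficients of (fx, fy)
    b = x * v - y * u
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

An IMO violates the radial constraint and biases this estimate, which is why segmenting the IMO out first (using the cues listed above) restores robustness.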
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the intensity of each pixel is assumed to follow a Gamma mixture model with the parameters of the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of one sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results on simulated and real SAR images.
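The regional dissimilarity described above, a negative log Gamma likelihood summed over a sub-region, can be sketched as follows. This is a hard-assignment simplification of the fuzzy scheme, and the shape/scale parameterization of the Gamma density is an assumption.

```python
import math
import numpy as np

def gamma_nll(pixels, shape, scale):
    """Negative log-likelihood of pixel intensities under a Gamma density:
    the pixel-to-cluster dissimilarity, summed over a Voronoi sub-region."""
    x = np.asarray(pixels, dtype=float)
    logpdf = ((shape - 1.0) * np.log(x) - x / scale
              - shape * math.log(scale) - math.lgamma(shape))
    return -logpdf.sum()

def assign_region(pixels, clusters):
    """Give a whole sub-region the label of the cluster minimizing the
    regional dissimilarity (the hard limit of the fuzzy assignment).

    clusters: list of (shape, scale) Gamma parameters per cluster.
    """
    costs = [gamma_nll(pixels, k, th) for k, th in clusters]
    return int(np.argmin(costs))
```

Operating on whole sub-regions rather than single pixels is what averages out speckle: one noisy pixel cannot flip the label of its region.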
The open for business model of the bithorax complex in Drosophila.
Maeda, Robert K; Karch, François
2015-09-01
After nearly 30 years of effort, Ed Lewis published his 1978 landmark paper in which he described the analysis of a series of mutations that affect the identity of the segments that form along the anterior-posterior (AP) axis of the fly (Lewis 1978). The mutations behaved in a non-canonical fashion in complementation tests, forming what Ed Lewis called a "pseudo-allelic" series. Because of this, he never thought that the mutations represented segment-specific genes. As all of these mutations were grouped to a particular area of the Drosophila third chromosome, the locus became known as the bithorax complex (BX-C). One of the key findings of Lewis' article was that it revealed for the first time, to a wide scientific audience, that there was a remarkable correlation between the order of the segment-specific mutations along the chromosome and the order of the segments they affected along the AP axis. In Ed Lewis' eyes, the mutants he discovered affected "segment-specific functions" that were sequentially activated along the chromosome as one moves from anterior to posterior along the body axis (the colinearity concept now cited in elementary biology textbooks). The nature of the "segment-specific functions" started to become clear when the BX-C was cloned through the pioneering chromosomal walk initiated in the mid 1980s by the Hogness and Bender laboratories (Bender et al. 1983a; Karch et al. 1985). Through this molecular biology effort, and along with genetic characterizations performed by Gines Morata's group in Madrid (Sanchez-Herrero et al. 1985) and Robert Whittle's in Sussex (Tiong et al. 1985), it soon became clear that the whole BX-C encoded only three protein-coding genes (Ubx, abd-A, and Abd-B). Later, immunostaining against the Ubx protein hinted that the segment-specific functions could, in fact, be cis-regulatory elements regulating the expression of the three protein-coding genes.
In 1987, Peifer, Karch, and Bender proposed a comprehensive model of the functioning of the BX-C, in which the "segment-specific functions" appear as segment-specific enhancers regulating Ubx, abd-A, or Abd-B (Peifer et al. 1987). Key to their model was that the segmental address of these enhancers was not an inherent ability of the enhancers themselves, but was determined by the chromosomal location in which they lay. In their view, the sequential activation of the segment-specific functions resulted from the sequential opening of chromatin domains along the chromosome as one moves from anterior to posterior. This model soon became known as the open for business model. While the open for business model is quite easy to visualize at a conceptual level, molecular evidence to validate this model has been missing for almost 30 years. The recent publication describing the outstanding, joint effort from the Bender and Kingston laboratories now provides the missing proof to support this model (Bowman et al. 2014). The purpose of this article is to review the open for business model and take the reader through the genetic arguments that led to its elaboration.
A new fractional order derivative based active contour model for colon wall segmentation
NASA Astrophysics Data System (ADS)
Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong
2018-02-01
Segmentation of the colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around the colon wall, accurate segmentation of the boundary of both the inner and outer wall is very challenging. In this paper, based on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images were automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional-order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the region-based Chan-Vese model is incorporated as an energy term so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional-order derivative energy term is developed to preserve low-frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising toward automatically segmenting the colon wall, thus facilitating computer-aided detection of initial colonic polyp candidates via CTC.
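The region-based Chan-Vese term incorporated into the model penalizes intensity variance inside and outside the contour. A minimal sketch of that energy (the level-set sign convention and unit weights are assumptions; this is the standard term, not the paper's full model):

```python
import numpy as np

def chan_vese_energy(image, phi, lam1=1.0, lam2=1.0):
    """Region (Chan-Vese) fitting energy: phi > 0 marks pixels inside
    the contour. The energy is the weighted sum of squared deviations
    of each region's pixels from that region's mean intensity, so it is
    minimized when the contour separates two homogeneous regions."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    return (lam1 * ((image[inside] - c1) ** 2).sum()
            + lam2 * ((image[~inside] - c2) ** 2).sum())
```

In a full active-contour scheme this term is minimized jointly with the edge/gradient and (here, fractional-order) regularization terms.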
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.
2008-01-01
The structural analyses described in the present report were performed in support of the NASA Engineering and Safety Center (NESC) Critical Initial Flaw Size (CIFS) assessment for the ARES I-X Upper Stage Simulator (USS) common shell segment. The structural analysis effort for the NESC assessment had three thrusts: shell buckling analyses; detailed stress analyses of the single-bolt joint test; and stress analyses of two-segment 10-degree-wedge models for the peak axial tensile running load. Elasto-plastic, large-deformation simulations were performed. Stress analysis results indicated that the stress levels were well below the material yield stress for the bounding axial tensile design load. This report also summarizes the analyses and results from parametric studies on modeling the shell-to-gusset weld, flange-surface mismatch, bolt preload, and washer-bearing-surface modeling. These analysis models were used to generate the stress levels specified for the fatigue crack growth assessment using the design load with a factor of safety.
Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity
Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin
2016-01-01
An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression of the capacitance value is derived by solving a Laplace equation with a Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and the contribution of each to the capacitance value of the capacitor is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental result shows that the model (whose electrode-gap position is 10 mm from the electrode center) realizes a high sensitivity: 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). These findings accommodate a variety of measurement requirements and inform the optimization of the structure in practical designs. PMID:26805844
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
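Given the explicitly simple potential mentioned above, the dipole segment model can be evaluated in closed form: the two point masses contribute Newtonian terms and the homogeneous segment a logarithmic term. A sketch follows (the geometry, segment along the x-axis centred at the origin, and the parameter names are assumptions for illustration; the segment potential is the standard closed form for a homogeneous finite rod):

```python
import numpy as np

def dipole_segment_potential(r, m1, m2, m_seg, L, G=6.674e-11):
    """Gravitational potential of the dipole segment model: a homogeneous
    straight segment of mass m_seg and length L along the x-axis, centred
    at the origin, with point masses m1 and m2 at its two endpoints."""
    p1 = np.array([-L / 2.0, 0.0, 0.0])
    p2 = np.array([L / 2.0, 0.0, 0.0])
    r = np.asarray(r, dtype=float)
    d1 = np.linalg.norm(r - p1)   # distance to each endpoint
    d2 = np.linalg.norm(r - p2)
    u_points = -G * (m1 / d1 + m2 / d2)
    # closed-form potential of a homogeneous finite segment (linear
    # density m_seg / L), valid off the segment itself
    u_seg = -(G * m_seg / L) * np.log((d1 + d2 + L) / (d1 + d2 - L))
    return u_points + u_seg
```

Far from the body the expression tends to the point-mass potential of the total mass, which is one quick sanity check when fitting the model parameters to an asteroid's exterior field.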
Lithospheric buckling and intra-arc stresses: A mechanism for arc segmentation
NASA Technical Reports Server (NTRS)
Nelson, Kerri L.
1989-01-01
Comparison of segment development in a number of arcs has shown that consistent relationships between segmentation, volcanism, and variable stresses exist. Researchers successfully modeled these relationships using the conceptual model of lithospheric buckling of Yamaoka et al. (1986; 1987). Lithospheric buckling (deformation) provides the needed mechanism to explain segmentation phenomena: offsets in volcanic fronts, distribution of calderas within segments, variable segment stresses, and the chemical diversity seen between segment boundary and segment interior magmas.
Geospatial Characterization of Fluvial Wood Arrangement in a Semi-confined Alluvial River
NASA Astrophysics Data System (ADS)
Martin, D. J.; Harden, C. P.; Pavlowsky, R. T.
2014-12-01
Large woody debris (LWD) has become universally recognized as an integral component of fluvial systems, and as a result, has become increasingly common as a river restoration tool. However, "natural" processes of wood recruitment and the subsequent arrangement of LWD within the river network are poorly understood. This research used a suite of spatial statistics to investigate longitudinal arrangement patterns of LWD in a low-gradient, Midwestern river. First, a large-scale GPS inventory of LWD, performed on the Big River in the eastern Missouri Ozarks, resulted in over 4,000 logged positions of LWD along seven river segments that covered nearly 100 km of the 237 km river system. A global Moran's I analysis indicates that LWD density is spatially autocorrelated and displays a clustering tendency within all seven river segments (P-value range = 0.000 to 0.054). A local Moran's I analysis identified specific locations along the segments where clustering occurs and revealed that, on average, clusters of LWD density (high or low) spanned 400 m. Spectral analyses revealed that, in some segments, LWD density is spatially periodic. Two segments displayed strong periodicity, while the remaining segments displayed varying degrees of noisiness. Periodicity showed a positive association with gravel bar spacing and meander wavelength, although there were insufficient data to statistically confirm the relationship. A wavelet analysis was then performed to investigate periodicity relative to location along the segment. The wavelet analysis identified significant (α = 0.05) periodicity at discrete locations along each of the segments. Those reaches yielding strong periodicity showed stronger relationships between LWD density and the geomorphic/riparian independent variables tested. Analyses consistently identified valley width and sinuosity as being associated with LWD density. 
The results of these analyses contribute a new perspective on the longitudinal distribution of LWD in a river system, which should help identify physical and/or riparian control mechanisms of LWD arrangement and support the development of models of LWD arrangement. Additionally, the spatial statistical tools presented here have proven valuable for identifying longitudinal patterns in river system components.
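As a minimal illustration of the global Moran's I statistic used above (not the authors' implementation), the sketch below computes I for LWD densities binned along a river segment, with binary contiguity weights linking each bin to its immediate up- and downstream neighbors:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x with spatial weights matrix w
    (w[i, j] > 0 when sites i and j are neighbors, zero diagonal)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()   # cross-products of neighboring deviations
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

def chain_weights(n):
    """Binary contiguity weights for n bins along a river: each bin
    neighbors the one immediately upstream and downstream."""
    w = np.zeros((n, n))
    for i in range(n - 1):
        w[i, i + 1] = w[i + 1, i] = 1.0
    return w
```

Clustered densities (runs of high values followed by runs of low values) yield a strongly positive I, while alternating values yield a negative I, matching the clustering tendency reported in the abstract.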
Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Huang, Wenhua; Li, Jianyi
2016-08-01
It is a difficult and frustrating task for young surgeons and medical students to understand the anatomy of hepatic segments. We tried to develop an optimal 3D printing model of hepatic segments as a teaching aid to improve the teaching of hepatic segments. A fresh human cadaveric liver without hepatic disease was CT scanned. After 3D reconstruction, three types of 3D computer models of hepatic structures were designed and 3D printed as models of hepatic segments without parenchyma (type 1), with transparent parenchyma (type 2), and hepatic ducts with segmental partitions (type 3). These models were evaluated by six experts using a five-point Likert scale. Ninety-two medical freshmen were randomized into four groups to learn hepatic segments with the aid of the three types of models and a traditional anatomic atlas (TAA). Their results on two quizzes were compared to evaluate the teaching effects of the four methods. Three types of models were successfully produced that displayed the structures of hepatic segments. By the experts' evaluation, the type 3 model was better than the type 1 and 2 models in anatomical condition, the type 2 and 3 models were better than the type 1 model in tactility, and the type 3 model was better than the type 1 model in overall satisfaction (P < 0.05). The first quiz revealed that the type 1 model was better than the type 2 model and TAA, while the type 3 model was better than type 2 and TAA in teaching effects (P < 0.05). The second quiz found that the type 1 model was better than TAA, while the type 3 model was better than the type 2 model and TAA regarding teaching effects (P < 0.05). Only the TAA group had a significant decline between the two quizzes (P < 0.05). The model with segmental partitions proves to be optimal, because it can best improve anatomical teaching about hepatic segments.
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition, and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features are worth more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraints of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect in non-salient background areas.
Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and enables priority control over the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction, and other tasks to verify its validity; all applications showed good results.
NASA Astrophysics Data System (ADS)
Jiang, Zhen-Yu; Li, Lin; Huang, Yi-Fan
2009-07-01
The segmented mirror telescope is widely used. The aberrations of segmented mirror systems differ from those of single mirror systems. This paper uses Fourier optics theory to analyse the Zernike aberrations of segmented mirror systems and concludes that they obey the linearity theorem. The design of a segmented space telescope and its segmentation schemes are discussed, and an optical model is constructed. A computer simulation experiment is performed with this optical model to verify the suppositions, and the experimental results confirm the correctness of the model.
Determination of human coronary artery composition by Raman spectroscopy.
Brennan, J F; Römer, T J; Lees, R S; Tercyak, A M; Kramer, J R; Feld, M S
1997-07-01
We present a method for in situ chemical analysis of human coronary artery using near-infrared Raman spectroscopy. It is rapid and accurate and does not require tissue removal; small volumes, approximately 1 mm³, can be sampled. This methodology is likely to be useful as a tool for intravascular diagnosis of artery disease. Human coronary artery segments were obtained from nine explanted recipient hearts within 1 hour of heart transplantation. Minces from one or more segments were obtained through grinding in a mortar and pestle containing liquid nitrogen. Artery segments and minces were excited with 830 nm near-infrared light, and Raman spectra were collected with a specially designed spectrometer. A model was developed to analyze the spectra and quantify the amounts of cholesterol, cholesterol esters, triglycerides and phospholipids, and calcium salts present. The model provided excellent fits to spectra from the artery segments, indicating its applicability to intact tissue. In addition, the minces were assayed chemically for lipid and calcium salt content, and the results were compared. The relative weights obtained using the Raman technique agreed with those of the standard assays within a few percentage points. The chemical composition of coronary artery can be quantified accurately with Raman spectroscopy. This opens the possibility of using histochemical analysis to predict acute events such as plaque rupture, to follow the progression of disease, and to select appropriate therapeutic interventions.
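The chemometric model described above fits a measured spectrum as a combination of reference component spectra. A rough least-squares stand-in (not the paper's model; clipping negative coefficients is cruder than a true non-negative fit) might look like:

```python
import numpy as np

def unmix(spectrum, components):
    """Least-squares decomposition of a measured spectrum into a
    non-negative combination of reference component spectra (the rows
    of `components`), returned as relative weights."""
    coef, *_ = np.linalg.lstsq(components.T, spectrum, rcond=None)
    coef = np.clip(coef, 0.0, None)  # crude non-negativity constraint
    return coef / coef.sum()
```

For a spectrum that really is a mixture of the reference components, the recovered weights match the mixing fractions, which mirrors the few-percentage-point agreement with standard assays reported here.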
Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population
Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel
2002-01-01
A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
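The dynamic linear model used above is beyond a short sketch, but the flavor of "probability of a positive growth rate" can be illustrated with a much simpler stand-in: an OLS trend fitted to log counts where, under a flat prior and normal errors, P(growth > 0) is approximated by the normal CDF of the slope's t-ratio. This is an assumption-laden simplification, not the paper's method.

```python
import math

def prob_positive_growth(counts):
    """Crude stand-in for a Bayesian dynamic linear model: fit an OLS
    trend to log counts and approximate P(growth rate > 0) as
    Phi(slope / se(slope)) under a flat prior."""
    y = [math.log(c) for c in counts]
    n = len(y)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    a = ybar - b * tbar
    sse = sum((yi - (a + b * ti)) ** 2 for ti, yi in zip(t, y))
    se = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
    z = b / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

A steadily growing count series gives a probability near 1, a declining series a probability near 0, mirroring how the paper summarizes each population segment.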
NASA Astrophysics Data System (ADS)
Reyes López, Misael; Arámbula Cosío, Fernando
2017-11-01
The cerebellum is an important structure to determine the gestational age of the fetus, moreover most of the abnormalities it presents are related to growth disorders. In this work, we present the results of the segmentation of the fetal cerebellum applying statistical shape and appearance models. Both models were tested on ultrasound images of the fetal brain taken from 23 pregnant women, between 18 and 24 gestational weeks. The accuracy results obtained on 11 ultrasound images show a mean Hausdorff distance of 6.08 mm between the manual segmentation and the segmentation using active shape model, and a mean Hausdorff distance of 7.54 mm between the manual segmentation and the segmentation using active appearance model. The reported results demonstrate that the active shape model is more robust in the segmentation of the fetal cerebellum in ultrasound images.
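The accuracy metric reported above, the Hausdorff distance between a manual and an automatic contour, can be computed directly. A minimal sketch for contours given as point arrays (illustrative, not the authors' code):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two contours given as
    (n, 2) arrays of points, e.g. a manual vs. a model segmentation."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The metric is the worst-case distance from any point of one contour to the nearest point of the other, so a single outlying boundary point dominates the score.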
Crash energy absorption of two-segment crash box with holes under frontal load
NASA Astrophysics Data System (ADS)
Choiron, Moch. Agus; Sudjito, Hidayati, Nafisah Arina
2016-03-01
The crash box is one of the passive safety components designed to absorb impact energy during a collision. Crash box designs have been developed in order to obtain optimum crashworthiness performance. A circular cross section was first investigated with a one-segment design; its performance is strongly influenced by its length, which makes it sensitive to buckling. In this study, a two-segment crash box design with additional holes is investigated, and its deformation behavior and crash energy absorption are observed. The crash box is modelled by finite element analysis. The crash test components were the impactor, the crash box, and a fixed rigid base. The impactor and the fixed base are modelled as rigid, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h are selected, with aluminum alloy as the crash box material. The simulation results show that the configuration with 2 holes located at ¾ of the length has the largest crash energy absorption. This condition is associated with the deformation pattern: this crash box model produces an axisymmetric mode, unlike the other models.
NASA Astrophysics Data System (ADS)
Somasundaram, Elanchezhian; Kaufman, Robert; Brady, Samuel
2017-03-01
The development of a random forests machine learning technique is presented for fully-automated neck, chest, abdomen, and pelvis tissue segmentation of CT images using Trainable WEKA (Waikato Environment for Knowledge Analysis) Segmentation (TWS) plugin of FIJI (ImageJ, NIH). The use of a single classifier model to segment six tissue classes (lung, fat, muscle, solid organ, blood/contrast agent, bone) in the CT images is studied. An automated unbiased scheme to sample pixels from the training images and generate a balanced training dataset over the seven classes is also developed. Two independent training datasets are generated from a pool of 4 adult (>55 kg) and 3 pediatric patients (<=55 kg) with 7 manually contoured slices for each patient. Classifier training investigated 28 image filters comprising a total of 272 features. Highly correlated and insignificant features are eliminated using Correlated Feature Subset (CFS) selection with Best First Search (BFS) algorithms in WEKA. The 2 training models (from the 2 training datasets) had 74 and 71 input training features, respectively. The study also investigated the effect of varying the number of trees (25, 50, 100, and 200) in the random forest algorithm. The performance of the 2 classifier models are evaluated on inter-patient intra-slice, intrapatient inter-slice and inter-patient inter-slice test datasets. The Dice similarity coefficients (DSC) and confusion matrices are used to understand the performance of the classifiers across the tissue segments. The effect of number of features in the training input on the performance of the classifiers for tissue classes with less than optimal DSC values is also studied. The average DSC values for the two training models on the inter-patient intra-slice test data are: 0.98, 0.89, 0.87, 0.79, 0.68, and 0.84, for lung, fat, muscle, solid organ, blood/contrast agent, and bone, respectively. 
The study demonstrated robust segmentation accuracy for the lung, muscle, and fat tissue classes. For the solid organ, blood/contrast agent, and bone classes, the performance of the segmentation pipeline improved significantly by using the advanced capabilities of WEKA. However, further improvements are needed to reduce noise in the segmentation.
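The Dice similarity coefficient used above to score each tissue class is straightforward to compute from a predicted and a reference label map. A minimal sketch (not the WEKA pipeline itself):

```python
import numpy as np

def dice(pred, truth, label):
    """Dice similarity coefficient for one tissue label between a
    predicted and a reference label map (arrays of class ids)."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # label absent from both maps: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom
```

DSC is twice the overlap divided by the total size of the two masks, so it ranges from 0 (no overlap) to 1 (identical masks), matching the per-class values quoted in the abstract.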
Thermal Analysis of Compressible CO2 Flow for Major Equipment of Fire Detection System
NASA Technical Reports Server (NTRS)
Zhang, Michael Y.; Lee, Wen-Ching; Keener, John F.; Smith, Frederick D.
2001-01-01
A thermal analysis of the compressible CO2 flow for the Portable Fire Extinguisher (PFE) system has been performed. The purpose of this analysis is to determine the discharged CO2 mass from the PFE tank through the Temporary Sleep Station (TeSS) nozzle, reflecting the latest design of the extended nozzle, and to evaluate the thermal issues associated with the latest nozzle configuration. A SINDA/FLUINT model has been developed for this analysis. The model includes the PFE tank and the TeSS nozzle, both with an initial temperature of 72 °F. In order to investigate the thermal effect on the nozzle due to discharging CO2, the PFE TeSS nozzle pipe has been divided into three segments. The model also includes heat transfer predictions for the PFE tank inner and outer wall surfaces. The simulation results show that the CO2 discharge rates fulfill the minimum flow requirements that the PFE system discharge 3.0 lbm of CO2 in 10 seconds and 5.5 lbm of CO2 in 45 seconds during its operation. At 45 seconds, the PFE tank wall temperature is 63 °F, and the TeSS nozzle cover wall temperatures for the three segments are 47 °F, 53 °F, and 37 °F, respectively. Thermal insulation for personal protection is used for the first two segments of the TeSS nozzle. The simulation results also indicate that at 50 seconds, the remaining CO2 in the tank may be near the triple point (gas, liquid, and solid) state and, therefore, restricts the flow.
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic moduli of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using the two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Reasonable agreement between the theoretical response curve and the experimental response envelope for the average Indian male likewise affirms the technique used for constructing the vibratory model of a standing person. The present work thus develops an effective technique for constructing a subject-specific damped vibratory model from physical measurements.
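The platform-to-head transmissibility ratio that drives the parameter fitting has a textbook closed form for a single-DOF base-excited system. The one-mass sketch below is a stand-in for the 13-DOF model (illustrative only), useful for checking limiting behavior:

```python
import math

def transmissibility(f, fn, zeta):
    """Base-to-mass transmissibility of a single-DOF base-excited
    system with natural frequency fn and damping ratio zeta."""
    r = f / fn  # frequency ratio
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)
```

The standard checks: TR is 1 at zero frequency, peaks near resonance for light damping, and equals 1 again at r = √2 regardless of damping, the crossover between amplification and isolation.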
Spaide, Richard F; Curcio, Christine A
2011-09-01
To evaluate the validity of commonly used anatomical designations for the four hyperreflective outer retinal bands seen in current-generation optical coherence tomography, a scale model of outer retinal morphology was created using published information for direct comparison with optical coherence tomography scans. Articles and books concerning histology of the outer retina from 1900 until 2009 were evaluated, and data were used to create a scale model drawing. Boundaries between outer retinal tissue compartments described by the model were compared with intensity variations of representative spectral-domain optical coherence tomography scans using longitudinal reflectance profiles to determine the region of origin of the hyperreflective outer retinal bands. This analysis showed a high likelihood that the spectral-domain optical coherence tomography bands attributed to the external limiting membrane (the first, innermost band) and to the retinal pigment epithelium (the fourth, outermost band) are correctly attributed. Comparative analysis showed that the second band, often attributed to the boundary between inner and outer segments of the photoreceptors, actually aligns with the ellipsoid portion of the inner segments. The third band corresponded to an ensheathment of the cone outer segments by apical processes of the retinal pigment epithelium in a structure known as the contact cylinder. Anatomical attributions and subsequent pathophysiologic assessments pertaining to the second and third outer retinal hyperreflective bands may not be correct. This analysis has identified testable hypotheses for the actual correlates of the second and third bands. Nonretinal pigment epithelium contributions to the fourth band (e.g., Bruch membrane) remain to be determined.
Dong, Liang; Xu, Zhengwei; Chen, Xiujin; Wang, Dongqi; Li, Dichen; Liu, Tuanjing; Hao, Dingjun
2017-10-01
Many meta-analyses have been performed to study the efficacy of cervical disc arthroplasty (CDA) compared with anterior cervical discectomy and fusion (ACDF); however, few of them report data on adjacent segments, and those that do have not reached consistent conclusions. With the increased concern surrounding adjacent segment degeneration (ASDeg) and adjacent segment disease (ASDis) after anterior cervical surgery, it is necessary to perform a comprehensive meta-analysis of adjacent segment parameters. The aim was to perform a comprehensive meta-analysis of randomized controlled trials (RCTs) to elaborate adjacent segment motion, degeneration, disease, and reoperation after CDA compared with ACDF. PubMed, Embase, and the Cochrane Library were searched for RCTs comparing CDA and ACDF before May 2016. The analysis parameters included follow-up time, operative segments, adjacent segment motion, ASDeg, ASDis, and adjacent segment reoperation. The risk-of-bias scale was used to assess the papers, and subgroup and sensitivity analyses were used to examine the sources of high heterogeneity. Twenty-nine RCTs fulfilled the inclusion criteria. Compared with ACDF, the rate of adjacent segment reoperation in the CDA group was significantly lower (p<.01), and the advantage of the CDA group in reducing adjacent segment reoperation increased with follow-up time in the subgroup analysis. There was no statistically significant difference in ASDeg between CDA and ACDF within the 24-month follow-up period; however, the rate of ASDeg with CDA was significantly lower than that with ACDF at longer follow-up times (p<.01). There was no statistically significant difference in ASDis between CDA and ACDF (p>.05). Cervical disc arthroplasty produced a lower adjacent segment range of motion (ROM) than did ACDF, but the difference was not statistically significant. 
Compared with ACDF, the advantages of CDA were lower ASDeg and adjacent segment reoperation. However, there was no statistically significant difference in ASDis and adjacent segment ROM.
DeepInfer: open-source deep learning deployment toolkit for image-guided therapy
NASA Astrophysics Data System (ADS)
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-03-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
Estimates of Median Flows for Streams on the 1999 Kansas Surface Water Register
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
The Kansas State Legislature, by enacting Kansas Statute KSA 82a-2001 et seq., mandated the criteria for determining which Kansas stream segments would be subject to classification by the State. One criterion for selection as a classified stream segment is that the median flow be equal to or greater than 1 cubic foot per second. As specified by KSA 82a-2001 et seq., median flows were determined from U.S. Geological Survey streamflow-gaging-station data by using the most recent 10 years of gaged data (KSA) for each streamflow-gaging station. Median flows also were determined by using gaged data from the entire period of record (all-available hydrology, AAH). Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating median flows for uncontrolled stream segments. The drainage areas of the gaging stations on uncontrolled stream segments used in the regression analyses ranged from 2.06 to 12,004 square miles. A logarithmic transformation of the data was needed to develop the best linear relation for computing median flows. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. Tobit analyses of KSA data yielded a model standard error of prediction of 0.285 logarithmic units, and the best equations using Tobit analyses of AAH data had a model standard error of prediction of 0.250 logarithmic units. These regression equations and an interpolation procedure were used to compute median flows for the uncontrolled stream segments on the 1999 Kansas Surface Water Register. Measured median flows from gaging stations were incorporated into the regression-estimated median flows along the stream segments where available. 
Median flows for uncontrolled segments were interpolated using gaged data weighted according to drainage area and the bias between the regression-estimated and gaged flow information. On controlled segments of Kansas streams, median flow information was interpolated between gaging stations using only gaged data weighted by drainage area. Of the 2,232 total stream segments on the Kansas Surface Water Register, 34.5 percent had an estimated median streamflow of less than 1 cubic foot per second when the KSA analysis was used. When the AAH analysis was used, 36.2 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second. This report supersedes U.S. Geological Survey Water-Resources Investigations Report 02-4292.
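The structure of such regression equations, ordinary least squares in log10 space on basin characteristics with a back-transform to flow units, can be sketched as follows. This is illustrative only: the report itself used Tobit regression, which additionally handles median flows censored at zero.

```python
import numpy as np

def fit_log_regression(X, y):
    """OLS fit of log10(median flow) on log10-transformed basin
    characteristics (columns of X, e.g. drainage area, mean annual
    precipitation); returns [intercept, coefficients...]."""
    Xl = np.column_stack([np.ones(len(y))] + [np.log10(c) for c in X.T])
    beta, *_ = np.linalg.lstsq(Xl, np.log10(y), rcond=None)
    return beta

def predict_median_flow(beta, x):
    """Back-transform the log10 prediction to cubic feet per second."""
    return 10 ** (beta[0] + sum(b * np.log10(v) for b, v in zip(beta[1:], x)))
```

On synthetic data generated from a known power law, the fit recovers the exponents exactly, which is the sense in which the log transformation linearizes the relation.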
A unified EM approach to bladder wall segmentation with coupled level-set constraints
Han, Hao; Li, Lihong; Duan, Chaijie; Zhang, Hao; Zhao, Yang; Liang, Zhengrong
2013-01-01
Magnetic resonance (MR) imaging-based virtual cystoscopy (VCys), as a non-invasive, safe and cost-effective technique, has shown its promising virtue for early diagnosis and recurrence management of bladder carcinoma. One primary goal of VCys is to identify bladder lesions with abnormal bladder wall thickness, and consequently a precise segmentation of the inner and outer borders of the wall is required. In this paper, we propose a unified expectation-maximization (EM) approach to the maximum-a-posteriori (MAP) solution of bladder wall segmentation, by integrating a novel adaptive Markov random field (AMRF) model and the coupled level-set (CLS) information into the prior term. The proposed approach is applied to the segmentation of T1-weighted MR images, where the wall is enhanced while the urine and surrounding soft tissues are suppressed. By introducing scale-adaptive neighborhoods as well as adaptive weights into the conventional MRF model, the AMRF model takes into account the local information more accurately. In order to mitigate the influence of image artifacts adjacent to the bladder wall and to preserve the continuity of the wall surface, we apply geometrical constraints on the wall using our previously developed CLS method. This paper not only evaluates the robustness of the presented approach against the known ground truth of simulated digital phantoms, but further compares its performance with our previous CLS approach via both volunteer and patient studies. Statistical analysis on experts’ scores of the segmented borders from both approaches demonstrates that our new scheme is more effective in extracting the bladder wall. Based on the wall thickness calibrated from the segmented single-layer borders, a three-dimensional virtual bladder model can be constructed and the wall thickness can be mapped on to the model, where the bladder lesions will be eventually detected via experts’ visualization and/or computer-aided detection. PMID:24001932
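Stripped of the AMRF and coupled-level-set priors described above, the EM core of such a MAP segmentation reduces to a plain Gaussian-mixture fit on pixel intensities. A two-class sketch (illustrative, not the authors' algorithm):

```python
import numpy as np

def em_two_class(x, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture fitted to
    intensities x; returns means, stds, mixing weights, and the
    per-pixel class responsibilities."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])        # spread the initial means apart
    sd = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each class for every pixel
        lik = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd * pi
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sd, pi, r
```

The MRF and level-set terms in the paper modify the E-step posterior with spatial priors; this sketch keeps only the intensity likelihood, so neighboring pixels are classified independently.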
Simulation research on the process of large scale ship plane segmentation intelligent workshop
NASA Astrophysics Data System (ADS)
Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei
2017-04-01
Large scale ship plane segmentation intelligent workshops are a new development, with no prior research in related fields domestically or internationally. The mode of production must be transformed from the existing Industry 2.0 (or partial Industry 3.0) state: from "human brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great many tasks remain to be settled on both the management and technology sides, such as workshop structure evolution, development of intelligent equipment, and changes in the business model; together they amount to a reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large scale ship plane segmentation intelligent workshop, and analyzes the workshop's working efficiency, which is significant for the next step of the transformation.
Ares I-X Flight Test Vehicle: Stack 1 Modal Test
NASA Technical Reports Server (NTRS)
Buehrle, Ralph D.; Templeton, Justin D.; Reaves, Mercedes C.; Horta, Lucas G.; Gaspar, James L.; Bartolotta, Paul A.; Parks, Russel A.; Lazor, Daniel R.
2010-01-01
Ares I-X was the first flight test vehicle used in the development of NASA's Ares I crew launch vehicle. The Ares I-X used a 4-segment reusable solid rocket booster from the Space Shuttle heritage with mass simulators for the 5th segment, upper stage, crew module and launch abort system. Three modal tests were defined to verify the dynamic finite element model of the Ares I-X flight test vehicle. Test configurations included two partial stacks and the full Ares I-X flight test vehicle on the Mobile Launcher Platform. This report focuses on the second modal test that was performed on the middle section of the vehicle referred to as Stack 1, which consisted of the subassembly from the 5th segment simulator through the interstage. This report describes the test requirements, constraints, pre-test analysis, test operations and data analysis for the Ares I-X Stack 1 modal test.
Takayasu, Hideki; Takayasu, Misako
2017-01-01
We extend the concept of statistical symmetry as the invariance of a probability distribution under transformation to analyze binary sign time series data of price differences from the foreign exchange market. We model segments of the sign time series as Markov sequences and apply a local hypothesis test to evaluate the symmetries of independence and time reversion in different periods of the market. For the test, we derive the probability of a binary Markov process generating a given set of symbol-pair counts. Using such analysis, we could not only segment the time series according to the different behaviors but also characterize the segments in terms of statistical symmetries. As a particular result, we find that the foreign exchange market is essentially time reversible but this symmetry is broken when there is a strong external influence. PMID:28542208
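The pair-counting idea behind the time-reversibility test can be sketched in a few lines. The function names and the simple normalized-mismatch statistic below are illustrative stand-ins for the paper's exact hypothesis test: under time reversibility, the counts of (0,1) and (1,0) symbol pairs should agree up to sampling noise.

```python
from collections import Counter

def pair_counts(signs):
    """Count adjacent symbol pairs in a binary (0/1) sign series."""
    return Counter(zip(signs, signs[1:]))

def reversibility_gap(signs):
    """Normalized mismatch between (0,1) and (1,0) pair counts.
    Zero for a perfectly time-reversible sample; larger values
    indicate broken time-reversal symmetry (toy statistic)."""
    c = pair_counts(signs)
    n01, n10 = c[(0, 1)], c[(1, 0)]
    total = n01 + n10
    return 0.0 if total == 0 else abs(n01 - n10) / total
```

A local test as in the paper would apply such a statistic within each candidate segment rather than to the whole series.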
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yanrong; Shao, Yeqin; Gao, Yaozong
Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the discriminative power of traditional dictionary-based classification methods, the authors' DDD learning approach adopts three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate.
These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.
Advances in segmentation modeling for health communication and social marketing campaigns.
Albrecht, T L; Bryant, C
1996-01-01
Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.
A model to identify high crash road segments with the dynamic segmentation method.
Boroujerdian, Amin Mirza; Saffarzadeh, Mahmoud; Yousefi, Hassan; Ghassemian, Hassan
2014-12-01
Currently, high social and economic costs, in addition to physical and mental consequences, put road safety among the most important issues. This paper aims at presenting a novel approach, capable of identifying the location as well as the length of high crash road segments. It focuses on the locations of accidents occurring along the road and their effective regions. In other words, due to applicability and budget limitations, it is not possible to improve the safety of all road segments. Therefore, it is of utmost importance to identify high crash road segments and their real lengths to be able to prioritize safety improvements on roads. In this paper, after evaluating deficiencies of the current road segmentation models, the different kinds of errors caused by these methods are addressed. One of the main deficiencies of these models is that they cannot identify the length of high crash road segments. In this paper, identifying the length of high crash road segments (corresponding to the arrangement of accidents along the road) is achieved by converting accident data to the road response signal of through traffic with a dynamic model based on wavelet theory. The significant advantage of the presented method is multi-scale segmentation. In other words, this model identifies high crash road segments with different lengths and can also recognize small segments within long segments. Applying the presented model to a real case, identifying the top 10-20 percent of high crash road segments, showed an improvement of 25-38 percent relative to existing methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
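As a toy illustration of locating both the position and the length of high crash segments, the sketch below bins accident positions along a road and extracts contiguous runs above a threshold. This is a crude stand-in for the paper's wavelet-based road response signal; all names, bin sizes, and thresholds are hypothetical.

```python
def bin_accidents(positions_km, road_len_km, bin_km=1.0):
    """Convert accident kilometre-posts into per-bin counts along the road."""
    nbins = int(road_len_km / bin_km)
    counts = [0] * nbins
    for p in positions_km:
        counts[min(int(p / bin_km), nbins - 1)] += 1
    return counts

def high_crash_segments(counts, threshold):
    """Return (start_bin, length) for each contiguous run of bins whose
    count exceeds the threshold -- i.e., both location and length."""
    segs, start = [], None
    for i, c in enumerate(counts + [float("-inf")]):  # sentinel closes a trailing run
        if c > threshold and start is None:
            start = i
        elif c <= threshold and start is not None:
            segs.append((start, i - start))
            start = None
    return segs
```

A multi-scale version in the spirit of the paper would repeat the extraction after smoothing the count signal at several wavelet scales, so short segments nested inside long ones can both be reported.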
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-Ping; Chughtai, Aamer
2014-08-15
Purpose: The authors are developing a computer-aided detection system to assist radiologists in the analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors' coronary artery segmentation and tracking method, which constitutes the essential steps in defining the search space for the detection of atherosclerotic plaques. Methods: The heart region in cCTA is segmented and the vascular structures are enhanced using the authors' multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segments and tracks each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors' patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as the reference standard, following the 17-segment model that includes clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. Results: The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment.
When the overlap threshold was increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by the radiologists in 23 of the cases. Conclusions: The authors' MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve the accuracy for arterial segments affected by motion artifacts, severely calcified plaques, and noncalcified soft plaques, and to reduce the false tracking of veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.
Inter-segment foot motion in girls using a three-dimensional multi-segment foot model.
Jang, Woo Young; Lee, Dong Yeon; Jung, Hae Woon; Lee, Doo Jae; Yoo, Won Joon; Choi, In Ho
2018-05-06
Several multi-segment foot models (MFMs) have been introduced for in vivo analyses of dynamic foot kinematics. However, the normal gait patterns of healthy children and adolescents remain uncharacterized. We sought to determine normal foot kinematics according to age in clinically normal female children and adolescents using a Foot 3D model. Fifty-eight girls (age 7-17 years) with normal function and without radiographic abnormalities were tested. Three representative strides from five separate trials were analyzed. Kinematic data of foot segment motion were tracked and evaluated using an MFM with a 15-marker set (Foot 3D model). As controls, 50 symptom-free female adults (20-35 years old) were analyzed. In the hindfoot kinematic analysis, plantar flexion motion in the pre-swing phase was significantly greater in girls aged 11 years or older than in girls aged <11 years, thereby resulting in a larger sagittal range of motion. Coronal plane hindfoot motion exhibited pronation, whereas transverse plane hindfoot motion exhibited increased internal rotation in girls aged <11 years. Hallux valgus angles increased significantly in girls aged 11 years or older. The foot progression angle showed mildly increased internal rotation in the loading response phase and the swing phase in girls aged <11 years. The patterns of inter-segment foot motion in girls aged <11 years showed low-arch kinematic characteristics, whereas those in girls aged 11 years or older were more similar to the patterns in young adult women. Copyright © 2018 Elsevier B.V. All rights reserved.
A comprehensive segmentation analysis of crude oil market based on time irreversibility
NASA Astrophysics Data System (ADS)
Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi
2016-05-01
In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments that share the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and essentially do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments lies fairly close to the times of crises, wars, and other events, because the impact of severe events on oil prices makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily or weekly group series owing to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps to reveal the segments that were not badly affected by the events and could recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments that are neither common nor highly asymmetric.
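The Jensen-Shannon divergence used here as the between-segment distance is simple to compute for discrete distributions. A minimal sketch (base-2 logarithms, so the value lies in [0, 1]; the distributions would in practice be symbol histograms estimated from each segment):

```python
import math

def _kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; 0*log(0) terms skipped."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.
    Symmetric and bounded in [0, 1] with base-2 logs."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)
```

An entropic segmentation scheme places boundaries where the divergence between the distributions on either side of a candidate cut is maximal and statistically significant.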
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiming; Abdelaziz, Omar; Qu, Ming
This paper introduces a first-order physics-based model that accounts for the fundamental heat and mass transfer between a humid-air vapor stream on the feed side and another flow stream on the permeate side. The model comprises a few optional submodels for membrane mass transport, and it adopts a segment-by-segment method for discretizing the heat and mass transfer governing equations for the flow streams on the feed and permeate sides. The model is able to simulate both dehumidifiers and energy recovery ventilators in parallel-flow, cross-flow, and counter-flow configurations. The predicted results compare reasonably well with the measurements. The open-source codes are written in C++. The model and open-source codes are expected to become a fundamental tool for the analysis of membrane-based dehumidification in the future.
Lee, Chang-Hyun; Kim, Young Eun; Lee, Hak Joong; Kim, Dong Gyu; Kim, Chi Heon
2017-12-01
OBJECTIVE Pedicle screw-rod-based hybrid stabilization (PH) and interspinous device-based hybrid stabilization (IH) have been proposed to prevent adjacent-segment degeneration (ASD), and their effectiveness has been reported. However, a comparative study based on sound biomechanical proof has not yet been reported. The aim of this study was to compare the biomechanical effects of IH and PH on the transition and adjacent segments. METHODS A validated finite element model of the normal lumbosacral spine was used. Based on the normal model, a rigid fusion model was immobilized at the L4-5 level by a rigid fixator. The DIAM or NFlex model was added on the L3-4 segment of the fusion model to construct the IH and PH models, respectively. The developed models simulated 4 different loading directions using the hybrid loading protocol. RESULTS Compared with the intact case, fusion at L4-5 produced 18.8%, 9.3%, 11.7%, and 13.7% increments in motion at L3-4 under flexion, extension, lateral bending, and axial rotation, respectively. Additional instrumentation at L3-4 (transition segment) in the hybrid models reduced motion changes at this level. The IH model showed 8.4%, -33.9%, 6.9%, and 2.0% changes in motion at the segment, whereas the PH model showed -30.4%, -26.7%, -23.0%, and 12.9%. At L2-3 (adjacent segment), the PH model showed 14.3%, 3.4%, 15.0%, and 0.8% motion increments compared with the IH model. Both hybrid models showed decreased intradiscal pressure (IDP) at the transition segment compared with the fusion model, but the pressure at L2-3 (adjacent segment) increased in all loading directions except under extension. CONCLUSIONS Both the IH and PH models limited excessive motion and IDP at the transition segment compared with the fusion model. At the segment adjacent to the transition level, PH induced higher stress than the IH model. Such differences may eventually influence the likelihood of ASD.
Short segment search method for phylogenetic analysis using nested sliding windows
NASA Astrophysics Data System (ADS)
Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.
2017-10-01
To analyze phylogenetics in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs considerable time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is necessary. After applying sliding windows, a short segment better than the envelope protein and NS3 segments is found. This paper discusses a mathematical method for analyzing sequences using nested sliding windows to find a short segment that is representative of the whole genome. The result shows that our method can find a short segment that is about 6.57% more representative of the CDS segment, in terms of tree topology, than the envelope or NS3 segment.
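A minimal sketch of the nested (coarse-then-fine) sliding-window idea, using Hamming distances between aligned sequences and a crude rank-agreement score as a stand-in for the paper's topological comparison. All function names and the scoring scheme are illustrative assumptions, not the authors' exact method:

```python
def window_dists(seqs, start, width):
    """Pairwise Hamming distances restricted to one alignment window."""
    n = len(seqs)
    return [sum(x != y for x, y in zip(seqs[i][start:start + width],
                                       seqs[j][start:start + width]))
            for i in range(n) for j in range(i + 1, n)]

def agreement(d1, d2):
    """Fraction of distance pairs ordered the same way by both distance
    sets -- a crude proxy for topological similarity of inferred trees."""
    idx = range(len(d1))
    pairs = [(a, b) for a in idx for b in idx if a < b]
    if not pairs:
        return 1.0
    return sum((d1[a] - d1[b]) * (d2[a] - d2[b]) >= 0 for a, b in pairs) / len(pairs)

def best_segment(seqs, width, coarse_step):
    """Nested scan: a coarse sliding window locates a promising region,
    then a fine (step-1) window refines the start position within it."""
    full = window_dists(seqs, 0, len(seqs[0]))
    score = lambda s: agreement(window_dists(seqs, s, width), full)
    last = len(seqs[0]) - width
    coarse = max(range(0, last + 1, coarse_step), key=score)
    lo, hi = max(0, coarse - coarse_step), min(last, coarse + coarse_step)
    return max(range(lo, hi + 1), key=score)
```

The nesting matters for cost: the coarse pass touches only every `coarse_step`-th start position, and the expensive fine pass runs only inside the best coarse neighborhood.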
Automatic pelvis segmentation from x-ray images of a mouse model
NASA Astrophysics Data System (ADS)
Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham
2017-05-01
The automatic detection and quantification of skeletal structures has a variety of applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, initial pelvis mask preparation, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.
Interactive Tooth Separation from Dental Model Using Segmentation Field
2016-01-01
Tooth segmentation on a dental model is an essential step in computer-aided-design systems for orthodontic virtual treatment planning. However, quickly and accurately identifying the cutting boundary to separate teeth from the dental model still remains a challenge, due to the various geometrical shapes of teeth, complex tooth arrangements, differing dental model qualities, and varying degrees of crowding. Most segmentation approaches presented before are not able to achieve a balance between fine segmentation results and simple operating procedures with low time consumption. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, which is solved via a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries can be detected in concave regions with large variations of field data. The sensitivity of the segmentation field to concave seams facilitates effective tooth partition and avoids the need to choose an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to low-quality dental models and effective on dental models with different levels of crowding. Experiments, including segmentation tests on dental models of varying complexity, tests on dental meshes with different modeling resolutions and surface noise, and a comparison between our method and the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
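The core of the approach, a scalar field obtained from a discrete Laplacian system with Dirichlet boundary conditions, can be illustrated in 1D. This toy solves the harmonic field on a path graph by Gauss-Seidel iteration; on a dental mesh the path Laplacian would be replaced by the cotangent Laplace-Beltrami operator, with user-marked vertices supplying the boundary values:

```python
def segmentation_field(n, iters=2000):
    """Harmonic scalar field on a path graph of n nodes with Dirichlet
    values 0 and 1 pinned at the two ends, solved by Gauss-Seidel.
    Each interior value relaxes to the mean of its neighbors."""
    f = [0.0] * n
    f[-1] = 1.0  # Dirichlet boundary: f[0] = 0, f[n-1] = 1
    for _ in range(iters):
        for i in range(1, n - 1):
            f[i] = 0.5 * (f[i - 1] + f[i + 1])
    return f
```

Contour lines (level sets) of such a field vary smoothly between the boundary constraints; on a mesh, the levels bunching up in concave seams are what the method exploits to find cutting boundaries.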
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong
2011-01-01
Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem, in which target objects of arbitrary shape mutually interact with terrain-like surfaces, which widely exists in the medical imaging field. The approach incorporates context information used during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in a low-order polynomial time. The performance of the method was evaluated in robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 +/- 0.10) was improved to 0.84 +/- 0.05 when employing our new method for pulmonary tumor segmentation.
A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software
NASA Technical Reports Server (NTRS)
Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.
2016-01-01
This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segmented medical image data, created a 3D model from the segmented data, and prepared the model for finite-element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. However, the survey concluded that no single package exhibited the wide array of functionality required for the potential NASA application in the area of bone, muscle, and biofluidic studies. As a result, development of a series of Python scripts provided the bridging mechanism to address the shortcomings of the available open-source tools. The implementation of the VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed for the export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source software packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for volume-filling the model with an unstructured mesh and preparing boundary specifications for finite element analysis. To fully enable FEM modeling, an in-house developed Python script allowed assignment of material properties on an element-by-element basis by performing a weighted interpolation of the voxel intensity of the parent medical image, correlated to published mappings of image intensity to material properties such as ash density. A graphical user interface combined the Python scripts and other software into a user-friendly interface. The work using Python scripts provides a potential alternative to expensive commercial software and inadequate, limited open-source freeware programs for the creation of 3D computational models.
More work will be needed to validate this approach in creating finite-element models.
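The element-wise material assignment described above amounts to a piecewise-linear map from voxel intensity to a material property such as ash density. A sketch of that interpolation; the calibration points are hypothetical placeholders, not published values:

```python
def intensity_to_density(intensity, calib):
    """Piecewise-linear interpolation from image intensity to a material
    property (e.g., ash density), given calibration points
    [(intensity, property), ...]; values outside the range are clamped."""
    pts = sorted(calib)
    if intensity <= pts[0][0]:
        return pts[0][1]
    if intensity >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= intensity <= x1:
            return y0 + (y1 - y0) * (intensity - x0) / (x1 - x0)
```

In the workflow above this would run once per element, using the mean intensity of the voxels that the element occupies.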
NASA Astrophysics Data System (ADS)
Wang, Yaoping; Chui, Cheekong K.; Cai, Yiyu; Mak, KoonHou
1998-06-01
This study presents an approach to building a 3D vascular model of the coronary arteries for the development of a virtual cardiology simulator. The 3D model of the coronary arterial tree is reconstructed from geometric information segmented from the Visible Human data set for physical analysis of catheterization. The segmentation process is guided by a 3D topologic hierarchy of the coronary vessels, obtained from a mechanical model by Coordinate Measuring Machine (CMM) probing. This professional mechanical model includes all major coronary arteries, ranging from the right coronary artery to the atrioventricular branch and from the left main trunk to the left anterior descending branch. All these branches are considered the main operating sites for cardiac catheterization. Along with the primary arterial vasculature and the accompanying secondary and tertiary networks obtained from previous work, a more complete vascular structure can then be built for the simulation of catheterization. A novel method has been developed for real-time finite element analysis of catheter navigation based on this featured vasculature.
A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI
NASA Astrophysics Data System (ADS)
Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina
2015-03-01
Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
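The Dice similarity coefficient used as the headline metric above compares a computed mask with the ground-truth mask; a minimal sketch over flattened binary masks (the same formula underlies the Dice Ratios reported in several of the other records here):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two
    binary masks given as equal-length flat sequences of 0/1 values."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # two empty masks agree
```

For 3D volumes the masks are simply flattened before the comparison; the coefficient is 1.0 for perfect overlap and 0.0 for disjoint masks.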
Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf
2018-01-01
Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.
Tang, Sai Chun; McDannold, Nathan J.
2015-01-01
This paper investigated the power losses of unsegmented and segmented energy coupling coils for wireless energy transfer. Four 30-cm energy coupling coils with different winding separations, conductor cross-sectional areas, and number of turns were developed. The four coils were tested in both unsegmented and segmented configurations. The winding conduction and intrawinding dielectric losses of the coils were evaluated individually based on a well-established lumped circuit model. We found that the intrawinding dielectric loss can be as much as seven times higher than the winding conduction loss at 6.78 MHz when the unsegmented coil is tightly wound. The dielectric loss of an unsegmented coil can be reduced by increasing the winding separation or reducing the number of turns, but the power transfer capability is reduced because of the reduced magnetomotive force. Coil segmentation using resonant capacitors has recently been proposed to significantly reduce the operating voltage of a coil to a safe level in wireless energy transfer for medical implants. Here, we found that it can naturally eliminate the dielectric loss. The coil segmentation method and the power loss analysis used in this paper could be applied to the transmitting, receiving, and resonant coils in two- and four-coil energy transfer systems. PMID:26640745
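A back-of-envelope sketch of why coil segmentation suppresses dielectric loss, assuming a simple equivalent parallel resistance R_p for the intrawinding dielectric (a gross simplification, not the paper's lumped-circuit model): splitting the winding into n resonant segments puts only V/n across each segment's dielectric, so the total V²-dependent loss falls by a factor of n.

```python
def dielectric_loss(v_total, r_parallel, n_segments=1):
    """Total intrawinding dielectric loss (W) under the hedged assumption
    that each of n segments sees v_total / n across a parallel resistance
    r_parallel modelling its dielectric."""
    v_seg = v_total / n_segments
    return n_segments * v_seg ** 2 / r_parallel

print(dielectric_loss(100.0, 1e4, 1))  # 1.0 W, unsegmented
print(dielectric_loss(100.0, 1e4, 4))  # 0.25 W with 4 segments
```

The values (100 V, 10 kΩ) are arbitrary illustrations; the real effect the paper measures also depends on winding separation, frequency and conductor geometry.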
The standard WASP7 stream transport model calculates water flow through a branching stream network that may include both free-flowing and ponded segments. This supplemental user manual documents the hydraulic algorithms, including the transport and hydrogeometry equations, the m...
Counterconformity: An Attribution Model of Adolescents' Uniqueness-Seeking Behaviors in Dressing
ERIC Educational Resources Information Center
Ling, I-Ling
2008-01-01
This article explores how an attribution model will illustrate uniqueness-seeking behavior in dressing in the Taiwanese adolescent subculture. The study employed 443 senior high school students. Results show that the tendency of uniqueness-seeking behavior in dressing is moderate. However, using cluster analysis to segment the counterconformity…
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs from contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and in segmenting whole ABVs, and may have potential utility in clinical use.
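The 3D region growing used in the hybrid stage can be sketched as a 6-connected flood fill with an intensity window. This is a generic sketch; the window bounds and connectivity here are illustrative choices, not the paper's auto-adapted values:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo, hi):
    """Flood-fill all 6-connected voxels whose intensity lies in [lo, hi],
    starting from a (z, y, x) seed. Returns a boolean mask."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lo <= volume[seed] <= hi):
        return mask
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, volume.shape)) \
               and not mask[n] and lo <= volume[n] <= hi:
                mask[n] = True
                queue.append(n)
    return mask
```

A breadth-first queue keeps memory bounded by the region's surface rather than the recursion depth a naive recursive fill would need.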
Words in Puddles of Sound: Modelling Psycholinguistic Effects in Speech Segmentation
ERIC Educational Resources Information Center
Monaghan, Padraic; Christiansen, Morten H.
2010-01-01
There are numerous models of how speech segmentation may proceed in infants acquiring their first language. We present a framework for considering the relative merits and limitations of these various approaches. We then present a model of speech segmentation that aims to reveal important sources of information for speech segmentation, and to…
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
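The least-Mahalanobis-distance criterion used to pick target points in the conventional ASM search can be sketched as follows. The candidate profiles, model mean and covariance here are toy values; in practice they come from training data:

```python
import numpy as np

def best_candidate(candidates, mean, cov):
    """Return the index of the candidate profile with the least Mahalanobis
    distance to the trained model, plus all squared distances."""
    cov_inv = np.linalg.inv(cov)
    d2 = [float((c - mean) @ cov_inv @ (c - mean)) for c in candidates]
    return int(np.argmin(d2)), d2
```

With an identity covariance this reduces to nearest Euclidean distance; a trained covariance instead discounts directions in which the profile naturally varies.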
Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James
2016-01-01
Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. Seventeen female Corriedale ovine brains were imaged in vivo in a 1.5T (low-resolution) MRI scanner. Thirteen of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues, resulting in Dice Coefficients of 0.0-0.2. We developed a low-resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error, providing an atlas that can be used to guide further research using ovine brains as a model; it is hosted online for public access.
Hamraz, Hamid; Contreras, Marco A; Zhang, Jun
2017-07-28
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
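Point density in pt/m², the quantity this occlusion model is expressed in, can be estimated from the horizontal coordinates of a point cloud. A minimal sketch; the grid cell size is an arbitrary illustrative choice, not the paper's:

```python
import numpy as np

def point_density(xy, cell=10.0):
    """Mean point density (pt/m^2) of an (N, 2) array of x, y coordinates,
    averaged over the occupied cell-by-cell metre grid squares."""
    ij = np.floor(xy / cell).astype(int)          # grid cell of each point
    _, counts = np.unique(ij, axis=0, return_counts=True)
    return counts.mean() / cell ** 2
```

Averaging over occupied cells only avoids diluting the estimate with empty ground outside the scanned swath.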
3D robust Chan-Vese model for industrial computed tomography volume data segmentation
NASA Astrophysics Data System (ADS)
Liu, Linghui; Zeng, Li; Luan, Xiao
2013-11-01
Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
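The data-fidelity part of a Chan-Vese update can be sketched as follows; the curvature/regularization term is omitted for brevity, so this is a toy version of neither the full CV model nor the paper's RCV variant:

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5, lam1=1.0, lam2=1.0):
    """One data-term-only Chan-Vese update of the level set phi."""
    inside, outside = phi > 0, phi <= 0
    c1 = img[inside].mean() if inside.any() else 0.0    # mean inside contour
    c2 = img[outside].mean() if outside.any() else 0.0  # mean outside contour
    delta = 1.0 / (np.pi * (1.0 + phi ** 2))            # smeared Dirac delta
    force = -lam1 * (img - c1) ** 2 + lam2 * (img - c2) ** 2
    return phi + dt * delta * force, c1, c2
```

Pixels closer in intensity to the outside mean than the inside mean get a negative force and are pushed out of the contour, and vice versa; the RCV model replaces the global means with locally scaled ones to tolerate noise.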
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2016-06-01
Laser scanners are widely used for the modelling of existing buildings, particularly in the creation of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time-consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets rather than develop them on a single dataset, whose particularities could bias the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results are illustrated. The analysis of the results provides insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
Segmentation of time series with long-range fractal correlations
Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.
2012-01-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
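The elementary step of this family of segmentation algorithms is a split at the point maximizing the t-statistic between the left and right means. The following sketches that classic step only; the paper's actual contribution, referencing significance against a fractional noise with matched correlations, is not reproduced here:

```python
import numpy as np

def best_split(x, min_seg=2):
    """Return the split index maximizing the t-statistic between the means
    of x[:i] and x[i:] (assumes neither side is exactly constant)."""
    best_i, best_t = None, -np.inf
    for i in range(min_seg, len(x) - min_seg + 1):
        left, right = x[:i], x[i:]
        nl, nr = len(left), len(right)
        # pooled standard deviation of the two candidate segments
        sp = np.sqrt(((nl - 1) * left.var(ddof=1) + (nr - 1) * right.var(ddof=1))
                     / (nl + nr - 2))
        t = abs(left.mean() - right.mean()) / (sp * np.sqrt(1 / nl + 1 / nr))
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t
```

Recursing on each side while the split remains significant yields the full segmentation; the choice of significance reference (i.i.d. versus correlated noise) is exactly what the paper changes.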
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2007-03-01
The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions
NASA Astrophysics Data System (ADS)
Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases; however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains, the intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability together represent difficult challenges for a reliable estimation of the NABT intensity model from MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on a locally adaptive NABT model, a robust method for the estimation of model parameters, and an MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to a whole-brain model and to the widely used Expectation-Maximization Segmentation (EMS) method, the locally adaptive NABT model increases the accuracy of MS lesion segmentation.
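A whole-brain (non-locally-adaptive) Gaussian intensity model of the kind this method improves on is typically fitted by EM. A two-component 1D toy sketch, not the paper's robust estimator:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1D Gaussian mixture by EM (toy intensity model)."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

A locally adaptive variant would fit such a model per region rather than once per brain, which is the design choice motivating the paper.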
Song, Jie; Xiao, Liang; Lian, Zhichao
2017-03-01
This paper presents a novel method for automated morphology delineation and analysis of cell nuclei in histopathology images. Combining the initial segmentation information and a concavity measurement, the proposed method first segments clusters of nuclei into individual pieces, avoiding segmentation errors introduced by the scale-constrained Laplacian-of-Gaussian filtering. After that, a nuclear boundary-to-marker evidence computation is introduced to delineate individual objects after the refined segmentation process. The obtained evidence set is then modeled by periodic B-splines with the minimum description length principle, which achieves a practical compromise between the complexity of the nuclear structure and its coverage of the fluorescence signal, avoiding both underfitting and overfitting. The algorithm is computationally efficient and has been tested on a synthetic database as well as 45 real histopathology images. Experimental results comparing the proposed method with several state-of-the-art methods show its superior recognition performance and indicate its potential for analyzing the intrinsic features of nuclear morphology.
Improvements in analysis techniques for segmented mirror arrays
NASA Astrophysics Data System (ADS)
Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.
2016-08-01
The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows analysis of mechanical disturbance predictions of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
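Separating segment-level from array-level deformation commonly involves removing each segment's best-fit plane (piston and tip/tilt) from its surface samples. A generic least-squares sketch of that step, not SigFit's algorithm:

```python
import numpy as np

def remove_piston_tip_tilt(x, y, z):
    """Subtract the least-squares plane z ≈ a + b*x + c*y (piston + tip/tilt)
    from one segment's surface deformation samples; returns the residual."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return z - A @ coef
```

The residual is the segment-level figure error; the removed plane coefficients are what a rigid-body actuator could correct, which is why the two are reported separately.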
Brain MR image segmentation using NAMS in pseudo-color.
Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong
2017-12-01
Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting brain MR images using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns to preserve the image content and largely reduce data redundancy. Second, the key idea is proposed: convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast of different tissues in brain MR images, which can improve the precision of segmentation as well as direct visual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method not only segments more precisely but also saves storage.
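Gray-to-pseudo-color conversion of the kind described is typically a look-up-table mapping. A sketch with a hypothetical RGB LUT (illustrative only, not the paper's mapping and not NAMS itself):

```python
import numpy as np

def pseudo_color(gray):
    """Map an 8-bit grayscale image through an RGB look-up table chosen so
    that low, mid and high intensity ranges get visibly distinct hues."""
    g = gray.astype(np.uint8)
    lut = np.zeros((256, 3), dtype=np.uint8)
    lut[:, 0] = np.arange(256)                                      # red ramps up
    lut[:, 1] = np.clip(255 - np.abs(np.arange(256) - 128) * 2, 0, 255)  # green peaks mid-range
    lut[:, 2] = 255 - np.arange(256)                                # blue ramps down
    return lut[g]
```

Because the mapping spreads nearby gray levels into different color channels, tissue boundaries that differ only slightly in intensity become easier for both an algorithm and a human viewer to separate.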
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Background: Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMAs) is often the prerequisite for quantitative analysis. However, overlapping cells usually pose significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level set deformable model with the seeds generated in the previous step. We compared the experimental results with the most current literature, and computed the pixel-wise accuracy between human experts' annotations and those generated by the automatic segmentation algorithm. Results: The method was tested on 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on GPU; the parallel implementation is 22 times faster than its sequential C/C++ implementation. Conclusion: The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate the overlapping cells. The GPU proved to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.
Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan
2018-06-01
Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance, and an automatic approach for accurate segmentation of these images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the superpixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted-feature classifiers built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image dataset of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted-feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease.
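Jaccard's Coefficient (JC), one of the two evaluation metrics named above, in a minimal form (toy masks, numpy assumed):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard's Coefficient (intersection over union) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0
```

JC is stricter than the Dice coefficient on partial overlaps (JC = D / (2 - D)), which is why papers often report both.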
Automatic 2D and 3D segmentation of liver from Computerised Tomography
NASA Astrophysics Data System (ADS)
Evans, Alun
As part of the diagnosis of liver disease, a Computerised Tomography (CT) scan is taken of the patient, which the clinician then uses for assistance in determining the presence and extent of the disease. This thesis presents the background, methodology, results and future work of a project that employs automated methods to segment liver tissue. The clinical motivation behind this work is the desire to facilitate the diagnosis of liver disease such as cirrhosis or cancer, assist in volume determination for liver transplantation, and possibly assist in measuring the effect of any treatment given to the liver. Previous attempts at automatic segmentation of liver tissue have relied on 2D, low-level segmentation techniques, such as thresholding and mathematical morphology, to obtain the basic liver structure. The derived boundary can then be smoothed or refined using more advanced methods. The 2D results presented in this thesis improve greatly on this previous work by using a topology adaptive active contour model to accurately segment liver tissue from CT images. The use of conventional snakes for liver segmentation is difficult due to the presence of other organs closely surrounding the liver; the new technique avoids this problem by adding an inflationary force to the basic snake equation and initialising the snake inside the liver. The concepts underlying the 2D technique are extended to 3D, and results of full 3D segmentation of the liver are presented. The 3D technique makes use of an inflationary active surface model which is adaptively reparameterised, according to its size and local curvature, in order that it may more accurately segment the organ. Statistical analysis of the accuracy of the segmentation is presented for 18 healthy liver datasets, and results of the segmentation of unhealthy livers are also shown.
The novel work developed during the course of this project has possibilities for use in other areas of medical imaging research, for example the segmentation of internal liver structures, and the segmentation and classification of unhealthy tissue. The possibilities of this future work are discussed towards the end of the report.
Antony, Bhavna Josephine; Kim, Byung-Jin; Lang, Andrew; Carass, Aaron; Prince, Jerry L; Zack, Donald J
2017-01-01
The use of spectral-domain optical coherence tomography (SD-OCT) is becoming commonplace for the in vivo longitudinal study of murine models of ophthalmic disease. Longitudinal studies, however, generate large quantities of data, the manual analysis of which is very challenging due to the time-consuming nature of generating delineations. Thus, it is of importance that automated algorithms be developed to facilitate accurate and timely analysis of these large datasets. Furthermore, as the models target a variety of diseases, the associated structural changes can also be extremely disparate. For instance, in the light damage (LD) model, which is frequently used to study photoreceptor degeneration, the outer retina appears dramatically different from the normal retina. To address these concerns, we have developed a flexible graph-based algorithm for the automated segmentation of mouse OCT volumes (ASiMOV). This approach incorporates a machine-learning component that can be easily trained for different disease models. To validate ASiMOV, the automated results were compared to manual delineations obtained from three raters on healthy and BALB/cJ mice post LD. It was also used to study a longitudinal LD model, where five control and five LD mice were imaged at four timepoints post LD. The total retinal thickness and the outer retina (comprising the outer nuclear layer, and inner and outer segments of the photoreceptors) were unchanged the day after the LD, but subsequently thinned significantly (p < 0.01). The retinal nerve fiber-ganglion cell complex and the inner plexiform layers, however, remained unchanged for the duration of the study.
Lang, Andrew; Carass, Aaron; Prince, Jerry L.; Zack, Donald J.
2017-01-01
PMID:28817571
Chiang, Michael; Hallman, Sam; Cinquin, Amanda; de Mochel, Nabora Reyes; Paz, Adrian; Kawauchi, Shimako; Calof, Anne L; Cho, Ken W; Fowlkes, Charless C; Cinquin, Olivier
2015-11-25
Analysis of single cells in their native environment is a powerful method to address key questions in developmental systems biology. Confocal microscopy imaging of intact tissues, followed by automatic image segmentation, provides a means to conduct cytometric studies while at the same time preserving crucial information about the spatial organization of the tissue and morphological features of the cells. This technique is rapidly evolving but is still not in widespread use among research groups that do not specialize in technique development, perhaps in part for lack of tools that automate repetitive tasks while allowing experts to make the best use of their time in injecting their domain-specific knowledge. Here we focus on a well-established stem cell model system, the C. elegans gonad, as well as on two other model systems widely used to study cell fate specification and morphogenesis: the pre-implantation mouse embryo and the developing mouse olfactory epithelium. We report a pipeline that integrates machine-learning-based cell detection, fast human-in-the-loop curation of these detections, and running of active contours seeded from detections to segment cells. The procedure can be bootstrapped by a small number of manual detections, and outperforms alternative pieces of software we benchmarked on C. elegans gonad datasets. Using cell segmentations to quantify fluorescence contents, we report previously-uncharacterized cell behaviors in the model systems we used. We further show how cell morphological features can be used to identify cell cycle phase; this provides a basis for future tools that will streamline cell cycle experiments by minimizing the need for exogenous cell cycle phase labels. High-throughput 3D segmentation makes it possible to extract rich information from images that are routinely acquired by biologists, and provides insights - in particular with respect to the cell cycle - that would be difficult to derive otherwise.
Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method
Chen, Meizhu; Li, Jichun; Zhang, Encai
2017-01-01
As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. In this paper, a novel hybrid active contour model is proposed to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is more easily implemented than supervised vessel-extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 (0.7358 sensitivity, 0.9680 specificity) on the DRIVE dataset and an average accuracy of 0.9409 (0.7449 sensitivity, 0.9690 specificity) on the STARE dataset. The experimental results show that our method is effective and is also robust to some kinds of pathology images compared with traditional level set methods. PMID:28840122
Differences in 3D vs. 2D analysis in lumbar spinal fusion simulations.
Hsu, Hung-Wei; Bashkuev, Maxim; Pumberger, Matthias; Schmidt, Hendrik
2018-04-27
Lumbar interbody fusion is currently the gold standard in treating patients with disc degeneration or segmental instability. Despite it having been used for several decades, the non-union rate remains high. A failed fusion is frequently attributed to an inadequate mechanical environment after instrumentation. Finite element (FE) models can provide insights into the mechanics of the fusion process. Previous fusion simulations using FE models showed that the geometry and material of the cage can greatly influence the fusion outcome. However, these studies used axisymmetric models which lacked realistic spinal geometries. Therefore, different modeling approaches were evaluated to understand the bone-formation process. Three FE models of the lumbar motion segment (L4-L5) were developed: 2D, Sym-3D and Nonsym-3D. The fusion process, based on existing mechano-regulation algorithms that use the FE simulations to evaluate the mechanical environment, was then integrated into these models. In addition, the influence of different lordotic angles (5, 10 and 15°) was investigated. The volume of newly formed bone, the axial stiffness of the whole segment, and the bone distribution inside and surrounding the cage were evaluated. In contrast to the Nonsym-3D model, the 2D and Sym-3D models predicted excessive bone formation prior to bridging (peak values 36% and 9% higher than in equilibrium, respectively). The 3D models predicted a more uniform bone distribution compared to the 2D model. The current results demonstrate the crucial role of the realistic 3D geometry of the lumbar motion segment in predicting bone formation after lumbar spinal fusion. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gardella, Joseph A.; Mahoney, Christine M.
2004-06-01
While many XPS and SIMS studies of polymers have detected and quantified segregation of low surface energy blocks or components in copolymers and polymer blends [D. Briggs, in: D.R. Clarke, S. Suresh, I.M. Ward (Eds.), Surface Analysis of Polymers by XPS and Static SIMS, Cambridge University Press, Cambridge, 1998 (Chapter 5).], this paper reports ToF-SIMS studies of direct measurement of the segment length distribution at the surface of siloxane copolymers. These data allow insight into the segregation of particular portions of the oligomeric distribution; specifically, in this study, longer PDMS oligomers segregated at the expense of shorter PDMS chains. We have reported XPS analysis of competitive segregation effects for short PDMS chains [Macromolecules 35 (13) (2002) 5256]. In this study, a series of poly(ureaurethane)-poly(dimethylsiloxane) (PUU-PDMS) copolymers has been synthesized containing varying ratios of G-3 and G-9 (G-X denotes the average segment length of the PDMS added), while maintaining a constant overall siloxane weight percentage (10, 30, and 60%). These copolymers were utilized as model systems to study the preferential segregation of certain siloxane segment lengths to the surface over others. ToF-SIMS analysis of PUU-PDMS copolymers has yielded high-mass-range copolymer fragmentation patterns containing intact PDMS segments. For the first time, this information is utilized to determine PDMS segment length distributions at the copolymer surface as compared to the bulk. The results show that longer siloxane segment lengths preferentially segregate to the surface over shorter chain lengths. These results also show the importance of ToF-SIMS and mass spectrometry in the development of new materials containing low molecular weight amino-propyl-terminated siloxanes.
Variations of Oceanic Crust in the Northeastern Gulf of Mexico From Integrated Geophysical Analysis
NASA Astrophysics Data System (ADS)
Liu, M.; Filina, I.
2017-12-01
The tectonic history of the Gulf of Mexico remains a subject of debate due to the structural complexity of the area and a lack of geological constraints. In this study, we focus our investigation on the oceanic domain of the northeastern Gulf of Mexico to characterize the crustal distribution and structures. We use published satellite-derived potential fields (gravity and magnetics), seismic refraction data (GUMBO3 and GUMBO4) and well logs to build subsurface models that honor all available datasets. In a previous study, we applied filters to potential fields grids and mapped the segments of an extinct mid-ocean ridge, the ocean-continent boundary (OCB) and several transform faults in our study area. We also developed a 2D potential fields model for seismic profile GUMBO3 (Eddy et al., 2014). The objectives of this study are: 1) to develop a similar model for another seismic profile, GUMBO4 (Christeson, 2014), and derive subsurface properties (densities and magnetic susceptibilities), 2) to compare and contrast the two models, and 3) to establish the spatial relationship between the two crustal domains. Interpreted seismic velocities for the profiles GUMBO3 and GUMBO4 show significant differences, suggesting that these two profiles cross different segments of oceanic crust. The total crustal thickness along GUMBO3 is much greater (up to 10 km) than that for GUMBO4 (5.7 km). The upper crustal velocity along GUMBO4 (6.0-6.7 km/s) is significantly higher than that for GUMBO3 (~5.8 km/s). Based on our 2D potential fields models along both of the GUMBO lines, we summarize physical properties (seismic velocities, densities and magnetic susceptibilities) for different crustal segments, which are proxies for lithologies. We use our filtered potential fields grids to establish the spatial relationship between these two segments of oceanic crust.
The results of our integrated geophysical analysis will be used as additional constraints for the future tectonic reconstruction of the Gulf of Mexico.
Automatic segmentation of bones from digital hand radiographs
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Taira, Ricky K.; Shim, Hyeonjoon; Keaton, Patricia
1995-05-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The algorithm uses an object-oriented approach comprising several stages, beginning with the most general objects to be segmented, such as the outline of the hand against the background, and proceeding in a succession of stages to the most specific object, such as a specific phalangeal bone from a digit of the hand. Each stage carries custom operators unique to the needs of that specific stage, which aid in more accurate results. The method is further aided by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. Shape models, 1-D wrist profiles, as well as an interpretation tree are used to map model and data contour segments. Shape analysis is performed using an arc-length orientation transform. The method is tested on close to 340 phalangeal and epiphyseal objects segmented from 17 cases of pediatric hand images obtained from our clinical PACS. Patient age ranges from 2 - 16 years. A pediatric radiologist preliminarily assessed the resulting object contours, which were found to be accurate to within 95% for cases with non-fused bones and to within 85% for cases with fused bones. With accurate and robust results, the method can be applied toward areas such as the determination of bone age, the development of a normal hand atlas, and the characterization of many congenital and acquired growth diseases. Furthermore, this method's architecture can be applied to other image segmentation problems.
Quantitative Analysis of Geometry and Lateral Symmetry of Proximal Middle Cerebral Artery.
Peter, Roman; Emmer, Bart J; van Es, Adriaan C G M; van Walsum, Theo
2017-10-01
The purpose of our work is to quantitatively assess clinically relevant geometric properties of proximal middle cerebral arteries (pMCA), to investigate the degree of their lateral symmetry, and to evaluate whether the pMCA can be modeled by using state-of-the-art deformable image registration of the ipsi- and contralateral hemispheres. Individual pMCA segments were identified, quantified, and statistically evaluated on a set of 55 publicly available magnetic resonance angiography time-of-flight images. Rigid and deformable image registrations were used for geometric alignment of the ipsi- and contralateral hemispheres. Lateral symmetry of relevant geometric properties was evaluated before and after the image registration. No significant lateral differences regarding tortuosity and diameters of contralateral M1 segments of pMCA were identified. Regarding the length of the M1 segment, 44% of all subjects could be considered laterally symmetrical. A dominant M2 segment was identified in 30% of men and 9% of women in both brain hemispheres. Deformable image registration performed significantly better (P < .01) than rigid registration with regard to distances between the ipsi- and the contralateral centerlines of M1 segments (1.5 ± 1.1 mm versus 2.8 ± 1.2 mm, respectively) and between the M1 and the anterior cerebral artery (ACA) branching points (1.6 ± 1.4 mm after deformable registration). Although natural lateral variation of the length of M1 may not allow for sufficient modeling of the complete pMCA, deformable image registration of the contralateral brain hemisphere to the ipsilateral hemisphere is feasible for localization of the ACA-M1 branching point and for modeling 71 ± 23% of the M1 segment. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
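The abstract above compares centerline tortuosity between hemispheres without defining the metric. A common definition, which may or may not be the one used in the paper, is the arc length of the centerline divided by the straight-line chord between its endpoints. The sketch below uses that assumption; the function name and example path are illustrative, not from the paper.

```python
import numpy as np

def tortuosity(centerline: np.ndarray) -> float:
    """Tortuosity of an ordered (N, 3) centerline:
    arc length along the curve divided by the endpoint chord length."""
    steps = np.diff(centerline, axis=0)          # consecutive point-to-point vectors
    arc = np.linalg.norm(steps, axis=1).sum()    # total path length
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return arc / chord

# A right-angle path from (0,0,0) to (3,4,0): arc length 7, chord length 5.
path = np.array([[0, 0, 0], [3, 0, 0], [3, 4, 0]], dtype=float)
print(tortuosity(path))  # 1.4
```

A perfectly straight vessel has tortuosity 1.0; larger values indicate more winding geometry.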
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the task of segmenting medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposes a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset of CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.
NASA Astrophysics Data System (ADS)
Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.
2012-12-01
Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
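Several entries in this section report agreement between manual and automated segmentations as a Dice score, as in the 0.89 ± 0.04 and 0.91 ± 0.02 figures above. For reference, the metric is 2|A ∩ B| / (|A| + |B|) over two binary masks. A minimal sketch (array names are illustrative, not from any of the papers):

```python
import numpy as np

def dice_score(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Manual mask covers a 2x2 block; automated mask overshoots by one column.
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:3] = True
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:4] = True
print(round(dice_score(manual, auto), 2))  # 0.8
```

A score of 1.0 means the masks are identical; 0.0 means they do not overlap at all.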
Model Validation of an RSRM Transporter Through Full-scale Operational and Modal Testing
NASA Technical Reports Server (NTRS)
Brillhart, Ralph; Davis, Joshua; Allred, Bradley
2009-01-01
The Reusable Solid Rocket Motor (RSRM) segments, which are part of the current Space Shuttle system and will provide the first stage of the Ares launch vehicle, must be transported from their manufacturing facility in Promontory, Utah, to a railhead in Corinne, Utah. This approximately 25-mile trip on secondary paved roads is accomplished using a special transporter system which lifts and conveys each individual segment. ATK Launch Systems (ATK) has recently obtained a new set of these transporters from Scheuerle, a company in Germany. The transporter is a 96-wheel, dual tractor vehicle that supports the payload via a hydraulic suspension. Since this system is a different design than was previously used, computer modeling with validation via test is required to ensure that the environment to which the segment is exposed is not too severe for this space-critical hardware. Accurate prediction of the loads imparted to the rocket motor is essential in order to prevent damage to the segment. To develop and validate a finite element model capable of such accurate predictions, ATA Engineering, Inc., teamed with ATK to perform a modal survey of the transport system, including a forward RSRM segment. A set of electrodynamic shakers was placed around the transporter at locations capable of exciting the transporter vehicle dynamics. Forces from the shakers with varying phase combinations were applied using sinusoidal sweep excitation. The relative phase of the shaker forcing functions was adjusted to match the shape characteristics of each of several target modes, thereby customizing each sweep run for exciting a particular mode. The resulting frequency response functions (FRF) from this series of sine sweeps allowed identification of all target modes and other higher-order modes, allowing good comparison to the finite element model. Furthermore, the survey-derived modal frequencies were correlated with peak frequencies observed during road-going operating tests. 
This correlation enabled verification of the most significant modes contributing to real-world loading of the motor segment under transport. After traditional model updating, dynamic simulation of the transportation environment was compared to the measured operating data to provide further validation of the analysis model. Keywords: validation, correlation, modal test, rocket motor, transporter
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the optimal point x to the lines connecting the corresponding points between In. and Ex. Using the non-linear optimization, the optimal point was evaluated and compared between reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable to explain lung expansion. This lung motion analysis based on vector analysis and non-linear optimization shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
Rimbaş, Roxana C; Mihăilă, Sorina; Enescu, Oana A; Vinereanu, Dragoş
2016-12-01
2D speckle tracking echocardiography (2DSTE) has been proved accurate for the assessment of RV function. However, normal values for RV strain refer mostly to 3- or 6-segment models, excluding the contribution of other RV walls to RV function. We analyzed RV function by 2DSTE in a normal population, using parasternal two-chamber (2C) and apical four-chamber (4C) RV views, and created a new 12-segment model for a potentially better definition of RV function. We prospectively evaluated 100 normal subjects using 2DE and STE. We assessed RV systolic function from regional strain (basal, mid, and apical) and at the level of each wall: lateral (LS), septal (SS), inferior (IS), and anterior (AS), as well as global strain for the 4C (4CGS) and 2C (2CGS) views. Global systolic strain rate (SRs) was measured from the 2C and 4C views. Diastolic function was assessed from early (SRe) and late global strain rate (SRl), for both views. A total of 70 healthy individuals (48±15 years, 34 men) were suitable for concomitant 4C and 2C RV analysis. Feasibility of the STE analysis was 87.8%. We found significantly lower SS by comparison with LS, AS, and IS (P<.001). All S/SR parameters (GS, SRs, and SRe) were higher in the 2C view than in the 4C view (P<.001). All systolic S/SR parameters did not change with age. The early diastolic SR decreased, while the late diastolic SR increased, with age. Our 12-segment RV strain model is feasible. Moreover, 2DSTE analysis using 2C and 4C views of the RV does not provide similar information; rather, the views offer complementary data. This might be of particular clinical interest in diseases with regional RV dysfunction. © 2016, Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Xie, Xiongyao; Liu, Yujian; Huang, Hongwei; Du, Jun; Zhang, Fengshou; Liu, Lanbo
2007-09-01
For shield tunnelling construction in soft soil areas, the coverage uniformity and consolidation quality of the grout mortar injected behind the prefabricated tunnel segment is the main concern for tunnel safety and ground settlement. In this paper, ground-penetrating radar (GPR) was applied to evaluate the grout behind the tunnel lining segments in Shanghai, China. The dielectric permittivity of the grout material used in Shanghai Metro tunnelling construction was measured in the laboratory. Combining physical modelling results with finite-difference time-domain numerical modelling results, we suggest that a 200 MHz antenna is well suited to penetrate the reinforced steel bar network of the tunnel lining segment and to test grout patterns behind the segment. The electromagnetic velocity in the grout behind the tunnel segment is 0.1 m ns⁻¹, based on the analysis of field common-midpoint data. A wave-translated method was put forward to process the GPR images. Furthermore, combining the information acquired by GPR with experience data, a GPR non-destructive test standard for grout mortar evaluation in Shanghai Metro tunnel construction was proposed. The grout behind the tunnel lining segment is classified into three types: uncompensated grout mortar with a thickness less than 10 cm, normal grout mortar with a thickness between 10 cm and 30 cm, and overcompensated grout mortar, which is more than 30 cm thick. This classification method is easily put into practice.
Variational-based segmentation of bio-pores in tomographic images
NASA Astrophysics Data System (ADS)
Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele
2017-01-01
X-ray computed tomography (CT) combined with a quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components keep the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores due to their elongated shape and the low gray value difference to the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization were successfully applied for image segmentation. In this paper we apply these methods for the first time for the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes the varying attenuation values in the depth into account. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. Pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach features well-connected smooth pores while not detecting smaller or shallower pores. This is an advantage in cases where the main bio-pores network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections as observed for the other thresholding methods.
Multiclassifier fusion in human brain MR segmentation: modelling convergence.
Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander
2006-01-01
Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
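The fusion step described above, combining multiple propagated label volumes into one improved segmentation, is commonly realized as a per-voxel majority vote (often called "vote-rule" label fusion). The sketch below illustrates that baseline; the paper's actual decision rule may differ, and the function and array names are illustrative:

```python
import numpy as np

def fuse_labels(propagated: list) -> np.ndarray:
    """Fuse propagated label volumes (integer arrays of equal shape)
    by per-voxel majority vote; ties go to the lower label index."""
    stacked = np.stack(propagated)              # shape: (n_atlases, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count the votes each label receives at every voxel, then take the winner.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 "propagated label volumes" with labels 0, 1, 2.
seg1 = np.array([[0, 1], [1, 2]])
seg2 = np.array([[0, 1], [2, 2]])
seg3 = np.array([[0, 2], [1, 2]])
print(fuse_labels([seg1, seg2, seg3]))
# [[0 1]
#  [1 2]]
```

With n independent raters, isolated errors in individual propagated volumes are outvoted, which is the intuition behind the accuracy improvement the model in the abstract predicts as a function of the number of input segmentations.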
Salted and preserved duck eggs: a consumer market segmentation analysis.
Arthur, Jennifer; Wiseman, Kelleen; Cheng, K M
2015-08-01
The combination of increasing ethnic diversity in North America and growing consumer support for local food products may present opportunities for local producers and processors in the ethnic foods product category. Our study examined the ethnic Chinese (pop. 402,000) market for salted and preserved duck eggs in Vancouver, British Columbia (BC), Canada. The objective of the study was to develop a segmentation model using survey data to categorize consumer groups based on their attitudes and the importance they placed on product attributes. We further used post-segmentation acculturation score, demographics, and buyer behaviors to define these groups. Data were gathered via a survey of randomly selected Vancouver households with Chinese surnames (n = 410), targeting the adult responsible for grocery shopping. Results from principal component analysis and a 2-step cluster analysis suggest the existence of 4 market segments, described as Enthusiasts, Potentialists, Pragmatists, Health Skeptics (salted duck eggs), and Neutralists (preserved duck eggs). Kruskal-Wallis tests and post hoc Mann-Whitney tests found significant differences between segments in terms of attitudes and the importance placed on product characteristics. Health Skeptics, preserved egg Potentialists, and Pragmatists of both egg products were significantly biased against Chinese imports compared to others. Except for Enthusiasts, segments disagreed that eggs are 'Healthy Products'. Preserved egg Enthusiasts had a significantly lower acculturation score (AS) compared to all others, while salted egg Enthusiasts had a lower AS compared to Health Skeptics. All segments rated "produced in BC, not mainland China" products in the "neutral to very likely" range for increasing their satisfaction with the eggs. Results also indicate that buyers of each egg type are willing to pay an average premium of at least 10% more for BC produced products versus imports, with all other characteristics equal.
Overall results indicate that opportunities exist for local producers and processors: Chinese Canadians with lower AS form a core part of the potential market. © 2015 Poultry Science Association Inc.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach and two implementations of it on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
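The "globally best merges first" idea can be illustrated with a small sequential toy: start with one region per pixel and repeatedly merge the adjacent pair of regions whose mean intensities are closest, stopping when even the best merge exceeds a threshold. This is an illustrative sketch of the merge criterion only, not the parallel MPP implementation the paper describes; all names and the threshold are assumptions.

```python
import numpy as np
from itertools import product

def best_merge_segmentation(img: np.ndarray, threshold: float) -> np.ndarray:
    """Toy globally-best-merge region growing on a 2D grayscale image."""
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)            # one region per pixel
    mean = {i: float(v) for i, v in enumerate(img.ravel())}
    size = {i: 1 for i in mean}
    while True:
        # Collect all currently adjacent region pairs (4-neighbourhood).
        pairs = set()
        for y, x in product(range(h), range(w)):
            for dy, dx in ((0, 1), (1, 0)):
                y2, x2 = y + dy, x + dx
                if y2 < h and x2 < w and labels[y, x] != labels[y2, x2]:
                    pairs.add(tuple(sorted((labels[y, x], labels[y2, x2]))))
        if not pairs:
            break
        # Globally best merge: the adjacent pair with the closest means.
        a, b = min(pairs, key=lambda p: abs(mean[p[0]] - mean[p[1]]))
        if abs(mean[a] - mean[b]) > threshold:
            break                                       # best merge too costly
        # Merge b into a, updating the running mean.
        mean[a] = (mean[a] * size[a] + mean[b] * size[b]) / (size[a] + size[b])
        size[a] += size[b]
        labels[labels == b] = a
    return labels

# Dark 2x2 block on the left, bright column on the right.
img = np.array([[10, 11, 50], [10, 12, 52]], dtype=float)
seg = best_merge_segmentation(img, threshold=5.0)
print(len(np.unique(seg)))  # 2
```

Because the cheapest merge anywhere in the image is always taken first, the result does not depend on a pixel scan order, which is exactly the order-dependence problem the abstract raises about conventional region growing.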
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three-dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are based either on photogrammetry or on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
Inferring action structure and causal relationships in continuous sequences of human action.
Buchsbaum, Daphna; Griffiths, Thomas L; Plunkett, Dillon; Gopnik, Alison; Baldwin, Dare
2015-02-01
In the real world, causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined, as well as four experiments investigating human action segmentation and causal inference. We find that both people and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries. Copyright © 2014. Published by Elsevier Inc.
Analysis of role of bone compliance on mechanics of a lumbar motion segment.
Shirazi-Adl, A
1994-11-01
A large deformation elasto-static finite element formulation is developed and used to determine the role of bone compliance in the mechanics of a lumbar motion segment. This is done by simulating each vertebra as a deformable body with realistic material properties, as a deformable body with stiffer or softer mechanical properties, as a single rigid body, or finally as two rigid bodies attached by deformable beams. The single loadings of axial compression, flexion moment, extension moment, and axial torque are considered. The results indicate a marked effect of alterations in bone material properties on the biomechanics of lumbar segments, especially under larger loads. Biomechanical studies of the lumbar spine should, therefore, be performed and evaluated in the light of such dependency. A model for the bony vertebrae is finally proposed that preserves both accuracy and cost-efficiency in nonlinear finite element analyses of spinal multi-motion segment systems.
Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M
2009-01-01
Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and the abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its segmentation efficiency from O(N⁴) to O(N²) for an image with N² pixels, making it practical and scalable for high-throughput microscopy imaging studies.
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion-linked parameters. Complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation, in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
Segments and Stutters: Early Years Teachers and Becoming-Professional
ERIC Educational Resources Information Center
Fairchild, Nikki
2017-01-01
There has been extensive research and analysis of the professionalization of early childhood educators/teachers. The recent promotion of a teacher-led workforce in England has further focused discussions on the modelling of early years teachers as professionals. In this article, the author develops an alternative analysis using the concepts of…
Analysis of design attributes and crashes on the Oregon highway system : final report.
DOT National Transportation Integrated Search
2001-08-01
This report has investigated the statistical relationship between crash activity and roadway design attributes on the Oregon state : highway system. Crash models were estimated from highway segments distinguished by functional classification (freeway...
Afar-wide Crustal Strain Field from Multiple InSAR Tracks
NASA Astrophysics Data System (ADS)
Pagli, C.; Wright, T. J.; Wang, H.; Calais, E.; Bennati Rassion, L. S.; Ebinger, C. J.; Lewi, E.
2010-12-01
Onset of a rifting episode in the Dabbahu volcanic segment, Afar (Ethiopia), in 2005 renewed interest in crustal deformation studies in the area. As a consequence, an extensive geodetic data set, including InSAR and GPS measurements, has been acquired over Afar and holds great potential for improving our understanding of the extensional processes that operate during the final stages of continental rupture. The current geodetic observational and modelling strategy has focused on detailed, localised studies of dyke intrusions and eruptions, mainly in the Dabbahu segment. However, an eruption in the Erta 'Ale volcanic segment in 2008, and a cluster of earthquakes observed in the Tat 'Ale segment, are testament to activity elsewhere in Afar. Here we make use of the vast geodetic dataset available to obtain strain information over the whole Afar depression. A systematic analysis of all the volcanic segments, including Dabbahu, Manda-Hararo, Alayta, Tat 'Ale, Erta 'Ale, and the Djibouti deformation zone, is undertaken. We use InSAR data from multiple tracks together with available GPS measurements to obtain a velocity field model for Afar. We use over 300 radar images acquired by the Envisat satellite in both descending and ascending orbits, from 12 distinct tracks in image and wide swath modes, spanning the time period from October 2005 to the present. We obtain the line-of-sight deformation rates from each InSAR track using a network approach and then combine the InSAR velocities with the GPS observations, as suggested by Wright and Wang (2010) following the method of England and Molnar (1997). A mesh is constructed over the Afar area, and we then solve for the horizontal and vertical velocities on each node.
The resultant full 3D Afar-wide velocity field shows where current strains are being accumulated within the various volcanic segments of Afar, the width of the plate boundary deformation zone and possible connections between distinct volcanic segments on a regional scale. A comparison of crustal strains from the geodetic analysis with the seismicity data will also be made.
Bishop, Chris; Arnold, John B; Fraysse, Francois; Thewlis, Dominic
2015-01-01
To investigate in-shoe foot kinematics, holes are often cut in the shoe upper to allow markers to be placed on the skin surface. However, there is currently a lack of understanding as to what is an appropriate size. This study aimed to demonstrate a method to assess whether different diameter holes were large enough to allow free motion of marker wands mounted on the skin surface during walking using a multi-segment foot model. Eighteen participants underwent an analysis of foot kinematics whilst walking barefoot and wearing shoes with different size holes (15 mm, 20 mm and 25 mm). The analysis was conducted in two parts: first, the trajectories of the individual skin-mounted markers were analysed within a 2D ellipse to investigate total displacement of each marker during stance; second, a geometrical analysis was conducted to assess cluster deformation of the hindfoot and midfoot-forefoot segments. Whereas marker movement in the 15 and 20 mm conditions was restricted, the marker movement in the 25 mm condition did not exceed the radius at any anatomical location. Despite significant differences in the isotropy index of the medial and lateral calcaneus markers between the 25 mm and barefoot conditions, the differences were due to the effect of footwear on the foot and not a result of the marker wands hitting the shoe upper. In conclusion, the method proposed and its results can be used to increase confidence in the representativeness of joint kinematics with respect to in-shoe multi-segment foot motion during walking. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Zhao, Xin; Du, Lin; Xie, Youzhuan; Zhao, Jie
2018-06-01
We used a finite element (FE) analysis to investigate the biomechanical changes caused by transforaminal lumbar interbody fusion (TLIF) at the L4-L5 level as a function of lumbar lordosis (LL). A lumbar FE model (L1-S5) was constructed based on computed tomography scans of a 30-year-old healthy male volunteer (pelvic incidence = 50°; LL = 52°). We investigated the influence of LL on the biomechanical behavior of the lumbar spine after TLIF in L4-L5 fusion models with 57°, 52°, 47°, and 40° LL. The LL was defined as the angle between the superior end plate of L1 and the superior end plate of S1. A 150-N vertical axial preload was imposed on the superior surface of L3. A 10-N·m moment was simultaneously applied on the L3 superior surface along the radial direction to simulate the 4 basic physiologic motions of flexion, extension, lateral bending, and torsion in the numeric simulations. The range of motion (ROM) and intradiscal pressure (IDP) of L3-L4 were evaluated and compared in the simulated cases. In all motion patterns, the ROM and IDP were both increased after TLIF. In addition, the decrease in lordosis generally increased the ROM and IDP in all motion patterns. This FE analysis indicated that decreased spinal lordosis may evoke overstress of the adjacent segment and increase the risk of the pathologic development of adjacent segment degeneration; thus, adjacent segment degeneration should be considered when planning a spinal fusion procedure. Copyright © 2018. Published by Elsevier Inc.
DOT National Transportation Integrated Search
2017-06-01
The purpose of this study was to evaluate if the Surrogate Safety Assessment Model (SSAM) could be used to assess the safety of a highway segment or an intersection in terms of the number and type of conflicts and to compare the safety effects of mul...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donnelly, H.; Fullwood, R.; Glancy, J.
This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses the Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapters 3.2.1 and 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force that incorporates information from both the foreground and background regions, driving the curve to evolve according to the criterion of minimum variation of the two regions. Experiments have shown that the proposed model is robust to initial contour placement and can segment weak-edge medical images automatically. In addition, segmentation tests on noisy medical images filtered by a curvature flow filter, which preserves edge features, showed a significant effect.
Automated detection of videotaped neonatal seizures of epileptic origin.
Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-06-01
This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity >90%.
For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity >90% and specificity >95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity >95% and specificity >95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
Fu, Xin; Yuan, Jun
2017-07-24
Coherent x-ray diffraction investigations on Ag five-fold twinned nanowires (FTNWs) have drawn controversial conclusions concerning whether the intrinsic 7.35° angular gap could be compensated homogeneously through phase transformation or inhomogeneously by forming a disclination strain field. In those studies, the x-ray techniques only provided an ensemble average of the structural information from all the Ag nanowires. Here, using a three-dimensional (3D) electron diffraction mapping approach, we non-destructively explore the cross-sectional strain and the related strain-relief defect structures of an individual Ag FTNW with a diameter of about 30 nm. Quantitative analysis of the fine structure of the intensity distribution, combined with kinematic electron diffraction simulation, confirms that for such a Ag FTNW the intrinsic 7.35° angular deficiency results in an inhomogeneous strain field within each single crystalline segment, consistent with the disclination model of stress relief. Moreover, the five crystalline segments are found to be strained differently. Modeling analysis in combination with system energy calculation further indicates that the elastic strain energy within some crystalline segments could be partially relieved by the creation of stacking fault layers near the twin boundaries. Our study demonstrates that 3D electron diffraction mapping is a powerful tool for the cross-sectional strain analysis of complex 1D nanostructures.
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Off- and Along-Axis Slow Spreading Ridge Segment Characters: Insights From 3d Thermal Modeling
NASA Astrophysics Data System (ADS)
Gac, S.; Tisseau, C.; Dyment, J.
2001-12-01
Many observations along the Mid-Atlantic Ridge segments suggest a correlation between surface characters (length, axial morphology) and the thermal state of the segment. Thibaud et al. (1998) classify segments according to their thermal state: "colder" segments shorter than 30 km show a weak magmatic activity, and "hotter" segments as long as 90 km show a robust magmatic activity. The existence of such a correlation suggests that the thermal structure of a slow spreading ridge segment explains most of the surface observations. Here we test the physical coherence of such an integrated thermal model and evaluate it quantitatively. The different kinds of segment would constitute different phases in a segment evolution, the segment evolving progressively from a "colder" to a "hotter" and then back to a "colder" state; we also test the consistency of this evolution scheme. To test these hypotheses we have developed a 3D numerical model for the thermal structure and evolution of a slow spreading ridge segment. The thermal structure is controlled by the geometry and dimensions of a permanently hot zone, imposed beneath the segment center, where the adiabatic ascent of magmatic material is simulated. To compare the model with the observations, several geophysical quantities that depend on the thermal state are simulated: crustal thickness variations along axis, gravity anomalies (reflecting density variations), and the maximum depth of earthquakes (corresponding to the depth of the 750 °C isotherm). The thermal structure of a particular segment is constrained by comparing the simulated quantities to the observed ones. Considering realistic magnetization parameters, the magnetic anomalies generated from the same thermal structure and evolution reproduce the observed magnetic anomaly amplitude variations along the segment. The thermal structures accounting for the observations are determined for each kind of segment (from "colder" to "hotter").
The evolution of the thermal structure from the "colder" to the "hotter" segments gives credence to a temporal relationship between the different kinds of segment. The resulting thermal evolution model of slow spreading ridge segments may explain the rhombic shapes observed off-axis.
CT-based manual segmentation and evaluation of paranasal sinuses.
Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G
2009-04-01
Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm³, left side 17.9 cm³, right frontal sinus 4.2 cm³, left side 4.0 cm³, total frontal sinuses 7.9 cm³, sphenoid sinus right side 5.3 cm³, left side 5.5 cm³, total sphenoid sinus volume 11.2 cm³. Our manually segmented 3D models present the patient's individual anatomy with a special focus on structures in danger according to the diverse colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods so far described provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.
TARPARE: a method for selecting target audiences for public health interventions.
Donovan, R J; Egger, G; Francas, M
1999-06-01
This paper presents a model to help the health promotion practitioner systematically compare and select appropriate target groups when a number of segments compete for attention and resources. TARPARE assesses previously identified segments on the following criteria: T: the Total number of persons in the segment; AR: the proportion of At Risk persons in the segment; P: the Persuasibility of the target audience; A: the Accessibility of the target audience; R: the Resources required to meet the needs of the target audience; and E: Equity, social justice considerations. The assessment can be applied qualitatively, or scores can be assigned to each segment. Two examples are presented. TARPARE is a useful and flexible model for understanding the various segments in a population of interest and for assessing the potential viability of interventions directed at each segment. The model is particularly useful when there is a need to prioritise segments in terms of available budgets. It provides a disciplined approach to target selection and forces consideration of what weights should be applied to the different criteria, and how these might vary for different issues or objectives. TARPARE also assesses segments in terms of an overall likelihood of optimal impact for each segment. Targeting high-scoring segments is likely to lead to greater program success than targeting low-scoring segments.
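A quantitative application of the TARPARE criteria amounts to a weighted scoring-and-ranking pass, sketched below; the ratings, 1-10 scale, and equal default weights are invented for illustration and are not prescribed by the paper:

```python
# Illustrative TARPARE scoring: each segment is rated on the six criteria,
# criteria are weighted, and segments are ranked by total score.
CRITERIA = ["T", "AR", "P", "A", "R", "E"]

def tarpare_score(ratings, weights=None):
    # Default to equal weights; the paper stresses that weights should be
    # debated explicitly for each issue and objective.
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(ratings[c] * weights[c] for c in CRITERIA)

def rank_segments(segments, weights=None):
    # Highest-scoring segments first: the most viable intervention targets.
    return sorted(segments, key=lambda s: tarpare_score(s[1], weights),
                  reverse=True)
```

Passing a custom `weights` dictionary models the paper's point that criterion weights may vary across issues or objectives.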
Compositionality in neural control: an interdisciplinary study of scribbling movements in primates
Abeles, Moshe; Diesmann, Markus; Flash, Tamar; Geisel, Theo; Herrmann, Michael; Teicher, Mina
2013-01-01
This article discusses the compositional structure of hand movements by analyzing and modeling neural and behavioral data obtained from experiments where a monkey (Macaca fascicularis) performed scribbling movements induced by a search task. Using geometrically based approaches to movement segmentation, it is shown that the hand trajectories are composed of elementary segments that are primarily parabolic in shape. The segments could be categorized into a small number of classes on the basis of decreasing intra-class variance over the course of training. A separate classification of the neural data employing a hidden Markov model showed a coincidence of the neural states with the behavioral categories. An additional analysis of both types of data by a data mining method provided evidence that the neural activity patterns underlying the behavioral primitives were formed by sets of specific and precise spike patterns. A geometric description of the movement trajectories, together with precise neural timing data indicates a compositional variant of a realistic synfire chain model. This model reproduces the typical shapes and temporal properties of the trajectories; hence the structure and composition of the primitives may reflect meaningful behavior. PMID:24062679
Crash energy absorption of two-segment crash box with holes under frontal load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choiron, Moch Agus, E-mail: agus-choiron@ub.ac.id; Sudjito,; Hidayati, Nafisah Arina
Crash box is one of the passive safety components designed to absorb impact energy during a collision. Crash box designs have been developed in order to obtain optimum crashworthiness performance. A circular cross section was first investigated with a one-segment design; its performance is strongly influenced by its length, which makes it sensitive to the occurrence of buckling. In this study, a two-segment crash box design with additional holes is investigated, and its deformation behavior and crash energy absorption are observed. The crash box modelling is performed by finite element analysis. The crash test components were the impactor, the crash box, and a fixed rigid base. The impactor and the fixed base are modelled as rigid bodies, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h are selected, with aluminum alloy as the crash box material. Based on the simulation results, the configuration with 2 holes located at ¾ of the length has the largest crash energy absorption. This condition is associated with the deformation pattern: this crash box model produces an axisymmetric mode, unlike the other models.
Training models of anatomic shape variability
Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang
2008-01-01
Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same patient male pelvis and head and neck images, and cross patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919
Segmentation of the ovine lung in 3D CT Images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.
2004-04-01
Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually-traced boundary is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
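The lung-extraction step (optimal thresholding followed by connected-components analysis) can be sketched on a toy 2-D grid as below; the fissure-based left/right separation step is not shown, and the data and tolerances are illustrative only:

```python
# Iteratively updated ("optimal") threshold: midpoint of the two class means,
# repeated until it stops moving (Ridler-Calvard style).
def optimal_threshold(values):
    t = sum(values) / len(values)
    while True:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        new_t = 0.5 * (sum(low) / len(low) + sum(high) / len(high))
        if abs(new_t - t) < 1e-6:
            return new_t
        t = new_t

# 4-connected components over a 2-D boolean mask via flood fill;
# in lung extraction the largest low-intensity components are kept.
def connected_components(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps
```

On CT data the mask would select voxels below the threshold (air-filled lung is dark), and the two largest components would be taken as the lung region.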
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.
Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K
2013-03-01
Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
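HACA itself alternates kernel k-means with a dynamic time-alignment kernel; the following is a deliberately simplified sketch of the alternating idea, not HACA. With cluster centres held fixed, dynamic programming finds the minimum-cost partition of a 1D series into segments, each assigned to its best cluster; plain Euclidean cost stands in for the alignment kernel, and the series and centres are invented toy values.

```python
def segment_dp(series, centres, min_len=1, max_len=None):
    """DP over cut points: best[i] = min cost of segmenting series[:i],
    where each segment is charged under its best cluster centre."""
    n = len(series)
    max_len = max_len or n
    best = [0.0] + [float("inf")] * n
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for L in range(min_len, min(max_len, i) + 1):
            seg = series[i - L:i]
            # cost of this segment under its best cluster centre
            cost, label = min((sum((x - c) ** 2 for x in seg), k)
                              for k, c in enumerate(centres))
            if best[i - L] + cost < best[i]:
                best[i] = best[i - L] + cost
                back[i] = (i - L, label)
    # recover (start, end, cluster) triples by backtracking
    segs, i = [], n
    while i > 0:
        j, label = back[i]
        segs.append((j, i, label))
        i = j
    return segs[::-1], best[n]

series = [0.1, 0.0, 0.2, 5.1, 4.9, 5.0, 0.1, 0.2]
segs, cost = segment_dp(series, centres=[0.0, 5.0])
```

In a full coordinate-descent scheme the centres would then be re-estimated from the induced segments and the DP repeated until convergence.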
Hedonic analysis of the price of UHT-treated milk in Italy.
Bimbo, Francesco; Bonanno, Alessandro; Liu, Xuan; Viscecchia, Rosaria
2016-02-01
The Italian market for UHT milk has been growing thanks to both consumers' interest in products with an extended shelf life and to the lower prices of these products compared with refrigerated, pasteurized milk. However, because the lower prices of UHT milk can hinder producers' margins, manufacturers have introduced new versions of UHT milk products such as lactose-free options, vitamin-enriched products, and milk for infants, with the goal of differentiating their products, escaping the price competition, and gaining higher margins. In this paper, we estimated the contribution of different attributes to UHT milk prices in Italy by using a database of Italian UHT milk sales and a hedonic price model. In our analysis, we considered 2 UHT milk market segments: products for infants and those for the general population. We found premiums varied with the milk's attributes as well as between the segments analyzed: n-3 fatty acids, organic, and added calcium were the most valuable product features in the general population segment, whereas in the infant segment fiber, glass packaging, and the targeting of newborns delivered the highest premiums. Finally, we present recommendations for UHT milk manufacturers. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
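A hedonic price model regresses price on product attributes, so each coefficient can be read as the implicit premium for an attribute. Below is a minimal sketch with invented data (two dummy attributes, ordinary least squares via the normal equations); it is not the authors' specification or data.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def hedonic_ols(X, y):
    """OLS coefficients from the normal equations (X'X) b = X'y."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Invented observations: columns = [intercept, organic, added calcium].
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 0]]
y = [1.00, 1.30, 1.15, 1.45, 1.00, 1.30]
beta = hedonic_ols(X, y)  # [base price, organic premium, calcium premium]
```

With this noise-free toy data the fitted premiums are recovered exactly; with real sales data the coefficients would instead be estimates with standard errors.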
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. 
With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
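The average SUV reported above follows the standard definition: tissue activity concentration normalised by injected dose per unit body weight, averaged over the segmented region. A minimal sketch with invented numbers (consistent units: Bq/ml voxels, Bq dose, grams body weight):

```python
def suv(voxel_bq_per_ml, injected_dose_bq, body_weight_g):
    """Standardized uptake value of one voxel:
    activity concentration / (injected dose per gram of body weight)."""
    return voxel_bq_per_ml / (injected_dose_bq / body_weight_g)

def mean_suv(region_values, injected_dose_bq, body_weight_g):
    """Average SUV over a segmented region (e.g. cerebellum or liver)."""
    return sum(suv(v, injected_dose_bq, body_weight_g)
               for v in region_values) / len(region_values)

# Invented example: two voxels, 370 MBq injected, 70 kg patient.
m = mean_suv([5000.0, 7000.0], injected_dose_bq=370e6, body_weight_g=70000.0)
```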
In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images
NASA Astrophysics Data System (ADS)
Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.
2009-04-01
Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. 
The cardiac output derived from this automatic segmentation was validated quantitatively by comparing it with the CO values measured from the volume flow in the pulmonary artery. Relative bias varied between 0 and -17%, where the nominal accuracy of the flow meter is in the order of 10%. Assuming the CO measurements from the flow probe as a gold standard, excellent correlation (r = 0.99) was observed with the CO estimates obtained from image segmentation.
Deformable M-Reps for 3D Medical Image Segmentation.
Pizer, Stephen M; Fletcher, P Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z; Fridman, Yonatan; Fritsch, Daniel S; Gash, Graham; Glotzer, John M; Jiroutek, Michael R; Lu, Conglin; Muller, Keith E; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L
2003-11-01
M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures - each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
Lou, Jigang; Li, Yuanchao; Wang, Beiyu; Meng, Yang; Wu, Tingkui; Liu, Hao
2017-01-01
In vitro biomechanical analysis after cervical disc replacement (CDR) with a novel artificial disc prosthesis (mobile core) was conducted and compared with the intact model, simulated fusion, and CDR with a fixed-core prosthesis. The purpose of this experimental study was to analyze the biomechanical changes after CDR with a novel prosthesis and the differences between fixed- and mobile-core prostheses. Six human cadaveric C2–C7 specimens were biomechanically tested sequentially in 4 different spinal models: intact specimens, simulated fusion, CDR with a fixed-core prosthesis (Discover, DePuy), and CDR with a mobile-core prosthesis (Pretic-I, Trauson). Moments up to 2 Nm with a 75 N follower load were applied in flexion–extension, left and right lateral bending, and left and right axial rotation. The total range of motion (ROM), segmental ROM, and adjacent intradiscal pressure (IDP) were calculated and analyzed in the 4 spinal models, as were the differences between the 2 disc prostheses. Compared with the intact specimens, the total ROM, segmental ROM, and IDP at the adjacent segments showed no significant difference after arthroplasty. Moreover, CDR with a mobile-core prosthesis presented slightly higher target-segment (C5/6) and total ROM values than CDR with a fixed-core prosthesis (P > .05). In addition, the difference in IDP at C4/5 between the 2 prostheses was not statistically significant in any direction of motion. However, the IDP at C6/7 after CDR with a mobile-core prosthesis was significantly lower than with a fixed-core prosthesis in flexion, extension, and lateral bending (P < .05), but not under axial rotation. CDR with the novel prosthesis was effective in maintaining the ROM at the target segment and did not affect the ROM and IDP at the adjacent segments. Moreover, CDR with a mobile-core prosthesis presented slightly higher target-segment and total ROM values, but lower IDP at the inferior adjacent segment, than CDR with a fixed-core prosthesis. PMID:29019902
PSNet: prostate segmentation on MRI based on a convolutional neural network.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei
2018-04-01
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
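The Dice similarity coefficient used for evaluation is 2|A∩B|/(|A|+|B|) for binary masks A and B (1.0 for perfect overlap, 0.0 for none). A minimal sketch on flattened toy masks:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks
    given as flat lists of 0/1 values of equal length."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

# Toy masks: one overlapping voxel out of two in each mask.
d = dice([1, 1, 0, 0], [1, 0, 0, 1])
```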
A novel content-based active contour model for brain tumor segmentation.
Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal
2012-06-01
Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also exhibit a pre-convergence problem in the case of false edges/saddle points. Moreover, the presence of weak and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. The proposed content-based active contour (CBAC) method therefore uses both the intensity and texture information present within the active contour to overcome the above-stated problems and capture a large intensity range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients - more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors, and on synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, complex-content heterogeneous and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted) and synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices, and the procedure is termed 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
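A gray-level co-occurrence matrix counts how often pairs of gray levels co-occur at a fixed displacement; texture statistics such as contrast are then derived from it. A toy sketch (two gray levels, one horizontal displacement; this illustrates the GLCM itself, not the paper's texture-space construction):

```python
def glcm(img, dr, dc, levels):
    """Gray-level co-occurrence counts for displacement (dr, dc)
    over a 2D image of integer gray levels in [0, levels)."""
    P = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r][c]][img[r2][c2]] += 1
    return P

def contrast(P):
    """GLCM contrast: sum of (i - j)^2 weighted by normalized counts."""
    total = sum(map(sum, P))
    k = len(P)
    return sum((i - j) ** 2 * P[i][j] for i in range(k) for j in range(k)) / total

# Toy 2x3 image with two gray levels; horizontal neighbour pairs.
img = [[0, 0, 1],
       [0, 1, 1]]
P = glcm(img, 0, 1, levels=2)
c = contrast(P)
```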
Jun, Min-Ho; Kim, Soochan; Ku, Boncho; Cho, JungHee; Kim, Kahye; Yoo, Ho-Ryong; Kim, Jaeuk U
2018-01-12
We investigated segmental phase angles (PAs) in the four limbs using a multi-frequency bioimpedance analysis (MF-BIA) technique for noninvasively diagnosing diabetes mellitus. We conducted a meal tolerance test (MTT) for 45 diabetic and 45 control subjects stratified by age, sex and body mass index (BMI). HbA1c and the waist-to-hip-circumference ratio (WHR) were measured before meal intake, and we measured the glucose levels and MF-BIA PAs 5 times for 2 hours after meal intake. We employed a t-test to examine the statistical significance and the area under the curve (AUC) of the receiver operating characteristic (ROC) to test the classification accuracy using segmental PAs at 5, 50, and 250 kHz. Segmental PAs were independent of the HbA1c or glucose levels, or their changes caused by the MTT. However, the segmental PAs were good indicators for noninvasively screening diabetes. In particular, leg PAs in females and arm PAs in males showed the best classification accuracy (AUC = 0.827 for males, AUC = 0.845 for females). Lastly, we introduced the PA at maximum reactance (PAmax), which is independent of measurement frequency and can be obtained from any MF-BIA device using a Cole-Cole model, thus showing potential as a useful biomarker for diabetes.
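The bioimpedance phase angle is arctan(Xc/R), the phase shift between voltage and current, and PAmax simply evaluates it at the frequency where the reactance peaks. A sketch with invented resistance/reactance values (not the study's measurements):

```python
import math

def phase_angle(resistance, reactance):
    """Bioimpedance phase angle in degrees: arctan(Xc / R)."""
    return math.degrees(math.atan2(reactance, resistance))

def pa_max(freqs_khz, R, Xc):
    """Phase angle at the frequency of maximum reactance (PAmax)."""
    i = max(range(len(freqs_khz)), key=lambda k: Xc[k])
    return phase_angle(R[i], Xc[i])

# Invented per-frequency impedances for one limb segment (ohms).
pa_50khz = phase_angle(500.0, 50.0)
pamax = pa_max([5, 50, 250], R=[520.0, 500.0, 480.0], Xc=[30.0, 55.0, 40.0])
```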
A damped oscillator imposes temporal order on posterior gap gene expression in Drosophila.
Verd, Berta; Clark, Erik; Wotton, Karl R; Janssens, Hilde; Jiménez-Guri, Eva; Crombach, Anton; Jaeger, Johannes
2018-02-01
Insects determine their body segments in two different ways. Short-germband insects, such as the flour beetle Tribolium castaneum, use a molecular clock to establish segments sequentially. In contrast, long-germband insects, such as the vinegar fly Drosophila melanogaster, determine all segments simultaneously through a hierarchical cascade of gene regulation. Gap genes constitute the first layer of the Drosophila segmentation gene hierarchy, downstream of maternal gradients such as that of Caudal (Cad). We use data-driven mathematical modelling and phase space analysis to show that shifting gap domains in the posterior half of the Drosophila embryo are an emergent property of a robust damped oscillator mechanism, suggesting that the regulatory dynamics underlying long- and short-germband segmentation are much more similar than previously thought. In Tribolium, Cad has been proposed to modulate the frequency of the segmentation oscillator. Surprisingly, our simulations and experiments show that the shift rate of posterior gap domains is independent of maternal Cad levels in Drosophila. Our results suggest a novel evolutionary scenario for the short- to long-germband transition and help explain why this transition occurred convergently multiple times during the radiation of the holometabolan insects.
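For intuition, a damped harmonic oscillator x'' + 2ζωx' + ω²x = 0 produces transient oscillations that decay toward a fixed point, which is the qualitative behaviour the phase-space analysis attributes to posterior gap-domain dynamics. A generic numerical sketch with invented parameters (semi-implicit Euler; this is not the paper's gene-circuit model):

```python
def simulate(w=2.0, zeta=0.2, x0=1.0, v0=0.0, dt=0.001, steps=5000):
    """Integrate x'' + 2*zeta*w*x' + w^2*x = 0 with semi-implicit Euler;
    zeta < 1 gives underdamped (oscillatory, decaying) trajectories."""
    x, v, xs = x0, v0, []
    for _ in range(steps):
        a = -2.0 * zeta * w * v - w * w * x
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

xs = simulate()
```

The trajectory crosses zero repeatedly (ordered, oscillatory phase) while its envelope shrinks, so later excursions are much smaller than early ones.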
A JOINT FRAMEWORK FOR 4D SEGMENTATION AND ESTIMATION OF SMOOTH TEMPORAL APPEARANCE CHANGES.
Gao, Yang; Prastawa, Marcel; Styner, Martin; Piven, Joseph; Gerig, Guido
2014-04-01
Medical imaging studies increasingly use longitudinal images of individual subjects in order to follow up changes due to development, degeneration, disease progression or efficacy of therapeutic intervention. Repeated image data of individuals are highly correlated, and the strong causality of information over time has led to the development of procedures for joint segmentation of the series of scans, called 4D segmentation. A main aim was improved consistency of quantitative analysis, most often achieved via patient-specific atlases. Challenging open problems are contrast changes and the occurrence of subclasses within tissue, as observed in multimodal MRI of infant development, neurodegeneration and disease. This paper proposes a new 4D segmentation framework that enforces continuous dynamic changes of tissue contrast patterns over time as observed in such data. Moreover, our model includes the capability to segment different contrast patterns within a specific tissue class, for example as seen in myelinated and unmyelinated white matter regions in early brain development. Proof of concept is shown with validation on synthetic image data and with 4D segmentation of longitudinal, multimodal pediatric MRI taken at 6, 12 and 24 months of age, but the methodology is generic w.r.t. different application domains using serial imaging.
USDA-ARS's Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
MR PROSTATE SEGMENTATION VIA DISTRIBUTED DISCRIMINATIVE DICTIONARY (DDD) LEARNING.
Guo, Yanrong; Zhan, Yiqiang; Gao, Yaozong; Jiang, Jianguo; Shen, Dinggang
2013-01-01
Segmenting the prostate from MR images is important yet challenging. Due to the non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) has limited performance. Although the newly developed sparse dictionary learning method [1, 2] can model the image appearance in a non-parametric fashion, the learned dictionaries still lack the discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate a deformable model with a novel learning scheme, namely Distributed Discriminative Dictionary (DDD) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue discriminative power of DDD. First, minimum Redundancy Maximum Relevance (mRMR) feature selection is performed to constrain the dictionary learning to a discriminative feature space. Second, linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third, instead of learning global dictionaries, we learn a set of local dictionaries for the local regions (each with small appearance variations) along the prostate boundary, thus achieving better tissue differentiation locally. In the application stage, the DDDs provide the appearance cues to robustly drive the deformable model onto the prostate boundary. Experiments on 50 MR prostate images show that our method yields a Dice ratio of 88% compared to the manual segmentations, a 7% improvement over the conventional AAM.
Time-independent Anisotropic Plastic Behavior by Mechanical Subelement Models
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1983-01-01
The paper describes a procedure for modelling the anisotropic elastic-plastic behavior of metals in the plane stress state by the mechanical sub-layer model. In this model the stress-strain curves along the longitudinal and transverse directions are represented by short smooth segments which are considered piecewise linear for simplicity. The model is incorporated in a finite element analysis program which is based on the assumed-stress hybrid element and the viscoplasticity theory.
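The piecewise-linear representation of a stress-strain curve amounts to linear interpolation between tabulated (strain, stress) points. A generic sketch with invented curve points; note that the actual mechanical sub-layer model goes further and decomposes such a curve into parallel elastic-perfectly-plastic layers:

```python
def stress_lookup(strain_pts, stress_pts, strain):
    """Evaluate a piecewise-linear stress-strain curve at a given strain.
    strain_pts must be sorted ascending; values are clamped at the ends."""
    if strain <= strain_pts[0]:
        return stress_pts[0]
    for i in range(1, len(strain_pts)):
        if strain <= strain_pts[i]:
            f = (strain - strain_pts[i - 1]) / (strain_pts[i] - strain_pts[i - 1])
            return stress_pts[i - 1] + f * (stress_pts[i] - stress_pts[i - 1])
    return stress_pts[-1]

# Invented curve: elastic up to 0.2% strain at 400 MPa, then mild hardening.
strains = [0.0, 0.002, 0.010]
stresses = [0.0, 400.0, 450.0]
s_elastic = stress_lookup(strains, stresses, 0.001)
s_plastic = stress_lookup(strains, stresses, 0.006)
```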
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept, which served as the baseline for the development of this program, are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equations are presented by stating those which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables is followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo
2017-01-01
We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Martínez, Fabio; Romero, Eduardo; Dréan, Gaël; Simon, Antoine; Haigron, Pascal; De Crevoisier, Renaud; Acosta, Oscar
2014-01-01
Accurate segmentation of the prostate and organs at risk in computed tomography (CT) images is a crucial step for radiotherapy (RT) planning. Manual segmentation, as performed nowadays, is a time-consuming process and prone to errors due to the high intra- and inter-expert variability. This paper introduces a new automatic method for prostate, rectum and bladder segmentation in planning CT using a geometrical shape model under a Bayesian framework. A set of prior organ shapes are first built by applying Principal Component Analysis (PCA) to a population of manually delineated CT images. Then, for a given individual, the most similar shape is obtained by mapping a set of multi-scale edge observations to the space of organs with a customized likelihood function. Finally, the selected shape is locally deformed to adjust the edges of each organ. Experiments were performed with real data from a population of 116 patients treated for prostate cancer. The data set was split into training and test groups, with 30 and 86 patients, respectively. Results show that the method produces competitive segmentations w.r.t. standard methods (averaged Dice = 0.91 for prostate, 0.94 for bladder, 0.89 for rectum) and outperforms majority-vote multi-atlas approaches (using rigid registration, free-form deformation (FFD) and the demons algorithm). PMID:24594798
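Building prior organ shapes with PCA means computing a mean shape and principal modes of variation from stacked landmark vectors. A minimal sketch that recovers the mean and first mode via power iteration on toy 2D "shapes" (invented data; the paper's shapes are 3D organ surfaces and its pipeline is far richer):

```python
def pca_first_mode(shapes, iters=200):
    """Mean shape and first principal mode (unit vector) via power
    iteration on the sample covariance of stacked landmark vectors."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[j] for s in shapes) / n for j in range(d)]
    X = [[s[j] - mean[j] for j in range(d)] for s in shapes]
    v = [1.0] * d
    for _ in range(iters):
        # apply C = X'X / n to v without forming C explicitly
        proj = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(proj[i] * X[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

# Toy shapes varying only along the first coordinate.
shapes = [[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]]
mean, mode = pca_first_mode(shapes)
```

New plausible shapes would then be generated as mean + b * mode for a bounded scalar b, which is exactly how a PCA shape prior constrains the segmentation.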
Cunningham, Charles E; Rimas, Heather; Chen, Yvonne; Deal, Ken; McGrath, Patrick; Lingley-Pottie, Patricia; Reid, Graham J; Lipman, Ellen; Corkum, Penny
2015-01-01
Using a discrete choice conjoint experiment, we explored the design of parenting programs as an interim strategy for families waiting for children's mental health treatment. Latent class analysis yielded 4 segments with different design preferences. Simulations predicted the Fast-Paced Personal Contact segment, 22.1% of the sample, would prefer weekly therapist-led parenting groups. The Moderate-Paced Personal Contact segment (24.7%) preferred twice-monthly therapist-led parenting groups with twice-monthly lessons. The Moderate-Paced E-Contact segment (36.3%) preferred weekly to twice-monthly contacts, e-mail networking, and a program combining therapist-led sessions with the support of a computerized telephone e-coach. The Slow-Paced E-Contact segment (16.9%) preferred an approach combining monthly therapist-led sessions, e-coaching, and e-mail networking with other parents. Simulations predicted 45.3% of parents would utilize an option combining 5 therapist coaching calls with 5 e-coaching calls, a model that could reduce costs and extend the availability of interim services. Although 41.0% preferred weekly pacing, 58% were predicted to choose an interim parenting service conducted at a twice-monthly to monthly pace. The results of this study suggest that developing interim services reflecting parental preferences requires a choice of formats that includes parenting groups, telephone-coached distance programs, and e-coaching options conducted at a flexible pace.
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. 
I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
A geometric level set model for ultrasounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarti, A.; Malladi, R.
We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.
Segmentation of vessels cluttered with cells using a physics based model.
Schmugge, Stephen J; Keller, Steve; Nguyen, Nhat; Souvenir, Richard; Huynh, Toan; Clemens, Mark; Shin, Min C
2008-01-01
Segmentation of vessels in biomedical images is important as it can provide insight into analysis of vascular morphology and topology, and is required for kinetic analysis of flow velocity and vessel permeability. Intravital microscopy is a powerful tool as it enables in vivo imaging of both vasculature and circulating cells. However, the analysis of vasculature in those images is difficult due to the presence of cells and their image gradient. In this paper, we provide a novel method of segmenting vessels with a high level of cell-related clutter. A set of virtual point pairs ("vessel probes") are moved reacting to forces including Vessel Vector Flow (VVF) and Vessel Boundary Vector Flow (VBVF) forces. Incorporating the cell detection, the VVF force attracts the probes toward the vessel, while the VBVF force attracts the virtual points of the probes to localize the vessel boundary without being distracted by the image features of the cells. The vessel probes are moved according to Newtonian physics, reacting to the net of forces applied on them. We demonstrate the results on a set of five real in vivo images of liver vasculature cluttered by white blood cells. When compared against ground truth prepared by a technician, the Root Mean Squared Error (RMSE) of segmentation with VVF and VBVF was 55% lower than that of the method without VVF and VBVF.
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to multiple low-contrast cortical and subcortical structures that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple structures at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model into a subject image, followed by a label fusion process. Our results show that the proposed method achieved greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
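The label-fusion step common to multi-atlas pipelines like the one above can be sketched as a per-voxel majority vote. This is a generic sketch of the standard technique, not the authors' specific fusion method:

```python
from collections import Counter

def majority_vote_fusion(label_maps):
    """Fuse per-atlas label maps by per-voxel majority vote.

    label_maps: list of equal-length sequences of integer labels,
    one sequence (flattened label image) per registered atlas.
    """
    fused = []
    for votes in zip(*label_maps):
        # most_common(1) gives the label with the most votes at this voxel;
        # ties are broken by first occurrence in the vote sequence.
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

For example, three atlases voting `[1,0,2]`, `[1,1,2]` and `[0,1,2]` fuse to `[1,1,2]`.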
Shape priors for segmentation of the cervix region within uterine cervix images
NASA Astrophysics Data System (ADS)
Lotenberg, Shelly; Gordon, Shiri; Greenspan, Hayit
2008-03-01
The work focuses on a unique medical repository of digital Uterine Cervix images ("Cervigrams") collected by the National Cancer Institute (NCI), National Institute of Health, in longitudinal multi-year studies. NCI together with the National Library of Medicine is developing a unique web-based database of the digitized cervix images to study the evolution of lesions related to cervical cancer. Tools are needed for the automated analysis of the cervigram content to support the cancer research. In recent works, a multi-stage automated system for segmenting and labeling regions of medical and anatomical interest within the cervigrams was developed. The current paper concentrates on incorporating prior-shape information in the cervix region segmentation task. In accordance with the fact that human experts mark the cervix region as circular or elliptical, two shape models (and corresponding methods) are suggested. The shape models are embedded within an active contour framework that relies on image features. Experiments indicate that incorporation of the prior shape information augments previous results.
Segmenting words from natural speech: subsegmental variation in segmental cues.
Rytting, C Anton; Brew, Chris; Fosler-Lussier, Eric
2010-06-01
Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We use this new representation to re-evaluate a key computational model of word segmentation. One finding is that high levels of phonetic variability degrade the model's performance. While robustness to phonetic variability may be intrinsically valuable, this finding needs to be complemented by parallel studies of the actual abilities of children to segment phonetically variable speech.
Deep convolutional neural network for prostate MR segmentation
NASA Astrophysics Data System (ADS)
Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei
2017-03-01
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3% ± 3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
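The Dice similarity coefficient used for evaluation in the abstract above is straightforward to compute. A minimal sketch over flat binary masks (illustrative; not the authors' evaluation code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: equal-length sequences of 0/1 values.
    Returns 2|A∩B| / (|A| + |B|), i.e. 1.0 for identical masks.
    """
    pred_set = set(i for i, v in enumerate(pred) if v)
    truth_set = set(i for i, v in enumerate(truth) if v)
    if not pred_set and not truth_set:
        return 1.0  # both masks empty: define perfect agreement
    return 2 * len(pred_set & truth_set) / (len(pred_set) + len(truth_set))
```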
Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.
Hao, J T; Li, M L; Tang, F L
2008-01-01
Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to get a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterated Conditional Modes (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm obtains more satisfactory segmentation results while requiring less processing time than conventional approaches.
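The ICM update referred to above can be illustrated on a 1D binary labelling problem. The Gaussian data term, Potts smoothness weight and class means below are assumptions for illustration, not the paper's exact observation model:

```python
def icm_segment(signal, mu=(0.0, 1.0), beta=1.0, max_iter=20):
    """Binary segmentation of a 1D signal by Iterated Conditional Modes.

    Each site takes the label minimizing a squared-error data term plus a
    Potts smoothness penalty (beta per disagreeing neighbour). Sites are
    swept repeatedly until no label changes.
    """
    # initialize from the data term alone (nearest class mean)
    labels = [0 if abs(v - mu[0]) < abs(v - mu[1]) else 1 for v in signal]
    for _ in range(max_iter):
        changed = False
        for i, v in enumerate(signal):
            best, best_e = labels[i], float("inf")
            for lab in (0, 1):
                e = (v - mu[lab]) ** 2          # data term
                for j in (i - 1, i + 1):        # Potts prior on neighbours
                    if 0 <= j < len(signal) and labels[j] != lab:
                        e += beta
                if e < best_e:
                    best, best_e = lab, e
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels
```

Because each sweep only takes energy-decreasing moves, ICM converges quickly, which is the acceleration property the abstract builds on.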
Fast Appearance Modeling for Automatic Primary Video Object Segmentation.
Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong
2016-02-01
Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach for foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can be easily trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. The extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.
2006-01-01
The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful in analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.
The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System
NASA Technical Reports Server (NTRS)
Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim
2008-01-01
Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process; image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
Structural analysis of vibroacoustical processes
NASA Technical Reports Server (NTRS)
Gromov, A. P.; Myasnikov, L. L.; Myasnikova, Y. N.; Finagin, B. A.
1973-01-01
The method of automatic identification of acoustical signals by means of segmentation was used to investigate noises and vibrations in machines and mechanisms for cybernetic diagnostics. The structural analysis consists of presenting a noise or vibroacoustical signal as a sequence of segments, determined by time quantization, in which each segment is characterized by specific spectral characteristics. The structural spectrum is plotted as a histogram of the segments, i.e., as the probability density of appearance of a segment as a function of segment type. It is assumed that the conditions of ergodic processes are maintained.
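The structural spectrum described above is essentially an empirical distribution over segment types. A minimal sketch, assuming segments have already been classified into discrete types (the function name is an assumption):

```python
from collections import Counter

def structural_spectrum(segment_types):
    """Empirical probability of appearance of each segment type.

    segment_types: sequence of discrete labels, one per time-quantized
    segment of the signal. Returns {type: relative frequency}.
    """
    counts = Counter(segment_types)
    total = len(segment_types)
    return {seg: n / total for seg, n in counts.items()}
```

Comparing such spectra between a healthy and a faulty machine is one way the histogram could serve diagnostic purposes.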
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is getting more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, but it is also a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on a survey of the approaches for parallel implementation of sequential watershed algorithms on multicore general purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing
2018-06-01
Feature selection plays an important role in the field of EEG signals based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages involved are: lowering the computational burden so as to speed up the learning procedure and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: a broad frequency band filtering and common spatial pattern enhancement as preprocessing, feature extraction by autoregressive model and log-variance, the Kullback-Leibler divergence based optimal feature and time segment selection, and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signals classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segment, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
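A KL-divergence-based feature ranking of the kind described above can be sketched by fitting univariate Gaussians to each feature per class and scoring features by symmetric KL divergence. The Gaussian assumption and function names are illustrative; the paper's exact statistical model may differ:

```python
import math

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence D(N(mu1,var1) || N(mu2,var2)) for univariate Gaussians."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def rank_features(class0, class1):
    """Rank feature indices by symmetric KL divergence between the
    class-conditional Gaussians of each feature (most discriminative first).

    class0, class1: lists of feature rows, one row per trial.
    """
    def moments(col):
        m = sum(col) / len(col)
        v = sum((x - m) ** 2 for x in col) / len(col) or 1e-12  # avoid zero var
        return m, v

    scores = []
    for f in range(len(class0[0])):
        m0, v0 = moments([row[f] for row in class0])
        m1, v1 = moments([row[f] for row in class1])
        # symmetrize, since KL itself is not symmetric
        scores.append(gaussian_kl(m0, v0, m1, v1) + gaussian_kl(m1, v1, m0, v0))
    return sorted(range(len(scores)), key=lambda f: -scores[f])
```

Features whose class-conditional distributions diverge most are kept; the rest are dropped as redundant or irrelevant.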
ERIC Educational Resources Information Center
Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank
2012-01-01
Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…
2014-10-01
histology, and microCT analysis. In the current phase of work he will receive more specialized training and orientation to microCT analysis...fibrous connective tissue. Performed histology on goat autogenous bone graft which demonstrated that the quantity and quality of cancellous bone graft
Wiggin, Timothy D.; Peck, Jack H.; Masino, Mark A.
2014-01-01
The cellular and network basis for most vertebrate locomotor central pattern generators (CPGs) is incompletely characterized, but organizational models based on known CPG architectures have been proposed. Segmental models propose that each spinal segment contains a circuit that controls local coordination and sends longer projections to coordinate activity between segments. Unsegmented/continuous models propose that patterned motor output is driven by gradients of neurons and synapses that do not have segmental boundaries. We tested these ideas in the larval zebrafish, an animal that swims in discrete episodes, each of which is composed of coordinated motor bursts that progress rostrocaudally and alternate from side to side. We perturbed the spinal cord using spinal transections or strychnine application and measured the effect on fictive motor output. Spinal transections eliminated episode structure, and reduced both rostrocaudal and side-to-side coordination. Preparations with fewer intact segments were more severely affected, and preparations consisting of midbody and caudal segments were more severely affected than those consisting of rostral segments. In reduced preparations with the same number of intact spinal segments, side-to-side coordination was more severely disrupted than rostrocaudal coordination. Reducing glycine receptor signaling with strychnine reversibly disrupted both rostrocaudal and side-to-side coordination in spinalized larvae without disrupting episodic structure. Both spinal transection and strychnine decreased the stability of the motor rhythm, but this effect was not causal in reducing coordination. These results are inconsistent with a segmented model of the spinal cord and are better explained by a continuous model in which motor neuron coordination is controlled by segment-spanning microcircuits. PMID:25275377
A new medical image segmentation model based on fractional order differentiation and level set
NASA Astrophysics Data System (ADS)
Chen, Bo; Huang, Shan; Xie, Feifei; Li, Lihong; Chen, Wensheng; Liang, Zhengrong
2018-03-01
Segmenting medical images is still a challenging task for both traditional local and global methods because of image intensity inhomogeneity. In this paper, two contributions are made: (i) on the one hand, a new hybrid model is proposed for medical image segmentation, which is built based on fractional order differentiation, level set description and curve evolution; and (ii) on the other hand, three popular definitions of Fourier-domain, Grünwald-Letnikov (G-L) and Riemann-Liouville (R-L) fractional order differentiation are investigated and compared through experimental results. Because of the merits of enhancing high frequency features of images and preserving low frequency features of images in a nonlinear manner by the fractional order differentiation definitions, one fractional order differentiation definition is used in our hybrid model to perform segmentation of inhomogeneous images. The proposed hybrid model also integrates fractional order differentiation, fractional order gradient magnitude and difference image information. The widely used Dice similarity coefficient metric is employed to evaluate the segmentation results quantitatively. Firstly, experimental results demonstrated that only a slight difference exists among the three expressions of Fourier-domain, G-L, and R-L fractional order differentiation. This outcome supports our selection of one of the three definitions in our hybrid model. Secondly, further experiments were performed for comparison between our hybrid segmentation model and other existing segmentation models. A noticeable gain was seen by our hybrid model in segmenting intensity inhomogeneous images.
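The Grünwald-Letnikov definition mentioned above has a simple discrete form: a weighted sum of past samples with generalized binomial coefficients. A minimal 1D sketch (illustrative; the paper applies 2D versions to images):

```python
def gl_fractional_diff(signal, alpha, h=1.0):
    """Truncated Grünwald-Letnikov fractional derivative of order alpha.

    D^a f(t) ≈ h^(-a) * sum_k (-1)^k * C(alpha, k) * f(t - k*h),
    where C(alpha, k) are generalized binomial coefficients computed
    with the recurrence c_0 = 1, c_k = c_{k-1} * (alpha - k + 1) / k.
    """
    coeffs = [1.0]
    for k in range(1, len(signal)):
        coeffs.append(coeffs[-1] * (alpha - k + 1) / k)
    out = []
    for t in range(len(signal)):
        s = sum((-1) ** k * coeffs[k] * signal[t - k] for k in range(t + 1))
        out.append(s / h ** alpha)
    return out
```

For alpha = 1 this reduces to the backward first difference, and for alpha = 0 it returns the signal unchanged; intermediate orders blend differentiation with a long-memory smoothing of low frequencies, which is the enhancement/preservation trade-off the abstract exploits.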
Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks
2016-08-11
Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks. Junsung Kim, Department of Electrical and Computer Engineering, Carnegie Mellon University. The report covers the application of a multi-segment self-suspending real-time task model and fixed-priority scheduling for such tasks.
A univocal definition of the neuronal soma morphology using Gaussian mixture models.
Luengo-Sanchez, Sergio; Bielza, Concha; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Larrañaga, Pedro
2015-01-01
The definition of the soma is fuzzy, as there is no clear line demarcating the soma of the labeled neurons and the origin of the dendrites and axon. Thus, the morphometric analysis of the neuronal soma is highly subjective. In this paper, we provide a mathematical definition and an automatic segmentation method to delimit the neuronal soma. We applied this method to the characterization of pyramidal cells, which are the most abundant neurons in the cerebral cortex. Since there are no benchmarks with which to compare the proposed procedure, we validated the goodness of this automatic segmentation method against manual segmentation by neuroanatomists to set up a framework for comparison. We concluded that there were no significant differences between automatically and manually segmented somata, i.e., the proposed procedure segments the neurons similarly to how a neuroanatomist does. It also provides univocal, justifiable and objective cutoffs. Thus, this study is a means of characterizing pyramidal neurons in order to objectively compare the morphometry of the somata of these neurons in different cortical areas and species.
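The Gaussian mixture modelling underlying the method above can be illustrated with a minimal 1D two-component EM fit. The paper works on 3D soma morphology; this sketch only shows the EM mechanics, and all names and initializations are assumptions:

```python
import math

def fit_gmm_1d(data, n_iter=50):
    """Fit a two-component 1D Gaussian mixture by EM (illustrative sketch)."""
    mu = [min(data), max(data)]   # crude initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return mu, var, pi
```

In the soma-segmentation setting, the fitted components play the role of "soma" versus "dendrite onset" regions, and the crossover of responsibilities provides an objective cutoff.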
Transferability and robustness of real-time freeway crash risk assessment.
Shew, Cameron; Pande, Anurag; Nuworsoo, Cornelius
2013-09-01
This study examines the data from single loop detectors on northbound (NB) US-101 in San Jose, California to estimate real-time crash risk assessment models. The classification tree and neural network based crash risk assessment models developed with data from NB US-101 are applied to data from the same freeway, as well as to the data from nearby segments of the SB US-101, NB I-880, and SB I-880 corridors. The performance of crash risk assessment models on these nearby segments is the focus of this research. The model applications show that it is in fact possible to use the same model for multiple freeways, as the underlying relationships between traffic data and crash risk remain similar. The framework provided here may be helpful to authorities for freeway segments with newly installed traffic surveillance apparatuses, since the real-time crash risk assessment models from nearby freeways with existing infrastructure would be able to provide a reasonable estimate of crash risk. The robustness of the model output is also assessed by location, time of day, and day of week. The analysis shows that on some locations the models may require further learning due to higher than expected false positive (e.g., the I-680/I-280 interchange on US-101 NB) or false negative rates. The approach for post-processing the results from the model provides ideas to refine the model prior to or during the implementation. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
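The false positive and false negative rates used above to judge transferability are simple to compute from binary predictions. A minimal sketch (the function name is an assumption):

```python
def crash_risk_rates(predictions, actual):
    """False positive and false negative rates for binary crash-risk output.

    predictions, actual: equal-length sequences of 0/1 values, where 1
    means "crash-prone conditions". Returns (fp_rate, fn_rate).
    """
    fp = sum(1 for p, a in zip(predictions, actual) if p and not a)
    fn = sum(1 for p, a in zip(predictions, actual) if not p and a)
    negatives = sum(1 for a in actual if not a)
    positives = sum(1 for a in actual if a)
    return (fp / negatives if negatives else 0.0,
            fn / positives if positives else 0.0)
```

Tracking these two rates separately by location and time of day, as the study does, shows where a transferred model needs further learning.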
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Toda, S.; Ando, R.; Yamashina, T.; Inoue, H.; Sunarjo
2010-04-01
On 2007 March 6, an earthquake doublet occurred along the Sumatran fault, Indonesia. The epicentres were located near Padang Panjang, central Sumatra, Indonesia. The first earthquake, with a moment magnitude (Mw) of 6.4, occurred at 03:49 UTC and was followed two hours later (05:49 UTC) by an earthquake of similar size (Mw = 6.3). We studied the earthquake doublet by a waveform inversion analysis using data from a broadband seismograph network in Indonesia (JISNET). The focal mechanisms of the two earthquakes indicate almost identical right-lateral strike-slip faults, consistent with the geometry of the Sumatran fault. Both earthquakes nucleated below the northern end of Lake Singkarak, which is in a pull-apart basin between the Sumani and Sianok segments of the Sumatran fault system, but the earthquakes ruptured different fault segments. The first earthquake occurred along the southern Sumani segment and its rupture propagated southeastward, whereas the second one ruptured the northern Sianok segment northwestward. Along these fault segments, earthquake doublets, in which the two adjacent fault segments rupture one after the other, have occurred repeatedly. We investigated the state of stress at a segment boundary of a fault system based on the Coulomb stress changes. The stress on faults increases during interseismic periods and is released by faulting. At a segment boundary, on the other hand, the stress increases both interseismically and coseismically, and may not be released unless new fractures are created. Accordingly, ruptures may tend to initiate at a pull-apart basin. When an earthquake occurs on one of the fault segments, the stress increases coseismically around the basin. The stress changes caused by that earthquake may trigger a rupture on the other segment after a short time interval. 
We also examined the mechanism of the delayed rupture based on a theory of a fluid-saturated poroelastic medium and dynamic rupture simulations incorporating a rheological velocity hardening effect. These models of the delayed rupture can qualitatively explain the observations, but further studies, especially based on the rheological effect, are required for quantitative studies.
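The Coulomb stress change reasoning above follows the standard relation dCFS = d_tau + mu' * d_sigma_n. A minimal sketch with an assumed effective friction coefficient (the value 0.4 is a common convention, not a number from this study):

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Coulomb failure stress change: dCFS = d_tau + mu' * d_sigma_n.

    delta_shear: shear stress change in the slip direction (MPa)
    delta_normal: normal stress change, tension positive (MPa)
    mu_eff: effective friction coefficient (0.4 is a common assumption)
    A positive dCFS brings the receiver fault closer to failure.
    """
    return delta_shear + mu_eff * delta_normal
```

A positive change on the adjacent segment after the first shock is the sense in which the first earthquake may have triggered the second.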
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation, Version 3. The BDM Corporation. Final Technical Report, February 1981 - July 1983. Fragment of the method description: the electric field at a segment observation point due to a source patch j is expressed along the t1 and t2 directions on the source patch.
Zhang, Zhijun; Zhu, Meihua; Ashraf, Muhammad; Broberg, Craig S; Sahn, David J; Song, Xubo
2014-12-01
Quantitative analysis of right ventricle (RV) motion is important for study of the mechanism of congenital and acquired diseases. Unlike the left ventricle (LV), motion estimation of the RV is more difficult because of its complex shape and thin myocardium. Although attempts of finite element models on MR images and speckle tracking on echocardiography have shown promising results on RV strain analysis, these methods can be improved since the temporal smoothness of the motion is not considered. The authors have proposed a temporally diffeomorphic motion estimation method in which a spatiotemporal transformation is estimated by optimization of a registration energy functional of the velocity field in their earlier work. The proposed motion estimation method is a fully automatic process for general image sequences. The authors apply the method, combined with a semiautomatic myocardium segmentation method, to the RV strain analysis of three-dimensional (3D) echocardiographic sequences of five open-chest pigs under different steady states. The authors compare the peak two-point strains derived by their method with those estimated from sonomicrometry; the results show that they are highly correlated. The motion of the right ventricular free wall is studied by using segmental strains. The baseline sequence results show that the segmental strains in their methods are consistent with results obtained by other image modalities such as MRI. The image sequences of pacing steady states show that segments with the largest strain variation coincide with the pacing sites. The high correlation of the peak two-point strains of their method and sonomicrometry under different steady states demonstrates that their RV motion estimation has high accuracy. The closeness of the segmental strain of their method to those from MRI shows the feasibility of their method in the study of RV function by using 3D echocardiography. 
The strain analysis of the pacing steady states shows the potential utility of their method in study on RV diseases.
Interactive 3D segmentation using connected orthogonal contours.
de Bruin, P W; Dercksen, V J; Post, F H; Vossepoel, A M; Streekstra, G J; Vos, F M
2005-05-01
This paper describes a new method for interactive segmentation that is based on cross-sectional design and 3D modelling. The method represents a 3D model by a set of connected contours that are planar and orthogonal. Planar contours overlaid on image data are easily manipulated, and linked contours reduce the amount of user interaction. This method solves the contour-to-contour correspondence problem and can capture extrema of objects in a more flexible way than manual segmentation of a stack of 2D images. The resulting 3D model is guaranteed to be free of geometric and topological errors. We show that manual segmentation using connected orthogonal contours has great advantages over conventional manual segmentation. Furthermore, the method provides effective feedback and control for creating an initial model for, and control and steering of, (semi-)automatic segmentation methods.
Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien
2010-06-01
Quantitative measurements of hand bones, including volume, surface, orientation, and position, are essential for investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because it directly influences measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) images of them are prone to artifacts such as nonuniform intensity and fuzzy boundaries, so greater care is required to achieve segmentation accuracy. The authors therefore propose a novel registration-based method built on an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage constructs a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first perform a model-based registration guided by intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model with the target bone regions of the given postural image. The authors then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surfaces had an average error within 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by Dice similarity coefficient and also demonstrated better segmentation results than conventional methods.
The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and obtain more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the bone segmentation results. The proposed method is found helpful for understanding hand bone geometries in dynamic postures that can be used in simulating 3D hand motion through multipostural MR images.
New geometric design consistency model based on operating speed profiles for road safety evaluation.
Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo
2013-12-01
To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. The methodology is based on the analysis of road geometric design consistency, a value that serves as a surrogate measure of the safety level of a two-lane rural road segment. The consistency model presented in this paper is based on continuous operating speed profiles. The models used for their construction were obtained with an innovative GPS-based data collection method that records continuous operating speed profiles from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a closer approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also indexes that consider both local speed decelerations and speeds above posted limits. For the development of the consistency model, the crash frequency for each study site was considered, which allows the number of crashes on a road segment to be estimated from its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Novel Gradient Vector Flow Snake Model Based on Convex Function for Infrared Image Segmentation
Zhang, Rui; Zhu, Shiping; Zhou, Qin
2016-01-01
Infrared image segmentation is a challenging topic because infrared images are characterized by high noise, low contrast, and weak edges. Active contour models, especially gradient vector flow (GVF), have several advantages for infrared image segmentation. However, the GVF model also has some drawbacks, including a dilemma between noise smoothing and weak edge protection, which significantly degrades infrared image segmentation. To solve this problem, we propose a novel generalized gradient vector flow snake model combining the GGVF (Generic Gradient Vector Flow) and NBGVF (Normally Biased Gradient Vector Flow) models. We also adopt a new coefficient setting in the form of a convex function to improve the ability to protect weak edges while smoothing noise. Experimental results and comparisons against other methods indicate that our proposed snake model segments infrared images better than other snake models. PMID:27775660
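The GVF field mentioned in the abstract above is computed by diffusing the gradient of an edge map. A minimal sketch of the classic Xu-Prince GVF iteration follows, not the paper's GGVF/NBGVF hybrid; the parameter values and periodic boundary handling are illustrative assumptions:

```python
import numpy as np

def gradient_vector_flow(f, mu=0.2, iters=100):
    """Classic GVF (Xu & Prince): diffuse the edge-map gradient.

    f: 2D edge map (e.g. gradient magnitude of the image).
    Returns (u, v), the two components of the GVF field.
    """
    fy, fx = np.gradient(f.astype(float))
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2                 # squared gradient magnitude weights the data term
    for _ in range(iters):
        # 5-point Laplacian via roll (periodic boundaries for simplicity)
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += mu * lap_u - mag2 * (u - fx)    # smoothness vs. fidelity to the edge gradient
        v += mu * lap_v - mag2 * (v - fy)
    return u, v
```

Generalized variants such as GGVF and NBGVF replace the constant smoothing weight `mu` with spatially varying coefficients; the paper's convex-function coefficient setting is one way of choosing those weights.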
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
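The volumetric overlap figures quoted above are overlap ratios between binary masks. As an illustrative stand-in (the study's exact overlap definition is not reproduced here), the widely used Dice similarity coefficient can be computed as:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

Identical masks score 1.0 and disjoint masks score 0.0, so the 90-98% overlaps reported above correspond to near-perfect agreement with the STAPLE ground-truth estimate.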
Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago
2018-03-02
Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen-induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat-mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, so a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, without user intervention, within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline against the consensus of several manually curated segmentations produced with commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License ( https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva ).
Thermo-Mechanical Analysis of a Single-Pass Weld Overlay and Girth Welding in Lined Pipe
NASA Astrophysics Data System (ADS)
Obeid, Obeid; Alfano, Giulio; Bahai, Hamid
2017-08-01
The paper presents nonlinear heat-transfer and mechanical finite-element (FE) analyses of a two-pass welding process for two segments of lined pipe made of a SUS304 stainless steel liner and a C-Mn steel pipe. The two passes consist of the single-pass overlay welding (inner lap weld) of the liner with the C-Mn steel pipe for each segment and the single-pass girth welding (outer butt weld) of the two segments. A distributed power density for the moving welding torch and a nonlinear heat-transfer coefficient accounting for both radiation and convection have been used in the analysis and implemented in user subroutines for the FE code ABAQUS. The modeling procedure has been validated against previously published experimental results for stainless steel and carbon steel welding separately. The model has then been used to determine the isotherms induced by the weld overlay and the girth welding and to clarify their influence on the transient temperature field and residual stress in the lined pipe. Furthermore, the influence of the cooling time between weld overlay and girth welding and of the welding speed has been examined thermally and mechanically, as these are key factors that can affect the quality of lined pipe welding.
Mining of Business-Oriented Conversations at a Call Center
NASA Astrophysics Data System (ADS)
Takeuchi, Hironori; Nasukawa, Tetsuya; Watanabe, Hideo
Recently it has become feasible to transcribe textual records from telephone conversations at call centers by using automatic speech recognition. In this research, we extended a text mining system for call summary records and constructed a conversation mining system for business-oriented conversations at a call center. To acquire useful business insights from conversational data through a text mining system, it is critical to identify appropriate textual segments and expressions as the viewpoints to focus on. In the analysis of call summary data with a text mining system, experts defined the viewpoints for the analysis by looking at sample records and by preparing dictionaries based on frequent keywords in the sample dataset. With conversations, however, it is difficult to identify such viewpoints manually and in advance, because the target data consists of complete transcripts that are often lengthy and redundant. In this research, we defined a model of business-oriented conversations and proposed a mining method that identifies segments with an impact on the outcomes of the conversations and then extracts useful expressions from each identified segment. In the experiment, we processed real datasets from a car rental service center and constructed a mining system. With this system, we show the effectiveness of the method based on the defined conversation model.
Liver CT image processing: a short introduction of the technical elements.
Masutani, Y; Uozumi, K; Akahane, Masaaki; Ohtomo, Kuni
2006-05-01
In this paper, we describe the technical aspects of image analysis for liver diagnosis and treatment, including the state of the art of liver image analysis and its applications. After a discussion of modalities for liver image analysis, various technical elements such as registration, segmentation, modeling, and computer-assisted detection are covered with examples based on clinical data sets. Perspectives on the imaging technologies are also reviewed and discussed.
Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Li, Jianyi; Huang, Wenhua
2016-01-01
Hepatic segment anatomy is difficult for medical students to learn. Three-dimensional visualization (3DV) is a useful tool in anatomy teaching, but current models do not capture haptic qualities. However, three-dimensional printing (3DP) can produce highly accurate complex physical models. Therefore, in this study we aimed to develop a novel 3DP hepatic segment model and compare the teaching effectiveness of a 3DV model, a 3DP model, and a traditional anatomical atlas. A healthy candidate (female, 50 years old) was recruited and scanned with computed tomography. After three-dimensional (3D) reconstruction, the computed 3D images of the hepatic structures were obtained. The parenchyma model was divided into 8 hepatic segments to produce the 3DV hepatic segment model. The computed 3DP model was designed by removing the surrounding parenchyma and leaving the segmental partitions. Then, 6 experts evaluated the 3DV and 3DP models using a 5-point Likert scale. A randomized controlled trial was conducted to evaluate the educational effectiveness of these models compared with that of the traditional anatomical atlas. The 3DP model successfully displayed the hepatic segment structures with partitions. All experts agreed or strongly agreed that the 3D models provided good realism for anatomical instruction, with no significant differences between the 3DV and 3DP models in each index (p > 0.05). Additionally, the teaching results show that the 3DV and 3DP models were significantly better than the traditional anatomical atlas in the first and second examinations (p < 0.05). Between the first and second examinations, only the traditional method group had significant declines (p < 0.05). A novel 3DP hepatic segment model was successfully developed. Both the 3DV and 3DP models could improve anatomy teaching significantly. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Zhong, Chunyan; Guo, Yanli; Huang, Haiyun; Tan, Liwen; Wu, Yi; Wang, Wenting
2013-01-01
To establish 3D models of coronary arteries (CA) and study their application in localization of CA segments identified by Transthoracic Echocardiography (TTE). Sectional images of the heart collected from the first CVH dataset and contrast CT data were used to establish 3D models of the CA. Virtual dissection was performed on the 3D models to simulate the conventional sections of TTE. Then, we used 2D ultrasound, speckle tracking imaging (STI), and 2D ultrasound plus 3D CA models to diagnose 170 patients and compare the results to coronary angiography (CAG). 3D models of CA distinctly displayed both 3D structure and 2D sections of CA. This simulated TTE imaging in any plane and showed the CA segments that corresponded to 17 myocardial segments identified by TTE. The localization accuracy showed a significant difference between 2D ultrasound and 2D ultrasound plus 3D CA model in the severe stenosis group (P < 0.05) and in the mild-to-moderate stenosis group (P < 0.05). These innovative modeling techniques help clinicians identify the CA segments that correspond to myocardial segments typically shown in TTE sectional images, thereby increasing the accuracy of the TTE-based diagnosis of CHD.
Shah, Rahman; Berzingi, Chalak; Mumtaz, Mubashir; Jasper, John B; Goswami, Rohan; Morsy, Mohamed S; Ramanathan, Kodangudi B; Rao, Sunil V
2016-11-15
Several recent randomized controlled trials (RCTs) demonstrated better outcomes with multivessel complete revascularization (CR) than with infarct-related artery-only revascularization (IRA-OR) in patients with ST-segment elevation myocardial infarction. It is unclear whether CR should be performed during the index procedure (IP) at the time of primary percutaneous coronary intervention (PCI) or as a staged procedure (SP). Therefore, we performed a pairwise meta-analysis using a random-effects model and network meta-analysis using mixed-treatment comparison models to compare the efficacies of 3 revascularization strategies (IRA-OR, CR-IP, and CR-SP). Scientific databases and websites were searched to find RCTs. Data from 9 RCTs involving 2,176 patients were included. In mixed-comparison models, CR-IP decreased the risk of major adverse cardiac events (MACEs; odds ratio [OR] 0.36, 95% CI 0.25 to 0.54), recurrent myocardial infarction (MI; OR 0.50, 95% CI 0.24 to 0.91), revascularization (OR 0.24, 95% CI 0.15 to 0.38), and cardiovascular (CV) mortality (OR 0.44, 95% CI 0.20 to 0.87). However, only the rates of MACEs, MI, and CV mortality were lower with CR-SP than with IRA-OR. Similarly, in direct-comparison meta-analysis, the risk of MI was 66% lower with CR-IP than with IRA-OR, but this advantage was not seen with CR-SP. There were no differences in all-cause mortality between the 3 revascularization strategies. In conclusion, this meta-analysis shows that in patients with ST-segment elevation myocardial infarction and multivessel coronary artery disease, CR either during primary PCI or as an SP results in lower occurrences of MACE, revascularization, and CV mortality than IRA-OR. CR performed during primary PCI also results in lower rates of recurrent MI and seems the most efficacious revascularization strategy of the 3. Published by Elsevier Inc.
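The random-effects pooling used in pairwise meta-analyses such as the one above is commonly the DerSimonian-Laird estimator. A self-contained sketch follows; the 2x2 tables in the usage example are hypothetical and are not data from the cited trials:

```python
import math

def pooled_or_random_effects(tables):
    """DerSimonian-Laird random-effects pooling of odds ratios.

    tables: list of 2x2 tables (a, b, c, d) =
            (treatment events, treatment non-events, control events, control non-events).
    Returns (pooled OR, 95% CI lower, 95% CI upper).
    """
    logs, ws = [], []
    for a, b, c, d in tables:
        lor = math.log((a * d) / (b * c))          # log odds ratio
        var = 1/a + 1/b + 1/c + 1/d                # Woolf variance of the log-OR
        logs.append(lor)
        ws.append(1 / var)
    # fixed-effect pooled estimate and Cochran's Q heterogeneity statistic
    fixed = sum(w * l for w, l in zip(ws, logs)) / sum(ws)
    q = sum(w * (l - fixed) ** 2 for w, l in zip(ws, logs))
    k = len(tables)
    c_ = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (k - 1)) / c_)            # between-study variance estimate
    ws_re = [1 / (1 / w + tau2) for w in ws]       # random-effects weights
    mu = sum(w * l for w, l in zip(ws_re, logs)) / sum(ws_re)
    se = math.sqrt(1 / sum(ws_re))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)
```

With identical studies the heterogeneity estimate is zero and the pooled OR equals each study's OR; network (mixed-treatment) models extend this idea to compare strategies that were never tested head to head.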
Uterus segmentation in dynamic MRI using LBP texture descriptors
NASA Astrophysics Data System (ADS)
Namias, R.; Bellemare, M.-E.; Rahim, M.; Pirró, N.
2014-03-01
Pelvic floor disorders cover pathologies whose physiopathology is not well understood, yet cases are becoming prevalent with an ageing population. Within the context of a project aiming at modeling the dynamics of the pelvic organs, we have developed an efficient segmentation process that relieves the radiologist of tedious image-by-image analysis. From a first contour delineating the uterus-vagina set, the organ border is tracked along a dynamic MRI sequence. The process combines movement prediction, local intensity and texture analysis, and active contour geometry control. Movement prediction provides a contour initialization for the next image in the sequence. Intensity analysis provides image-based local contour detection enhanced by local binary pattern (LBP) texture descriptors. Geometry control prevents self-intersections and smooths the contour. Results show the efficiency of the method on images produced in clinical routine.
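The LBP texture descriptors mentioned above assign each pixel a code built from comparisons with its neighbours. A minimal sketch of the basic 8-neighbour operator follows (no circular interpolation or uniform-pattern mapping, which fuller LBP implementations usually add):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern codes (no interpolation).

    Each interior pixel gets an 8-bit code: bit k is set when the k-th
    neighbour is >= the centre pixel. Returns an (H-2, W-2) uint8 array.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    # neighbours clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

Histograms of these codes over a local window give the texture features that can supplement raw intensity in contour detection.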
Cha, Jungwon; Farhangi, Mohammad Mehdi; Dunlap, Neal; Amini, Amir A
2018-01-01
We have developed a robust tool for performing volumetric and temporal analysis of nodules from respiratory-gated four-dimensional (4D) CT. The method could prove useful in IMRT of lung cancer. We modified the conventional graph-cuts method by adding an adaptive shape prior as well as motion information within a signed distance function representation to permit more accurate and automated segmentation and tracking of lung nodules in 4D CT data. Active shape models (ASM) with a signed distance function were used to capture the shape prior information, preventing unwanted surrounding tissues from becoming part of the segmented object. The optical flow method was used to estimate the local motion and to extend three-dimensional (3D) segmentation to 4D by warping a prior shape model through time. The algorithm has been applied to segmentation of well-circumscribed, vascularized, and juxtapleural lung nodules from respiratory-gated CT data. In all cases, 4D segmentation and tracking for five phases of high-resolution CT data took approximately 10 min on a PC workstation with an AMD Phenom II and 32 GB of memory. The method was trained on 500 breath-held 3D CT datasets from the LIDC database and was tested on 17 4D lung nodule CT datasets consisting of 85 volumetric frames. The validation tests resulted in an average Dice Similarity Coefficient (DSC) of 0.68 for all test data. An important by-product of the method is quantitative volume measurement from 4D CT from end-inspiration to end-expiration, which will also have important diagnostic value. The algorithm performs robust segmentation of lung nodules from 4D CT data. The signed distance ASM provides the shape prior information, which, within the iterative graph-cuts framework, is adaptively refined to best fit the input data, preventing unwanted surrounding tissue from merging with the segmented object. © 2017 American Association of Physicists in Medicine.
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. Average F1 scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness.
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
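Tolerance-window scoring of segmentation boundaries, as in the evaluation above, can be implemented by greedily matching each detected boundary to the nearest unmatched reference annotation. A minimal sketch, with illustrative function and variable names (not the Challenge's official scoring code):

```python
def match_with_tolerance(detected, reference, tol=0.1):
    """Score detected segment boundaries (in seconds) against reference
    annotations: a detection is a true positive if it falls within +/- tol
    of a not-yet-matched reference mark.

    Returns (sensitivity, positive predictive value, F1)."""
    ref = sorted(reference)
    used = [False] * len(ref)
    tp = 0
    for t in sorted(detected):
        for i, r in enumerate(ref):
            if not used[i] and abs(t - r) <= tol:
                used[i] = True
                tp += 1
                break
    fp = len(detected) - tp      # detections with no matching annotation
    fn = len(ref) - tp           # annotations never matched
    se = tp / (tp + fn) if ref else 1.0
    ppv = tp / (tp + fp) if detected else 1.0
    f1 = 2 * se * ppv / (se + ppv) if (se + ppv) else 0.0
    return se, ppv, f1
```

Widening `tol` can only turn false positives/negatives into true positives, which is why the reported F1 score increases with the tolerance window size.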
Segmentation and intensity estimation of microarray images using a gamma-t mixture model.
Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J
2007-02-15
We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. 
The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity estimation simultaneously, not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape and artifacts. We apply our method for gridding, segmentation and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation of spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and the segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.
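The EM fitting described above alternates a posterior (E) step and a parameter (M) step. As a simplified stand-in for the paper's bivariate gamma-t mixture, the same skeleton is shown here for a two-component 1D Gaussian mixture (background vs. foreground intensity); in the actual model the gamma and t component densities would replace the Gaussian ones:

```python
import numpy as np

def em_two_component(x, iters=50):
    """EM for a two-component 1D Gaussian mixture -- a simplified stand-in
    for a background/foreground intensity mixture model."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25, 75])          # crude initial component means
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each component per observation
        dens = np.stack([pi[k] / (sd[k] * np.sqrt(2 * np.pi)) *
                         np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(0)
        # M-step: responsibility-weighted parameter updates
        n = resp.sum(1)
        pi = n / n.sum()
        mu = (resp * x).sum(1) / n
        sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(1) / n)
    return pi, mu, sd, resp
```

The final `resp` array plays the role of the posterior membership probabilities that the paper smooths with a kernel before assigning pixels to background or foreground.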
Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu
2017-11-01
Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.
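The Dice similarity coefficients quoted above measure voxel overlap between an automated and a reference segmentation; a minimal sketch of the metric itself (toy 1-D masks, not the authors' pipeline or data):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat sequences of 0/1)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

# Toy 1-D "masks", 6 voxels each: 2 voxels agree, masks have 3 and 2 voxels set
auto   = [1, 1, 1, 0, 0, 0]
manual = [1, 1, 0, 0, 0, 0]
print(dice(auto, manual))  # 2*2 / (3+2) = 0.8
```

The same formula applies unchanged to flattened 3-D volumes, which is how per-structure scores such as those above are typically computed.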
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
To generate a finite element model of human thorax using the VCH dataset
NASA Astrophysics Data System (ADS)
Shi, Hui; Liu, Qian
2009-10-01
Purpose: To generate a three-dimensional (3D) finite element (FE) model of the human thorax which may provide the basis of biomechanics simulation for studying the design effect and mechanism of safety belts during vehicle collisions. Methods: Using manual or semi-manual segmentation, the region of interest is segmented from the VCH (Visible Chinese Human) dataset. The 3D surface model of the thorax is visualized using VTK (Visualization Toolkit) and further translated into STL (Stereo Lithography) format, which approximates the geometry of the solid model by representing the boundaries with triangular facets. The data in STL format are then normalized into NURBS surfaces and IGES format using software such as Geomagic Studio to provide an archetype for reverse engineering. The 3D FE model was established using Ansys software. Results: The generated 3D FE model was an integrated thorax model that reproduced the complicated structural morphology of the human thorax, including clavicle, ribs, spine and sternum. It consisted of 1,044,179 elements in total. Conclusions: Compared with previous thorax models, this FE model clearly enhances the authenticity and precision of the resulting analysis, providing a sound basis for biomechanical research on the human thorax. Furthermore, using the method above, 3D FE models of other organs and tissues can also be established from the VCH dataset.
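A triangulated STL-style surface like the one described also allows a quick sanity check of the enclosed volume via the divergence theorem; a sketch under the assumption of consistently outward-wound facets (the unit tetrahedron below is an illustrative toy, not the VCH thorax mesh):

```python
def mesh_volume(facets):
    """Signed volume enclosed by a closed triangulated surface (STL-style facets).

    Sums the signed tetrahedron volumes v0 . (v1 x v2) / 6 over all facets;
    each facet's vertex winding must be consistently outward.
    """
    vol = 0.0
    for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in facets:
        vol += (ax * (by * cz - bz * cy)
                + ay * (bz * cx - bx * cz)
                + az * (bx * cy - by * cx)) / 6.0
    return vol

# Unit right tetrahedron: four outward-wound triangular facets
o, a, b, c = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tet = [(a, b, c), (o, a, c), (o, b, a), (o, c, b)]
print(mesh_volume(tet))  # 1/6
```

For a real model the same loop would run over the facets parsed from the STL file; a negative result indicates inward-wound (flipped) facets.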
2006-10-01
…lead to false-positive segmental hair analysis results. Due to the increased risk of false positives associated with segmental hair analysis …to 200 mg of hair (to allow confirmation testing). The segments are typically washed to remove external contaminants and the chemicals in the hair …further confirmation. The method overcomes the false positives associated with traditional segmental hair analysis. By measuring the …
Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans.
Reda, Fitsum A; Noble, Jack H; Rivas, Alejandro; McRackan, Theodore R; Labadie, Robert F; Dawant, Benoit M
2011-10-01
Cochlear implant surgery is used to implant an electrode array in the cochlea to treat hearing loss. The authors recently introduced a minimally invasive image-guided technique termed percutaneous cochlear implantation. This approach achieves access to the cochlea by drilling a single linear channel from the outer skull into the cochlea via the facial recess, a region bounded by the facial nerve and chorda tympani. To exploit existing methods for automatically computing safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The goal of this work is to automatically segment the facial nerve and chorda tympani in pediatric CT scans. The authors have previously proposed an automatic technique that achieves this segmentation task in adult patients by relying on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work, the authors attempted to use the same method to segment the structures in pediatric scans. However, they found that substantial differences exist between the anatomy of children and that of adults, which led to poor segmentation results when an adult model was used to segment a pediatric volume. Therefore, the authors built a new model for pediatric cases and used it to segment pediatric scans. Once this new model was built, the authors employed the same segmentation method used for adults, with algorithm parameters optimized for pediatric anatomy. A validation experiment was conducted on 10 CT scans in which manually segmented structures were compared to automatically segmented structures. The mean, standard deviation, median, and maximum segmentation errors were 0.23, 0.17, 0.18, and 1.27 mm, respectively. The results indicate that accurate segmentation of the facial nerve and chorda tympani in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
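For intuition, the single-component special case of such an exponential model can be identified from noiseless samples by successive ratios (a Prony-style sketch, not the authors' nonharmonic Fourier expansion algorithm; the damping and frequency values below are made up):

```python
import cmath

def estimate_pole(x):
    """Estimate z in x[n] = A * z**n from noiseless samples via successive ratios."""
    ratios = [x[n + 1] / x[n] for n in range(len(x) - 1)]
    return sum(ratios) / len(ratios)

# Synthetic "EEG segment": damping 0.95 per sample, 0.3 rad/sample frequency
z_true = 0.95 * cmath.exp(0.3j)
x = [2.0 * z_true ** n for n in range(50)]
z_hat = estimate_pole(x)
print(abs(z_hat), cmath.phase(z_hat))  # recovers the damping (~0.95) and frequency (~0.3)
```

The modulus and phase of the estimated pole play the role of the model parameters said to characterize an EEG segment; a multi-component fit requires solving for several poles jointly, which the ratio trick does not handle.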
Logistic Stick-Breaking Process
Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.
2013-01-01
A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
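The stick-breaking construction described above can be sketched directly: each logistic stick takes a share of the remaining probability mass, and the final component absorbs the remainder, so the weights sum to one at every covariate value (the regression parameters below are illustrative, not fitted):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def lsbp_weights(x, params):
    """Mixture weights at covariate x from K-1 logistic sticks (w_k, b_k).

    pi_k = sigma(w_k * x + b_k) * prod_{j<k} (1 - sigma(w_j * x + b_j));
    the last component takes whatever stick length remains.
    """
    remaining, weights = 1.0, []
    for w, b in params:
        s = sigmoid(w * x + b)
        weights.append(remaining * s)
        remaining *= 1.0 - s
    weights.append(remaining)
    return weights

# Two sticks give three components whose weights vary smoothly with location x
print(lsbp_weights(0.0, [(4.0, -2.0), (4.0, 2.0)]))  # three weights summing to 1
```

Because the sticks are logistic functions of location, nearby x values get similar weights, which is exactly the spatial-contiguity bias the LSBP imposes.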
Reconstruction of incomplete cell paths through a 3D-2D level set segmentation
NASA Astrophysics Data System (ADS)
Hariri, Maia; Wan, Justin W. L.
2012-02-01
Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
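The Chan-Vese-style comparison of each pixel's intensity with the mean intensities inside and outside the contour can be illustrated on a single toy slice (hypothetical image and mask, not the authors' 3D-2D level-set code):

```python
def region_means(image, mask):
    """Mean intensity inside (mask True) and outside a contour on one 2-D slice."""
    ins = [v for i, row in enumerate(image) for j, v in enumerate(row) if mask[i][j]]
    out = [v for i, row in enumerate(image) for j, v in enumerate(row) if not mask[i][j]]
    return sum(ins) / len(ins), sum(out) / len(out)

def chan_vese_force(v, c_in, c_out):
    """Sign of the Chan-Vese data force at intensity v: positive pulls the pixel inside."""
    return (v - c_out) ** 2 - (v - c_in) ** 2

# A bright "cell" in the top-left corner against a dark background
image = [[9, 9, 1],
         [9, 8, 1],
         [1, 1, 1]]
mask = [[True, True, False],
        [True, True, False],
        [False, False, False]]
c_in, c_out = region_means(image, mask)
print(chan_vese_force(9, c_in, c_out) > 0)  # bright pixel is pulled inside the contour
```

In the paper's variant the means are taken per slice from the 2-D projected contour, while the curvature term acting on the 3-D level-set surface bridges the slices where cells disappear.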
Pothrat, Claude; Authier, Guillaume; Viehweger, Elke; Berton, Eric; Rao, Guillaume
2015-06-01
Biomechanical models representing the foot as a single rigid segment are commonly used in clinical or sport evaluations. However, neglecting internal foot movements can lead to significant inaccuracies in ankle joint kinematics. The present study assessed 3D ankle kinematic outputs using two distinct biomechanical models and their application to the clinical flat-foot case. Results of the Plug-in Gait (one-segment foot model) and the Oxford Foot Model (multisegment foot model) were compared for normal children (9 participants) and children with flat feet (9 participants). Repeated-measures analyses of variance were performed to assess the effects of foot model and group on ankle joint kinematics. Significant differences were observed between the two models for each group throughout the gait cycle. In particular, for the flat-feet group, opposite results between the Oxford Foot Model and the Plug-in Gait were revealed at heel strike, with the Plug-in Gait showing 4.7° of ankle dorsiflexion and 2.7° of varus where the Oxford Foot Model showed 4.8° of ankle plantar flexion and 1.6° of valgus. Ankle joint kinematics of the flat-feet group was more affected by foot modeling than that of the normal group. Foot modeling appeared to have a strong influence on the resulting ankle kinematics. Moreover, our findings showed that this influence can vary depending on the population. Studies involving ankle joint kinematic assessment should treat foot modeling with caution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Genesis of multipeaked waves of the esophagus: repetitive contractions or motion artifact?
Sampath, Neha J; Bhargava, Valmik; Mittal, Ravinder K
2010-06-01
Multipeaked waves (MPW) in the distal esophagus occur frequently in patients with esophageal spastic motor disorders and diabetes mellitus and are thought to represent repetitive esophageal contractions. We aimed to investigate whether the relative motion between a stationary pressure sensor and contracted peristaltic esophageal segment that moves with respiration leads to the formation of MPW. We mathematically modeled the effect of relative movement between a moving pressure segment and a fixed pressure sensor on the pressure waveform morphology. We conducted retrospective analysis of 100 swallow-induced esophageal contractions in 10 patients, who demonstrated >30% MPW on high-resolution manometry (HRM) during standardized swallows. Finally, using HRM, we determined the effects of suspended breathing and hyperventilation on the waveform morphology in 10 patients prospectively. Modeling revealed that relative movement between a stationary pressure sensor and a moving contracted segment, contraction duration, contraction amplitude, respiratory frequency, and depth of respiration affects the waveform morphology. Retrospective analysis demonstrated a close temporal association with the onset of second and subsequent contractions in MPW with respiratory phase reversals. Numbers of peaks in MPW and respiratory phase reversals were closely related to the duration of contraction. In the prospective study, suspended breathing and hyperventilation resulted in a significant decrease and increase in the MPW frequency as well as the number of peaks within MPW respectively. We conclude that MPW observed during clinical motility studies are not indicative of repetitive esophageal contraction; rather they represent respiration-related movement of the contracted esophageal segment in relation to the stationary pressure sensor.
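The modeled mechanism, a fixed sensor sampling a contracted segment whose position oscillates with respiration, can be sketched numerically (the geometry, respiratory rate and amplitude below are made up, not fitted to the manometry data):

```python
import math

def count_pressure_peaks(amp_resp, f_resp=2.0, half_len=1.0, steps=2000):
    """Count pressure plateaus seen by a fixed sensor at x = 0 while a contracted
    segment's centre oscillates with respiration: c(t) = amp_resp * sin(2*pi*f_resp*t).

    The sensor reads high pressure whenever it lies within the segment, i.e.
    |c(t)| < half_len; each entry into the segment registers as one "peak".
    """
    peaks, inside_prev = 0, False
    for k in range(steps + 1):
        t = k / steps
        inside = abs(amp_resp * math.sin(2 * math.pi * f_resp * t)) < half_len
        if inside and not inside_prev:
            peaks += 1
        inside_prev = inside
    return peaks

print(count_pressure_peaks(0.0))  # no respiratory motion: one sustained peak
print(count_pressure_peaks(2.0))  # with motion, the same single contraction looks multipeaked
```

This reproduces the paper's qualitative point: deeper or faster respiratory excursions (larger `amp_resp`, higher `f_resp`) relative to the segment length yield more apparent peaks without any repetitive contraction.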
2012-01-01
Background This study illustrates an evidence-based method for the segmentation analysis of patients that could greatly improve the approach to population-based medicine, by filling a gap in the empirical analysis of this topic. Segmentation facilitates individual patient care in the context of the culture, health status, and the health needs of the entire population to which that patient belongs. Because many health systems are engaged in developing better chronic care management initiatives, patient profiles are critical to understanding whether some patients can move toward effective self-management and can play a central role in determining their own care, which fosters a sense of responsibility for their own health. A review of the literature on patient segmentation provided the background for this research. Method First, we conducted a literature review on patient satisfaction and segmentation to build a survey. Then, we performed 3,461 surveys of outpatient services users. The key structures on which the subjects’ perception of outpatient services was based were extrapolated using principal component factor analysis with varimax rotation. After the factor analysis, segmentation was performed through cluster analysis to better analyze the influence of individual attitudes on the results. Results Four segments were identified through factor and cluster analysis: the “unpretentious,” the “informed and supported,” the “experts” and the “advanced” patients. Their policies and managerial implications are outlined. Conclusions With this research, we provide the following: – a method for profiling patients based on common patient satisfaction surveys that is easily replicable in all health systems and contexts; – a proposal for segments based on the results of a broad-based analysis conducted in the Italian National Health System (INHS). Segments represent profiles of patients requiring different strategies for delivering health services. 
Their knowledge and analysis might support an effort to build an effective population-based medicine approach. PMID:23256543
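The clustering step of such a segmentation analysis can be sketched with a plain k-means pass over two hypothetical factor scores per patient (the authors used principal component factor analysis with varimax rotation followed by cluster analysis; this toy substitutes simple k-means on made-up data):

```python
def kmeans(points, seeds, iters=20):
    """Plain k-means on 2-D points; seeds are the initial centroids."""
    centroids = [list(s) for s in seeds]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance
        for i, (x, y) in enumerate(points):
            labels[i] = min(range(len(centroids)),
                            key=lambda k: (x - centroids[k][0]) ** 2 + (y - centroids[k][1]) ** 2)
        # Update step: move each centroid to the mean of its members
        for k in range(len(centroids)):
            member = [p for p, l in zip(points, labels) if l == k]
            if member:
                centroids[k] = [sum(c) / len(member) for c in zip(*member)]
    return labels, centroids

# Hypothetical patients' scores on two satisfaction factors
scores = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # low-involvement profile
          (2.0, 2.1), (2.2, 1.9), (1.9, 2.0)]   # high-involvement profile
labels, _ = kmeans(scores, seeds=[(0.0, 0.0), (2.0, 2.0)])
print(labels)  # first three patients in one segment, last three in the other
```

In practice the inputs would be the rotated factor scores from the survey, and the number of clusters (four segments in the paper) would be chosen by inspecting cluster validity, not fixed in advance.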
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set and compared with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Yi, Faliu; Moon, Inkyu; Javidi, Bahram
2017-10-01
In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
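The marker-controlled watershed idea, flooding labels outward from internal markers through a topographic (intensity) image, can be sketched with a toy priority flood (this is a generic illustration, not the FCN-2 pipeline; the image and markers are made up):

```python
import heapq

def marker_flood(intensity, markers):
    """Toy marker-controlled watershed: flood labels outward from marker pixels,
    always expanding through the lowest-intensity unlabelled neighbour first."""
    rows, cols = len(intensity), len(intensity[0])
    labels = [[0] * cols for _ in range(rows)]  # 0 = unlabelled
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (intensity[r][c], r, c, lab))
    while heap:
        _, r, c, lab = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = lab
                heapq.heappush(heap, (intensity[nr][nc], nr, nc, lab))
    return labels

# Two touching "cells" separated by a bright ridge in the middle column
img = [[1, 5, 1],
       [1, 5, 1],
       [1, 5, 1]]
labels = marker_flood(img, {(1, 0): 1, (1, 2): 2})
print(labels)
```

The role of the FCN in the second model above is precisely to supply reliable internal markers (the predicted cell interiors), after which a flood like this separates touching cells along the high-intensity ridges.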
Automatic knee cartilage delineation using inheritable segmentation
NASA Astrophysics Data System (ADS)
Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.
2008-03-01
We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin-plate-spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to the image data based on gray-value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.
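The closing remark, that small distance deviations in thin structures produce large per-voxel errors, is easy to quantify (hypothetical one-voxel-thick masks, not the study's data):

```python
def sensitivity(truth, pred):
    """Per-voxel sensitivity: fraction of true voxels also present in the prediction."""
    tp = sum(t and p for t, p in zip(truth, pred))
    return tp / sum(truth)

# A "cartilage sheet" one voxel thick on a line of 8 voxels
truth   = [0, 0, 0, 1, 0, 0, 0, 0]
shifted = [0, 0, 0, 0, 1, 0, 0, 0]  # same shape, displaced by a single voxel
print(sensitivity(truth, truth))    # 1.0
print(sensitivity(truth, shifted))  # 0.0: a one-voxel distance error misses every voxel
```

For a structure one voxel thick, a one-voxel displacement drops per-voxel sensitivity from 100% to 0%, whereas a surface-distance metric would report only a 1-voxel error; this is why the authors call the per-voxel endpoint a hard criterion.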
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
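A classic heuristic for the packing side of such a problem is first-fit decreasing, sketched below (a generic bin-packing heuristic standing in for the patented optimization; the item volumes and container capacity are made up):

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into as few bins as a greedy pass can manage:
    sort descending, then place each item into the first bin with room."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no existing bin fits: open a new one
            bins.append([size])
    return bins

# Hypothetical segmented-item volumes packed into containers of capacity 10
bins = first_fit_decreasing([7, 5, 4, 3, 2, 2], capacity=10)
print(len(bins), bins)  # 3 containers, each within capacity
```

In the patented process the objective is richer (cut count, density, dose), so a heuristic like this would at most seed the search; it does not account for item geometry or cutting cost.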
A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks
Wang, Changjian; Liu, Xiaohui; Jin, Shiyao
2018-01-01
Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use hand-crafted image features and complete the task without large amounts of labeled data. Meanwhile, the methods based on deep neural networks can extract image features effectively without manual feature design, but large amounts of training data are required. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm we designed in the paper to highlight image features. Then, the preprocessed images are segmented by deep neural networks. Finally, semantic corrections are applied to the segmentation results. The model shows good performance in our experiment. PMID:29955227
NASA Astrophysics Data System (ADS)
He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan
2017-07-01
While the popular thin-layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scan images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nyström method (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma of unsegmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the unsegmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
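The coplanar-analysis step can be sketched: two valley vectors meeting at a common point span a candidate structural plane, whose normal comes from their cross product, and other points can then be tested against that plane (the coordinates below are illustrative, not terrain data):

```python
def cross(u, v):
    """Cross product of two 3-D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def plane_from_vectors(origin, v1, v2):
    """Plane through `origin` spanned by two valley-segment vectors:
    returns (normal, d) such that points x on the plane satisfy normal . x = d."""
    n = cross(v1, v2)
    d = sum(ni * oi for ni, oi in zip(n, origin))
    return n, d

def on_plane(point, n, d, tol=1e-9):
    return abs(sum(ni * pi for ni, pi in zip(n, point)) - d) < tol

# Two valley vectors meeting at a hilltop point define a dipping plane
n, d = plane_from_vectors((0.0, 0.0, 5.0), (1.0, 0.0, -1.0), (0.0, 1.0, -1.0))
print(on_plane((1.0, 1.0, 3.0), n, d))  # a point one unit along each vector lies in the plane
```

A near-zero normal flags (anti)parallel vector pairs that do not define a plane, and comparing normals across many pairs reveals the common plane orientations the method looks for.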
Cobb, Stephen C; Joshi, Mukta N; Pomeroy, Robin L
2016-12-01
In-vitro and invasive in-vivo studies have reported relatively independent motion in the medial and lateral forefoot segments during gait. However, most current surface-based models have not defined medial and lateral forefoot or midfoot segments. The purpose of the current study was to determine the reliability of a 7-segment foot model that includes medial and lateral midfoot and forefoot segments during walking gait. Three-dimensional positions of marker clusters located on the leg and 6 foot segments were tracked as 10 participants completed 5 walking trials. To examine the reliability of the foot model, coefficients of multiple correlation (CMC) were calculated across the trials for each participant. Three-dimensional stance time series and range of motion (ROM) during stance were also calculated for each functional articulation. CMCs for all of the functional articulations were ≥ 0.80. Overall, the rearfoot complex (leg-calcaneus segments) was the most reliable articulation and the medial midfoot complex (calcaneus-navicular segments) was the least reliable. With respect to ROM, reliability was greatest for plantarflexion/dorsiflexion and least for abduction/adduction. Further, the stance ROM and time-series patterns results between the current study and previous invasive in-vivo studies that have assessed actual bone motion were generally consistent.
Integrated multidisciplinary analysis of segmented reflector telescopes
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.; Needels, Laura
1992-01-01
The present multidisciplinary telescope-analysis approach, which encompasses thermal, structural, control and optical considerations, is illustrated for the case of an IR telescope in LEO; attention is given to end-to-end evaluations of the effects of mechanical disturbances and thermal gradients in measures of optical performance. Both geometric ray-tracing and surface-to-surface diffraction approximations are used in the telescope's optical model. Also noted is the role played by NASA-JPL's Integrated Modeling of Advanced Optical Systems computation tool, in view of numerical samples.
2012-01-01
Background Metamorphosis in insects transforms the larval into an adult body plan and comprises the destruction and remodeling of larval and the generation of adult tissues. The remodeling of larval into adult muscles promises to be a genetic model for human atrophy since it is associated with dramatic alteration in cell size. Furthermore, muscle development is amenable to 3D in vivo microscopy at high cellular resolution. However, multi-dimensional image acquisition leads to sizeable amounts of data that demand novel approaches in image processing and analysis. Results To handle, visualize and quantify time-lapse datasets recorded in multiple locations, we designed a workflow comprising three major modules. First, the previously introduced TLM-converter concatenates stacks of single time-points. The second module, TLM-2D-Explorer, creates maximum intensity projections for rapid inspection and allows the temporal alignment of multiple datasets. The transition between prepupal and pupal stage serves as reference point to compare datasets of different genotypes or treatments. We demonstrate how the temporal alignment can reveal novel insights into the east gene which is involved in muscle remodeling. The third module, TLM-3D-Segmenter, performs semi-automated segmentation of selected muscle fibers over multiple frames. 3D image segmentation consists of 3 stages. First, the user places a seed into a muscle of a key frame and performs surface detection based on level-set evolution. Second, the surface is propagated to subsequent frames. Third, automated segmentation detects nuclei inside the muscle fiber. The detected surfaces can be used to visualize and quantify the dynamics of cellular remodeling. To estimate the accuracy of our segmentation method, we performed a comparison with a manually created ground truth. Key and predicted frames achieved a performance of 84% and 80%, respectively. 
Conclusions We describe an analysis pipeline for the efficient handling and analysis of time-series microscopy data that enhances productivity and facilitates the phenotypic characterization of genetic perturbations. Our methodology can easily be scaled up for genome-wide genetic screens using readily available resources for RNAi based gene silencing in Drosophila and other animal models. PMID:23282138
Anomalous toluene transport in model segmented polyurethane-urea/clay nanocomposites.
Rath, Sangram K; Bahadur, Jitendra; Panda, Himanshu S; Sen, Debasis; Patro, T Umasankar; S, Praveen; Patri, Manornajan; Khakhar, Devang V
2018-05-16
The kinetics of liquid solvent sorption in polymeric systems and their nanocomposites often deviate from normal Fickian behaviour. This needs to be understood and interpreted, in terms of their underlying mechanistic origins. In the present study, the results of time dependent toluene sorption measurements in model segmented polyurethane-urea/clay nanocomposites have been analysed at room temperature. The studies revealed pronounced S-shaped sorption curves and unusually higher swelling of the nanocomposites compared to the neat polyurethane-urea matrix. Dynamic mechanical analysis (DMA) and small angle X-ray scattering (SAXS) measurements on the nanocomposites in the dry and liquid toluene saturated state have been carried out. The DMA studies revealed a significant decrease in the α relaxation temperature and storage modulus of the nanocomposites in the swollen state compared to the dry samples. The SAXS results showed that the nanoclay dispersion morphology transformed from intercalation in the dry state to exfoliation in the swollen state and the interdomain distance between hard segments increased upon swelling. Thermodynamic analysis of the Flory-Huggins interaction parameter (χ) of nanocomposite/toluene systems revealed increasingly negative χ values with increased clay loading. These results imply a significant plasticization effect of toluene on the nanocomposites. An interpretation of these data, which relates the abovementioned results, is presented in the framework of differential swelling stress (DSS) induced deviation from Fickian transport characteristics. We expect that these findings and methods may provide new insight into the analysis of the solvent diffusion process in heterogeneous polymers and their nanocomposites.
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., graph cut, random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model was employed for image modelling, and iteration of the Expectation Maximization algorithm learns the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
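The modelling step described above, a Gaussian Mixture Model fitted by Expectation Maximization, can be sketched in one dimension. This is a toy illustration under our own naming and data, not the authors' implementation:

```python
import math, random

def em_gmm_1d(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture to intensities x by EM."""
    # crude initialisation: one component at each end of the intensity range
    mu = [min(x), max(x)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk + 1e-6
    return mu, var, pi

# synthetic "background" and "hand" intensity clusters
random.seed(0)
x = [random.gauss(0.2, 0.05) for _ in range(200)] + \
    [random.gauss(0.8, 0.05) for _ in range(200)]
mu, var, pi = em_gmm_1d(x)
print(sorted(round(m, 1) for m in mu))  # [0.2, 0.8]
```

In the full method, the per-pixel responsibilities would feed the Gibbs energy minimized by graph cut.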
Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent
2010-04-01
The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking into an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model compared to a deformable surface that is only driven by an image appearance model.
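The Dice similarity coefficient quoted above is straightforward to compute from two binary masks; a minimal sketch with toy masks and our own naming:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    twice the overlap divided by the total labelled area."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

auto   = [1, 1, 1, 0, 0, 0]  # automatic segmentation
manual = [1, 1, 0, 1, 0, 0]  # expert reference
print(dice(auto, manual))  # 2*2 / (3+3) = 0.666...
```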
Model-based segmentation of the facial nerve and chorda tympani in pediatric CT scans
NASA Astrophysics Data System (ADS)
Reda, Fitsum A.; Noble, Jack H.; Rivas, Alejandro; Labadie, Robert F.; Dawant, Benoit M.
2011-03-01
In image-guided cochlear implant surgery an electrode array is implanted in the cochlea to treat hearing loss. Access to the cochlea is achieved by drilling from the outer skull to the cochlea through the facial recess, a region bounded by the facial nerve and the chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The effectiveness of traditional segmentation approaches to achieve this is severely limited because the facial nerve and chorda are small structures (~1 mm and ~0.3 mm in diameter, respectively) and exhibit poor image contrast. We have recently proposed a technique to achieve this task in adult patients, which relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work we use the same method to segment pediatric scans. We show that substantial differences exist between the anatomy of children and the anatomy of adults, which lead to poor segmentation results when an adult model is used to segment a pediatric volume. We have built a new model for pediatric cases and we have applied it to ten scans. A leave-one-out validation experiment was conducted in which manually segmented structures were compared to automatically segmented structures. The maximum segmentation error was 1 mm. This result indicates that accurate segmentation of the facial nerve and chorda in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.
Pulmonary lobe segmentation based on ridge surface sampling and shape model fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, James C., E-mail: jross@bwh.harvard.edu; Surgical Planning Lab, Brigham and Women's Hospital, Boston, Massachusetts 02215; Laboratory of Mathematics in Imaging, Brigham and Women's Hospital, Boston, Massachusetts 02126
2013-12-15
Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particle data to discriminate between fissure and nonfissure candidates. The resulting set of particle points is used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.8%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT.
Conclusions: The proposed algorithm is effective for lung lobe segmentation in the absence of auxiliary structures such as vessels and airways. The most challenging cases are those with mostly incomplete, absent, or near-absent fissures and cases with poorly revealed fissures due to high image noise. However, the authors observe good performance even in the majority of these cases.
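The Hessian-based ridge feature mentioned in the Methods can be illustrated on a toy image: a fissure-like bright sheet produces one strongly negative Hessian eigenvalue across the sheet and a near-zero eigenvalue along it. The finite-difference scheme and the image below are our illustrative choices, not the authors' code:

```python
import math

def hessian_eigs_2d(img, i, j):
    """Eigenvalues (low, high) of the 2x2 finite-difference Hessian at (i, j)."""
    hxx = img[i][j+1] - 2*img[i][j] + img[i][j-1]
    hyy = img[i+1][j] - 2*img[i][j] + img[i-1][j]
    hxy = (img[i+1][j+1] - img[i+1][j-1] - img[i-1][j+1] + img[i-1][j-1]) / 4.0
    tr, det = hxx + hyy, hxx * hyy - hxy * hxy
    s = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 - s, tr / 2 + s

# toy image: a bright horizontal ridge along row 2
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
lo, hi = hessian_eigs_2d(img, 2, 2)
print(lo, hi)  # strong negative curvature across the ridge, ~0 along it
```

Particles sampling such locations would then be kept or discarded by the PCA shape models.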
NASA Astrophysics Data System (ADS)
Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.
2012-03-01
Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions, the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity tissue class assignments through Markov random field modeling. Evaluation of the algorithm was retrospectively applied to a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions from 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with the 95th percentile value histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
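The 95th-percentile histogram-matching step can be sketched as a simple intensity rescaling; the function names and toy data below are ours, not the authors':

```python
def percentile(values, q):
    """Nearest-rank percentile of a list of intensities."""
    s = sorted(values)
    idx = min(int(round(q / 100.0 * (len(s) - 1))), len(s) - 1)
    return s[idx]

def match_p95(baseline, followup):
    """Scale the follow-up scan so its 95th-percentile intensity equals the
    baseline's, making ventilation intensities comparable across visits."""
    scale = percentile(baseline, 95) / percentile(followup, 95)
    return [v * scale for v in followup]

baseline = list(range(101))              # 95th percentile = 95
followup = [2 * v for v in range(101)]   # 95th percentile = 190
matched = match_p95(baseline, followup)
print(percentile(matched, 95))  # 95.0
```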
Algorithm research on infrared imaging target extraction based on GAC model
NASA Astrophysics Data System (ADS)
Li, Yingchun; Fan, Youchen; Wang, Yanqing
2016-10-01
Effective target detection and tracking techniques are important for increasing infrared target detection distance and enhancing resolution capacity. For the target detection problem in infrared imaging, the basic principles of the level set method and the GAC model are first analyzed in detail. Secondly, a "convergent force" is added to build an improved GAC model, addressing the defect that the GAC model stagnates outside deep concave regions and cannot reach deep concave edges. Lastly, a self-adaptive detection method combining the Sobel operator and the GAC model is put forward, exploiting the fact that the approximate position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. In order to verify the effectiveness of the model, two groups of experiments were carried out on images with different noise levels. In addition, a comparative analysis was conducted with the LBF and LIF models. The experimental results show that under slight noise the target is well localized by the LIF and LBF algorithms, with segmentation accuracy above 0.8. Under strong noise, however, the GAC, LIF and LBF algorithms cannot distinguish the target from noise, so many non-target regions are extracted during the iterative process and segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the correct target position, with segmentation accuracy above 0.8 in both cases.
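The coarse localisation step that seeds the GAC model, detecting the approximate target position with the Sobel operator, can be sketched as follows (the toy image and comparison are ours, not the paper's code):

```python
import math

# standard 3x3 Sobel kernels for horizontal and vertical gradients
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_mag(img, i, j):
    """Sobel gradient magnitude at pixel (i, j) of a 2-D intensity grid."""
    gx = sum(KX[a][b] * img[i-1+a][j-1+b] for a in range(3) for b in range(3))
    gy = sum(KY[a][b] * img[i-1+a][j-1+b] for a in range(3) for b in range(3))
    return math.hypot(gx, gy)

# bright square target on a dark background
img = [[0] * 6 for _ in range(6)]
for i in range(2, 5):
    for j in range(2, 5):
        img[i][j] = 1

edge   = sobel_mag(img, 2, 2)   # on the target boundary
inside = sobel_mag(img, 3, 3)   # inside the homogeneous target
print(edge > inside)  # True: the response peaks on the target outline
```

Thresholding this response gives a rough target region from which the GAC curve evolution can start.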
Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.
Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L
2010-07-01
The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt) and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without postprocessing as contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques.
Stepwise multiple linear-regression formulas were derived and used to predict the TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. The best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
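The ROC evaluation above reduces to the rank (Mann-Whitney) form of the AUC; a minimal sketch with invented scores and labels, not the study's data:

```python
def auc(scores, labels):
    """AUC as the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# invented predicted-TAG scores vs. fatty-liver status
scores = [10, 30, 35, 60, 80, 90]
labels = [0,  0,  1,  0,  1,  1]
print(auc(scores, labels))  # 8 of 9 positive/negative pairs ranked correctly
```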
Ureter tracking and segmentation in CT urography (CTU) using COMPASS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadjiiski, Lubomir, E-mail: lhadjisk@umich.edu; Zick, David; Chan, Heang-Ping
2014-12-15
Purpose: The authors are developing a computerized system for automated segmentation of ureters in CTU, referred to as combined model-guided path-finding analysis and segmentation system (COMPASS). Ureter segmentation is a critical component for computer-aided diagnosis of ureter cancer. Methods: COMPASS consists of three stages: (1) rule-based adaptive thresholding and region growing, (2) path-finding and propagation, and (3) edge profile extraction and feature analysis. With institutional review board approval, 79 CTU scans performed with intravenous (IV) contrast material enhancement were collected retrospectively from 79 patient files. One hundred twenty-four ureters were selected from the 79 CTU volumes. On average, the ureters spanned 283 computed tomography slices (range: 116–399, median: 301). More than half of the ureters contained malignant or benign lesions and some had ureter wall thickening due to malignancy. A starting point for each of the 124 ureters was identified manually to initialize the tracking by COMPASS. In addition, the centerline of each ureter was manually marked and used as reference standard for evaluation of tracking performance. The performance of COMPASS was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter and by estimating the average distance and the average maximum distance between the computer and the manually tracked centerlines. Results: Of the 124 ureters, 120 (97%) were segmented completely (100%), 121 (98%) were segmented through at least 70%, and 123 (99%) were segmented through at least 50% of their length. In comparison, using our previous method, 85 (69%) ureters were segmented completely (100%), 100 (81%) were segmented through at least 70%, and 107 (86%) were segmented through at least 50% of their length.
With COMPASS, the average distance between the computer and the manually generated centerlines is 0.54 mm, and the average maximum distance is 2.02 mm. With our previous method, the average distance between the centerlines was 0.80 mm, and the average maximum distance was 3.38 mm. The improvements in the ureteral tracking length and both distance measures were statistically significant (p < 0.0001). Conclusions: COMPASS significantly improved ureter tracking, including regions across ureter lesions, wall thickening, and narrowing of the lumen.
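The centerline distance measures used above can be sketched as nearest-point distances from the computer-generated centerline to the manually marked one; the 2-D toy points and names are ours:

```python
import math

def centerline_distances(auto_pts, ref_pts):
    """Mean and maximum distance from each automatic centerline point to the
    nearest reference centerline point."""
    d = [min(math.dist(a, r) for r in ref_pts) for a in auto_pts]
    return sum(d) / len(d), max(d)

auto = [(0.0, 0.5), (1.0, 0.0), (2.0, 1.0)]  # computer-tracked points
ref  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # manually marked points
avg, mx = centerline_distances(auto, ref)
print(avg, mx)  # 0.5 1.0
```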
A robust and fast active contour model for image segmentation with intensity inhomogeneity
NASA Astrophysics Data System (ADS)
Ding, Keyan; Weng, Guirong
2018-04-01
In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing the local image intensity fitting functions before the evolution of the curve, the proposed model can effectively segment images with intensity inhomogeneity. The computational cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency compared to some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.
Semantic Image Segmentation with Contextual Hierarchical Models.
Seyedhosseini, Mojtaba; Tasdizen, Tolga
2016-05-01
Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
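The multi-resolution context idea, feeding each classifier the pooled and upsampled outputs of coarser levels, can be sketched on a 1-D signal. The pooling factor and names are our illustrative choices, not the CHM implementation:

```python
def pool2(x):
    """Downsample by 2 via average pooling (the coarser hierarchy level)."""
    return [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]

def upsample2(x):
    """Nearest-neighbour upsample by 2, back to the original grid."""
    return [v for v in x for _ in (0, 1)]

signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
context = upsample2(pool2(signal))       # coarse context, same length as input
features = list(zip(signal, context))    # per-pixel (image, context) features
print(context)  # [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
```

In CHM the coarse channel comes from a trained classifier's output rather than raw pooling, but the stacking of resolutions follows the same pattern.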
NASA Astrophysics Data System (ADS)
Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim
2014-04-01
This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.
NASA Astrophysics Data System (ADS)
Sharp, Andy; Heath, Jennifer; Peterson, Janet
2008-05-01
Consumer grade bioelectric impedance analysis (BIA) instruments measure the body's impedance at 50 kHz and yield a quick estimate of percent body fat. The frequency dependence of the impedance gives more information about the current pathway and the response of different tissues. This study explores the impedance response of human tissue at frequencies from 0.2 to 102 kHz using a four-probe method and probe locations standard for segmental BIA research of the arm. The data at 50 kHz for a 21-year-old healthy Caucasian male (resistance of 180 ± 10 Ω and reactance of 33 ± 2 Ω) are in agreement with previously reported values [1]. The frequency dependence is not consistent with the simple circuit models commonly used in evaluating BIA data, and repeatability of measurements is problematic. This research will contribute to a better understanding of the inherent difficulties in estimating body fat using consumer grade BIA devices. [1] Chumlea, William C., Richard N. Baumgartner, and Alex F. Roche. "Specific resistivity used to estimate fat-free mass from segmental body measures of bioelectrical impedance." Am J Clin Nutr 48 (1988): 7-15.
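A "simple circuit model" commonly fitted to such data places the extracellular resistance in parallel with the intracellular resistance in series with a membrane capacitance. A hedged sketch of that model's predicted complex impedance follows; the component values are invented for illustration and are not fitted to the reported measurements:

```python
import math

def bia_impedance(f, re=300.0, ri=600.0, cm=3e-9):
    """Complex impedance, in ohms, of Re in parallel with (Ri + 1/(j*w*Cm)),
    the standard three-element tissue model, at frequency f in Hz."""
    zc = 1.0 / (2j * math.pi * f * cm)   # membrane capacitance branch
    zi = ri + zc                          # intracellular path
    return re * zi / (re + zi)

z50k = bia_impedance(50e3)
# resistance (real part) and reactance (negative imaginary part) at 50 kHz
print(round(z50k.real, 1), round(-z50k.imag, 1))
```

In this model, impedance falls from Re at low frequency toward Re*Ri/(Re+Ri) at high frequency; deviations of measured spectra from this single-dispersion shape are the inconsistency the abstract refers to.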
Pomegranate MR images analysis using ACM and FCM algorithms
NASA Astrophysics Data System (ADS)
Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.
2011-10-01
Segmentation of an image plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has healthy nutritional and medicinal properties, and the maturity indices and the quality of internal tissues play an important role in the sorting process; these features cannot be easily determined by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes, such as non-destructive investigation, in order to determine the ripening index and the percentage of seeds in the growth period, segmentation of the internal structures should be performed as exactly as possible. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, stem and calyx pixels are usually mislabeled as internal tissue by the segmentation algorithm. To solve this problem, first, the fruit shape is extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of stem and calyx, while the accuracy of segmentation increases to 97.53% when stem and calyx are first removed by morphological filters.
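The final FCM step can be sketched in one dimension (two clusters, fuzzifier m = 2); the toy intensities stand in for seed versus soft-tissue pixels and are not the authors' data:

```python
def fcm_1d(x, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D intensities: returns the converged cluster centers."""
    centers = [min(x), max(x)]  # crude initialisation at the intensity extremes
    for _ in range(iters):
        # membership of each point in each cluster (standard FCM update)
        u = []
        for xi in x:
            d = [abs(xi - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[k] / dj) ** (2 / (m - 1)) for dj in d)
                      for k in range(c)])
        # membership-weighted re-estimation of cluster centers
        centers = [sum((u[i][k] ** m) * x[i] for i in range(len(x)))
                   / sum(u[i][k] ** m for i in range(len(x)))
                   for k in range(c)]
    return centers

x = [0.1, 0.12, 0.09, 0.11, 0.8, 0.82, 0.79, 0.81]
centers = sorted(fcm_1d(x))
print([round(ck, 1) for ck in centers])  # [0.1, 0.8]
```

Unlike hard k-means, every pixel keeps a graded membership in both tissue classes, which is what makes the method tolerant of the soft intensity boundaries in MR fruit images.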
Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure.
Cunningham, Ryan J; Harding, Peter J; Loram, Ian D
2017-02-01
Despite widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain and injury, work-related disorders, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation, and shape registration to MRI-matched ultrasound images, via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose that this approach is generally applicable to segmenting, extrapolating and visualising deep muscle structure, and to analysing statistical features online.
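The Jaccard index quoted as the accuracy measure above is the intersection over union of the predicted and reference masks; a minimal sketch on toy binary masks (names are ours):

```python
def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union

pred  = [1, 1, 1, 0, 0]  # predicted muscle segment
truth = [1, 1, 0, 1, 0]  # annotated reference
print(jaccard(pred, truth))  # 2 overlapping of 4 labelled pixels = 0.5
```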