Sample records for volume method based

  1. Are PCI Service Volumes Associated with 30-Day Mortality? A Population-Based Study from Taiwan.

    PubMed

    Yu, Tsung-Hsien; Chou, Ying-Yi; Wei, Chung-Jen; Tung, Yu-Chi

    2017-11-09

    The volume-outcome relationship has been discussed for over 30 years; however, the findings are inconsistent. This might be due to the heterogeneity of service volume definitions and categorization methods. This study takes percutaneous coronary intervention (PCI) as an example to examine whether service volume is associated with PCI 30-day mortality under different service volume definitions and categorization methods. A population-based, cross-sectional multilevel study was conducted. Two definitions of physician and hospital volume were used: (1) the cumulative PCI volume in the year preceding each PCI; and (2) the cumulative PCI volume within the study period. The volume was further treated in three ways: (1) as a categorical variable based on the American Heart Association's recommendation; (2) as a semi-data-driven categorical variable based on the k-means clustering algorithm; and (3) as a data-driven categorical variable based on the Generalized Additive Model. The results showed that, after adjusting for patient-, physician-, and hospital-level covariates, physician volume was inversely associated with PCI 30-day mortality, but hospital volume was not, regardless of which definitions and categorization methods of service volume were applied. Physician volume is negatively associated with PCI 30-day mortality, but the results might vary with the definition and categorization method used.
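    A minimal sketch of the semi-data-driven categorization step described above, assuming annual provider PCI volumes are already tabulated (the example volumes, the three-group split, and the helper name are illustrative, not the study's actual data or cut-offs): one-dimensional k-means clustering assigns each provider to a low-, medium-, or high-volume group.

    ```python
    # Sketch: k-means categorization of provider volumes (assumed inputs).
    import numpy as np
    from sklearn.cluster import KMeans

    def categorize_volumes(volumes, n_groups=3, seed=0):
        """Assign each provider volume to a volume group (0 = lowest) via k-means."""
        v = np.asarray(volumes, dtype=float).reshape(-1, 1)
        km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit(v)
        # Relabel clusters so group indices increase with cluster-centre volume.
        order = np.argsort(km.cluster_centers_.ravel())
        relabel = {old: new for new, old in enumerate(order)}
        return np.array([relabel[c] for c in km.labels_])

    # Hypothetical annual physician PCI volumes
    print(categorize_volumes([12, 18, 25, 60, 75, 140, 160, 210]))  # e.g. [0 0 0 1 1 2 2 2]
    ```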

  2. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  3. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out by using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters by using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedral decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = 0.94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
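    The prism idea above can be illustrated with a simplified sketch (the authors' exact 82-prism construction is not reproduced; the regular grid layout and the flat back reference plane are assumptions for illustration): each grid cell of chest markers defines a vertical prism whose volume is its projected base area times its mean height above the reference plane, and the breathing volume change is the difference between chest volumes at two instants.

    ```python
    # Sketch: chest volume as a sum of marker-defined prisms (assumed grid layout).
    import numpy as np

    def chest_volume(grid_xyz, z_ref=0.0):
        """grid_xyz: (rows, cols, 3) marker coordinates; z measured from the back plane."""
        vol = 0.0
        rows, cols, _ = grid_xyz.shape
        for i in range(rows - 1):
            for j in range(cols - 1):
                cell = grid_xyz[i:i + 2, j:j + 2, :]              # 4 corner markers
                # Base area of the cell projected onto the reference plane (shoelace formula).
                x = cell[:, :, 0].ravel()[[0, 1, 3, 2]]
                y = cell[:, :, 1].ravel()[[0, 1, 3, 2]]
                area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
                height = cell[:, :, 2].mean() - z_ref             # mean marker height
                vol += area * height
        return vol

    # Breathing volume change between two sampled marker configurations:
    # dV = chest_volume(markers_t1) - chest_volume(markers_t0)
    ```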

  4. Accuracy of cancellous bone volume fraction measured by micro-CT scanning.

    PubMed

    Ding, M; Odgaard, A; Hvid, I

    1999-03-01

    Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
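    The calibration in method II lends itself to a short illustration. The sketch below is an assumption-laden stand-in (neither the scanner-supplied algorithm nor the authors' calibration is reproduced): it simply sweeps candidate grey-level thresholds and keeps the one whose segmented bone volume fraction best matches the Archimedes-based reference for that specimen.

    ```python
    # Sketch: pick the grey-level threshold that reproduces the Archimedes BV/TV.
    import numpy as np

    def calibrated_threshold(image, target_bv_tv, n_candidates=256):
        """Return the threshold whose bone volume fraction is closest to the reference."""
        img = np.asarray(image, dtype=float)
        candidates = np.linspace(img.min(), img.max(), n_candidates)
        bvtv = np.array([(img >= t).mean() for t in candidates])
        return candidates[np.argmin(np.abs(bvtv - target_bv_tv))]

    # Example with synthetic data: find the threshold giving BV/TV = 0.25
    rng = np.random.default_rng(0)
    fake_scan = rng.normal(100, 20, size=(64, 64, 64))
    print(calibrated_threshold(fake_scan, 0.25))
    ```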

  5. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
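    For context, the traditional step that the authors improve on can be sketched in a few lines: dye concentrations sampled a few minutes after injection are fit to a mono-exponential decay, back-extrapolated to the injection time to obtain C0, and the plasma volume is taken as dose/C0. The sampling times and concentrations below are invented for illustration only.

    ```python
    # Sketch: traditional mono-exponential back-extrapolation (illustrative numbers).
    import numpy as np

    def plasma_volume_backextrap(t_min, conc_mg_per_L, dose_mg):
        """Fit ln C = ln C0 - k*t by least squares and return dose / C0 (litres)."""
        t = np.asarray(t_min, dtype=float)
        ln_c = np.log(np.asarray(conc_mg_per_L, dtype=float))
        slope, intercept = np.polyfit(t, ln_c, 1)   # slope = -k, intercept = ln C0
        c0 = np.exp(intercept)
        return dose_mg / c0

    # Hypothetical 25 mg ICG dose, samples at 2-5 min post-injection
    v = plasma_volume_backextrap([2, 3, 4, 5], [8.0, 7.1, 6.3, 5.6], 25.0)
    print(f"estimated plasma volume ~ {v:.2f} L")
    ```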

  6. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step that is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and remains unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that do not require segmentation, leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are highly flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable the diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
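    A conceptual sketch of the segmentation-free idea, under the assumption that a fixed-length feature vector has already been extracted from each study; both the random-forest regressor and the synthetic data below are generic stand-ins rather than the specific learning machinery used by the authors.

    ```python
    # Sketch: direct (segmentation-free) volume estimation as feature-to-volume regression.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_direct_estimator(features, volumes_ml):
        """features: (n_studies, n_features); volumes_ml: reference ventricular volumes."""
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(features, volumes_ml)
        return model

    # Synthetic example: 40 studies, 64 image features each, volumes 60-150 mL
    rng = np.random.default_rng(0)
    X = rng.random((40, 64))
    y = 60 + 90 * rng.random(40)
    estimator = train_direct_estimator(X, y)
    print(estimator.predict(X[:2]))   # predicted volumes for two studies
    ```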

  7. MRI Volume Fusion Based on 3D Shearlet Decompositions.

    PubMed

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but observers may want to view the various contrasts within the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than fusion methods based on the conventional 2D wavelet, 2D DT CWT, 3D wavelet, and 3D DT CWT.

  8. Clinically significant change in stroke volume in pulmonary hypertension.

    PubMed

    van Wolferen, Serge A; van de Veerdonk, Marielle C; Mauritz, Gert-Jan; Jacobs, Wouter; Marcus, J Tim; Marques, Koen M J; Bronzwaer, Jean G F; Heymans, Martijn W; Boonstra, Anco; Postmus, Pieter E; Westerhof, Nico; Vonk Noordegraaf, Anton

    2011-05-01

    Stroke volume is probably the best hemodynamic parameter because it reflects therapeutic changes and contains prognostic information in pulmonary hypertension (PH). Stroke volume directly reflects right ventricular function in response to its load, without the correction of compensatory increased heart rate as is the case for cardiac output. For this reason, stroke volume, which can be measured noninvasively, is an important hemodynamic parameter to monitor during treatment. However, the extent of change in stroke volume that constitutes a clinically significant change is unknown. The aim of this study was to determine the minimal important difference (MID) in stroke volume in PH. One hundred eleven patients were evaluated at baseline and after 1 year of follow-up with a 6-min walk test (6MWT) and cardiac MRI. Using the anchor-based method with 6MWT as the anchor, and the distribution-based method, the MID of stroke volume change could be determined. After 1 year of treatment, there was, on average, a significant increase in stroke volume and 6MWT. The change in stroke volume was related to the change in 6MWT. Using the anchor-based method, an MID of 10 mL in stroke volume was calculated. The distribution-based method resulted in an MID of 8 to 12 mL. Both methods showed that a 10-mL change in stroke volume during follow-up should be considered as clinically relevant. This value can be used to interpret changes in stroke volume during clinical follow-up in PH.

  9. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  10. Partial volume correction of brain perfusion estimates using the inherent signal data of time-resolved arterial spin labeling.

    PubMed

    Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda

    2014-09-01

    Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume. Copyright © 2014 John Wiley & Sons, Ltd.
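    The regression step can be illustrated with a rough 2D sketch in the spirit of linear-regression partial volume correction (the kernel size, inputs, and the restriction to GM/WM are assumptions, not the paper's exact implementation): within a small neighbourhood of each voxel, the ASL signal is modelled as a weighted sum of pure gray-matter and white-matter signals, with the partial-volume fractions as regressors.

    ```python
    # Sketch: linear-regression partial volume correction on a 2D slice.
    import numpy as np

    def pvc_linear_regression(asl, p_gm, p_wm, kernel=5):
        """asl, p_gm, p_wm: 2D arrays of ASL signal and tissue partial-volume fractions."""
        half = kernel // 2
        gm_signal = np.zeros_like(asl, dtype=float)
        ny, nx = asl.shape
        for y in range(half, ny - half):
            for x in range(half, nx - half):
                win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
                A = np.column_stack([p_gm[win].ravel(), p_wm[win].ravel()])
                b = asl[win].ravel()
                coef, *_ = np.linalg.lstsq(A, b, rcond=None)
                gm_signal[y, x] = coef[0]        # estimated pure-GM contribution
        return gm_signal
    ```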

  11. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml), both imaged in 2D with off-line reconstruction and acquired as RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows the user to navigate through the reconstructed volume and to display any section of it.

  12. VOFTools - A software package of calculation tools for volume of fluid methods using general convex grids

    NASA Astrophysics Data System (ADS)

    López, J.; Hernández, J.; Gómez, P.; Faura, F.

    2018-02-01

    The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
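    Not the VOFTools routine itself, but a compact illustration of face-based polyhedron volume computation in the same spirit: with each face fan-triangulated and oriented outward, the volume is one sixth of the sum of the scalar triple products of the triangle vertices.

    ```python
    # Sketch: polyhedron volume from outward-oriented faces (divergence theorem form).
    import numpy as np

    def polyhedron_volume(vertices, faces):
        """vertices: (N, 3) array; faces: list of outward-oriented vertex-index loops."""
        verts = np.asarray(vertices, dtype=float)
        vol6 = 0.0
        for face in faces:
            p0 = verts[face[0]]
            for i in range(1, len(face) - 1):                # fan-triangulate the face
                p1, p2 = verts[face[i]], verts[face[i + 1]]
                vol6 += np.dot(p0, np.cross(p1, p2))
        return vol6 / 6.0

    # Unit cube check (faces listed with outward orientation)
    cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
            (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    faces = [[0, 3, 2, 1], [4, 5, 6, 7], [0, 1, 5, 4],
             [1, 2, 6, 5], [2, 3, 7, 6], [3, 0, 4, 7]]
    print(polyhedron_volume(cube, faces))   # 1.0
    ```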

  13. A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.

    PubMed

    Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W

    2009-03-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.

  14. A Diffusion Tensor Imaging Tractography Algorithm Based on Navier-Stokes Fluid Mechanics

    PubMed Central

    Hageman, Nathan S.; Toga, Arthur W.; Narr, Katherine; Shattuck, David W.

    2009-01-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color (DEC) images of the DTI dataset. PMID:19244007

  15. 3D ultrasound volume stitching using phase symmetry and Harris corner detection for orthopaedic applications

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Hacihaliloglu, Ilker; Abugharbieh, Rafeef

    2010-03-01

    Stitching of volumes obtained from three dimensional (3D) ultrasound (US) scanners improves visualization of anatomy in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central slices of these volumes. This is done by first removing artifacts in the US slices using intensity-invariant local phase image processing and then applying the Harris corner detection algorithm. Fast sub-volume registration on a small neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D US scans of phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been compared to volumetric registration, as well as feature-based registration using 3D-SIFT. Quantitative results show an average post-registration error of 0.33 mm, which is comparable to volumetric registration accuracy (0.31 mm) and much better than 3D-SIFT-based registration, which failed to register the volumes. The proposed method was also much faster than volumetric registration (~4.5 seconds versus 83 seconds).

  16. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

    A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy storage tanks. After the mathematical model for vertical tank volume calculation is analyzed, the key point-cloud processing algorithms, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are calculated automatically by computing the cross-sectional area along the horizontal direction and integrating along the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ was selected as the research object, and the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements and demonstrating the feasibility and effectiveness of the method.
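    The core volume calculation reduces to integrating cross-sectional areas over height. The sketch below assumes that a representative radius has already been fitted to each horizontal slice of the point cloud (the slice spacing, radii, and trapezoidal rule are illustrative choices, not the paper's exact processing chain).

    ```python
    # Sketch: tank volume by integrating per-slice cross-sectional areas over height.
    import numpy as np

    def tank_volume_from_slices(slice_heights_m, slice_radii_m):
        h = np.asarray(slice_heights_m, dtype=float)
        areas = np.pi * np.asarray(slice_radii_m, dtype=float) ** 2
        # Trapezoidal integration of area over height
        return float(np.sum(0.5 * (areas[1:] + areas[:-1]) * np.diff(h)))

    # Example: nearly cylindrical tank, radius about 17.8 m, 20 m tall (~20,000 m^3)
    h = np.linspace(0.0, 20.0, 41)
    r = 17.8 + 0.01 * np.sin(h)          # small wall deviations
    print(f"capacity ~ {tank_volume_from_slices(h, r):,.0f} m^3")
    ```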

  17. Determination of void volume in normal phase liquid chromatography.

    PubMed

    Jiang, Ping; Wu, Di; Lucy, Charles A

    2014-01-10

    Void volume is an important fundamental parameter in chromatography. Little prior discussion has focused on the determination of void volume in normal phase liquid chromatography (NPLC). Various methods to estimate the total void volume are compared: pycnometry; minor disturbance method based on injection of weak solvent; tracer pulse method; hold-up volume based on unretained compounds; and accessible volume based on Martin's rule and its descendants. These are applied to NPLC on silica, RingSep and DNAP columns. Pycnometry provides a theoretically maximum value for the total void volume and should be performed at least once for each new column. However, pycnometry does not reflect the volume of adsorbed strong solvent on the stationary phase, and so only yields an accurate void volume for weaker mobile phase conditions. 1,3,5-Tri-t-butyl benzene (TTBB) results in hold-up volumes that are convenient measures of the void volume for all eluent conditions on charge-transfer columns (RingSep and DNAP), but is weakly retained under weak eluent conditions on silica. Injection of the weak mobile phase component (hexane) may be used to determine void volume, but care must be exercised to select the appropriate disturbance feature. Accessible volumes, that are determined using a homologous series, are always biased low, and are not recommended as a measure of the void volume. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Fast multiview three-dimensional reconstruction method using cost volume filtering

    NASA Astrophysics Data System (ADS)

    Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.

    2014-03-01

    As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.

  19. Simplex volume analysis for finding endmembers in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.

    2015-05-01

    Using maximal simplex volume as an optimal criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is calculated. It turns out that the issue of calculating simplex volume is much more complicated and involved than we may think. This paper investigates the issue from two different aspects, geometric structure and eigen-analysis. The geometric approach derives the volume from the simplex structure itself, multiplying its base by its height. Eigen-analysis, on the other hand, takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank deficient. To deal with this problem, two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank. The drawback of this method is that the original volume is shrunk, and the volume of a dimensionality-reduced simplex is not the real original simplex volume. Another is to use singular value decomposition (SVD) to find singular values for calculating the simplex volume. The dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods for simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
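    The two calculation routes discussed above are easy to show side by side. The sketch below (a toy 2-simplex, not hyperspectral data) computes the simplex volume once from the Cayley-Menger determinant of pairwise distances and once from the singular values of the edge-vector matrix, which remains usable when the vertices live in a higher-dimensional space.

    ```python
    # Sketch: simplex volume via Cayley-Menger determinant and via SVD of edge vectors.
    import numpy as np
    from math import factorial

    def simplex_volume_cayley_menger(points):
        """points: (n+1, d) vertices of an n-simplex."""
        p = np.asarray(points, dtype=float)
        n = p.shape[0] - 1
        d2 = np.square(np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1))
        B = np.ones((n + 2, n + 2))
        B[0, 0] = 0.0
        B[1:, 1:] = d2
        cm = np.linalg.det(B)
        return np.sqrt(abs((-1) ** (n + 1) / (2 ** n * factorial(n) ** 2) * cm))

    def simplex_volume_svd(points):
        p = np.asarray(points, dtype=float)
        edges = p[1:] - p[0]                            # n edge vectors from one vertex
        s = np.linalg.svd(edges, compute_uv=False)      # singular values
        return float(np.prod(s)) / factorial(edges.shape[0])

    tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    print(simplex_volume_cayley_menger(tri), simplex_volume_svd(tri))   # both 0.5
    ```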

  20. High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.

    PubMed

    Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D

    2018-05-30

    NeuroQuant® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
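    The traditional (non-Bayesian) transformation amounts to fitting one regression line per region and applying it to new measurements. The sketch below uses ordinary least squares and made-up hippocampal volumes; it is only an illustration of the idea, not the fitted coefficients reported by the authors.

    ```python
    # Sketch: per-region linear transformation of FreeSurfer volumes onto the NeuroQuant scale.
    import numpy as np

    def fit_fs_to_nq(fs_volumes, nq_volumes):
        """Return (slope, intercept) so that nq ~ slope * fs + intercept."""
        slope, intercept = np.polyfit(np.asarray(fs_volumes, float),
                                      np.asarray(nq_volumes, float), 1)
        return slope, intercept

    def fs_to_nq(fs_value, slope, intercept):
        return slope * fs_value + intercept

    # Hypothetical hippocampal volumes (mL) measured by both pipelines
    fs = [3.9, 4.2, 3.4, 4.8, 3.1]
    nq = [3.5, 3.8, 3.1, 4.3, 2.8]
    a, b = fit_fs_to_nq(fs, nq)
    print(round(fs_to_nq(4.0, a, b), 2))   # FS volume of 4.0 mL mapped onto the NQ scale
    ```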

  1. Knowledge-based reconstruction for measurement of right ventricular volumes on cardiovascular magnetic resonance images in a mixed population.

    PubMed

    Pieterman, Elise D; Budde, Ricardo P J; Robbers-Visser, Daniëlle; van Domburg, Ron T; Helbing, Willem A

    2017-09-01

    Follow-up of right ventricular performance is important for patients with congenital heart disease. Cardiac magnetic resonance imaging is optimal for this purpose. However, the observer-dependency of manual analysis of right ventricular volumes limits its use. Knowledge-based reconstruction is a new semiautomatic analysis tool that uses a database including knowledge of right ventricular shape in various congenital heart diseases. We evaluated whether knowledge-based reconstruction is a good alternative to conventional analysis. To assess the inter- and intra-observer variability and agreement of knowledge-based versus conventional analysis of magnetic resonance right ventricular volumes, analysis was done by two observers in a mixed group of 22 patients with congenital heart disease affecting right ventricular loading conditions (dextro-transposition of the great arteries and right ventricle to pulmonary artery conduit) and a group of 17 healthy children. We used Bland-Altman analysis and the coefficient of variation. Comparison between the conventional method and the knowledge-based method showed systematically higher volumes for the latter. We found an overestimation for end-diastolic volume (bias -40 ± 24 mL, r = .956), end-systolic volume (bias -34 ± 24 mL, r = .943), and stroke volume (bias -6 ± 17 mL, r = .735), and an underestimation of ejection fraction (bias 7 ± 7%, r = .671) by knowledge-based reconstruction. The intra-observer variability of knowledge-based reconstruction varied, with a coefficient of variation of 9% for end-diastolic volume and 22% for stroke volume. The same trend was noted for inter-observer variability. A systematic difference (overestimation) was noted for right ventricular size as assessed with knowledge-based reconstruction compared with conventional methods of analysis. Observer variability for the new method was comparable to what has been reported for the right ventricle in children and congenital heart disease with conventional analysis. © 2017 Wiley Periodicals, Inc.

  2. A Novel Application for the Cavalieri Principle: A Stereological and Methodological Study

    PubMed Central

    Altunkaynak, Berrin Zuhal; Altunkaynak, Eyup; Unal, Deniz; Unal, Bunyamin

    2009-01-01

    Objective The Cavalieri principle was applied to consecutive pathology sections that were photographed at the same magnification and used to estimate tissue volumes by superimposing a point-counting grid on these images. The goal of this study was to implement the Cavalieri method in a quick and practical way. Materials and Methods In this study, 10 adult female Sprague Dawley rats were used. Brain tissue was removed and sampled both systematically and randomly. Brain volumes were estimated using two different methods. First, all brain slices were scanned with an HP ScanJet 3400C scanner, and their images were shown on a PC monitor. Brain volume was then calculated based on these images. Second, all brain slices were photographed at 10× magnification with a microscope camera, and brain volumes were estimated based on these micrographs. Results There was no statistically significant difference between the volume measurements of the two techniques (P>0.05; paired samples t test). Conclusion This study demonstrates that personal computer scanning of serial tissue sections allows for easy and reliable volume determination based on the Cavalieri method. PMID:25610077
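    The Cavalieri estimate itself is a one-line calculation once the grid points have been counted: with section thickness t and area per grid point a/p, the volume is t · (a/p) · ΣP. The counts, thickness, and grid spacing below are hypothetical.

    ```python
    # Sketch: Cavalieri point-counting volume estimate (hypothetical numbers).
    def cavalieri_volume(point_counts, section_thickness_mm, area_per_point_mm2):
        return section_thickness_mm * area_per_point_mm2 * sum(point_counts)

    # 10 sections, 1 mm apart, grid spacing 2 mm (area per point = 4 mm^2)
    counts = [12, 25, 38, 47, 52, 50, 44, 33, 21, 9]
    print(cavalieri_volume(counts, 1.0, 4.0), "mm^3")
    ```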

  3. [Target volume segmentation of PET images by an iterative method based on threshold value].

    PubMed

    Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L

    2014-01-01

    An automatic segmentation method is presented for PET images, based on an iterative approximation by threshold value that includes the influence of both lesion size and the background present during the acquisition. Optimal threshold values that represent a correct segmentation of volumes were determined based on a PET phantom study that contained spheres of different sizes in different known background activity environments. These optimal values were normalized to the background and adjusted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method and, based on it, a procedure for automatic delineation was proposed. This procedure was validated on phantom images and its viability was confirmed by retrospectively applying it to two oncology patients. The resulting adjustment function depended linearly on the SBR and inversely and negatively on the volume. During the validation of the proposed method, it was found that the volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
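    A hedged sketch of the iterative scheme (the published two-variable adjustment function is not reproduced; `threshold_fraction` below is a placeholder with an arbitrary form): segment at an initial relative threshold, recompute the volume and signal-to-background ratio, update the threshold from those values, and repeat until the volume stabilizes.

    ```python
    # Sketch: iterative threshold-based PET delineation with a placeholder adjustment function.
    import numpy as np

    def threshold_fraction(volume_ml, sbr):
        # Placeholder for the fitted function of lesion volume and SBR.
        return 0.4 + 0.2 / sbr + 0.02 / max(volume_ml, 0.1)

    def iterative_segmentation(img, voxel_ml, background, tol=0.01, max_iter=50):
        frac, prev = 0.5, None
        for _ in range(max_iter):
            mask = img >= frac * img.max()
            volume = mask.sum() * voxel_ml
            sbr = img[mask].mean() / background
            frac = threshold_fraction(volume, sbr)
            if prev is not None and abs(volume - prev) <= tol * prev:
                break
            prev = volume
        return mask, volume
    ```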

  4. Which Kind of Provider’s Operation Volumes Matters? Associations between CABG Surgical Site Infection Risk and Hospital and Surgeon Operation Volumes among Medical Centers in Taiwan

    PubMed Central

    Yu, Tsung-Hsien; Tung, Yu-Chi; Chung, Kuo-Piao

    2015-01-01

    Background Volume-infection relationships have been examined for high-risk surgical procedures, but the conclusions remain controversial. The inconsistency might be due to inaccurate identification of cases of infection and different methods of categorizing service volumes. This study takes coronary artery bypass graft (CABG) surgical site infections (SSIs) as an example to examine whether a relationship exists between operation volumes and SSIs when different SSI case identification approaches, definitions, and categorization methods of operation volumes were implemented. Methods A population-based cross-sectional multilevel study was conducted. A total of 7,007 patients who received CABG surgery between 2006 and 2008 from 19 medical centers in Taiwan were recruited. SSIs associated with CABG surgery were identified using International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes and a Classification and Regression Trees (CART) model. Two definitions of surgeon and hospital operation volumes were used: (1) the cumulative CABG operation volumes within the study period; and (2) the cumulative CABG operation volumes in the year preceding each CABG surgery. Operation volumes were further treated in three different ways: (1) as a continuous variable; (2) as a categorical variable based on quartiles; and (3) as a data-driven categorical variable based on the k-means clustering algorithm. Furthermore, subgroup analysis for comorbidities was also conducted. Results This study showed that hospital volumes were not significantly associated with SSIs, no matter which definitions or categorization methods of operation volume, or which SSI case identification approaches, were used. In contrast, the relationships between surgeon volumes and SSIs varied. Most of the models demonstrated that low-volume surgeons had a higher risk than high-volume surgeons. Conclusion Surgeon volumes were more important than hospital volumes in exploring the relationship between CABG operation volumes and SSIs in Taiwan. However, the relationships were not robust. Definitions and categorization methods of operation volume and correct identification of SSIs are important issues for future research. PMID:26053035

  5. Verification of the tumor volume delineation method using a fixed threshold of peak standardized uptake value.

    PubMed

    Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro

    2017-09-01

    We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.

  6. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  7. Volume measurements of individual muscles in human quadriceps femoris using atlas-based segmentation approaches.

    PubMed

    Le Troter, Arnaud; Fouré, Alexandre; Guye, Maxime; Confort-Gouny, Sylviane; Mattei, Jean-Pierre; Gondin, Julien; Salort-Campana, Emmanuelle; Bendahan, David

    2016-04-01

    Atlas-based segmentation is a powerful method for automatic structural segmentation of several sub-structures in many organs. However, such an approach has been very scarcely used in the context of muscle segmentation, and so far no study has assessed such a method for the automatic delineation of individual muscles of the quadriceps femoris (QF). In the present study, we have evaluated a fully automated multi-atlas method and a semi-automated single-atlas method for the segmentation and volume quantification of the four muscles of the QF and for the QF as a whole. The study was conducted in 32 young healthy males, using high-resolution magnetic resonance images (MRI) of the thigh. The multi-atlas-based segmentation method was conducted in 25 subjects. Different non-linear registration approaches based on free-form deformable (FFD) and symmetric diffeomorphic normalization algorithms (SyN) were assessed. Optimal parameters of two fusion methods, i.e., STAPLE and STEPS, were determined on the basis of the highest Dice similarity index (DSI) considering manual segmentation (MSeg) as the ground truth. Validation and reproducibility of this pipeline were determined using another MRI dataset recorded in seven healthy male subjects on the basis of additional metrics such as the muscle volume similarity values, intraclass coefficient, and coefficient of variation. Both non-linear registration methods (FFD and SyN) were also evaluated as part of a single-atlas strategy in order to assess longitudinal muscle volume measurements. The multi- and the single-atlas approaches were compared for the segmentation and the volume quantification of the four muscles of the QF and for the QF as a whole. Considering each muscle of the QF, the DSI of the multi-atlas-based approach was high (0.87 ± 0.11) and the best results were obtained with the combination of two deformation fields resulting from the SyN registration method and the STEPS fusion algorithm. The optimal variables for FFD and SyN registration methods were four templates and a kernel standard deviation ranging between 5 and 8. The segmentation process using a single-atlas-based method was more robust, with DSI values higher than 0.9. From the standpoint of muscle volume measurements, the multi-atlas-based strategy provided acceptable results regarding the QF muscle as a whole but highly variable results regarding individual muscles. In contrast, the performance of the single-atlas-based pipeline for individual muscles was highly comparable to the MSeg, thereby indicating that this method would be adequate for longitudinal tracking of muscle volume changes in healthy subjects. In the present study, we demonstrated that both multi-atlas and single-atlas approaches were relevant for the segmentation of individual muscles of the QF in healthy subjects. Considering muscle volume measurements, the single-atlas method provided promising perspectives regarding longitudinal quantification of individual muscle volumes.
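    The Dice similarity index used throughout the evaluation above is simply twice the overlap of two segmentations divided by the sum of their sizes; the toy masks below are for illustration.

    ```python
    # Sketch: Dice similarity index between two binary segmentation masks.
    import numpy as np

    def dice(mask_a, mask_b):
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
    manual = np.zeros((10, 10), dtype=bool); manual[3:9, 2:8] = True
    print(round(dice(auto, manual), 3))   # ~0.833
    ```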

  8. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    PubMed

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.

  9. Calculation of left ventricular volumes and ejection fraction from dynamic cardiac-gated 15O-water PET/CT: 5D-PET.

    PubMed

    Nordström, Jonny; Kero, Tanja; Harms, Hendrik Johannes; Widström, Charles; Flachskampf, Frank A; Sörensen, Jens; Lubberink, Mark

    2017-11-14

    Quantitative measurement of myocardial blood flow (MBF) is of increasing interest in the clinical assessment of patients with suspected coronary artery disease (CAD). 15O-water positron emission tomography (PET) is considered the gold standard for non-invasive MBF measurements. However, calculation of left ventricular (LV) volumes and ejection fraction (EF) is not possible from standard 15O-water uptake images. The purpose of the present work was to investigate the possibility of calculating LV volumes and LVEF from cardiac-gated parametric blood volume (VB) 15O-water images and from first pass (FP) images. Sixteen patients with mitral or aortic regurgitation underwent an eight-gate dynamic cardiac-gated 15O-water PET/CT scan and cardiac MRI. VB and FP images were generated for each gate. Calculations of end-systolic volume (ESV), end-diastolic volume (EDV), stroke volume (SV) and LVEF were performed with automatic segmentation of VB and FP images, using commercially available software. LV volumes and LVEF were calculated with surface-, count-, and volume-based methods, and the results were compared with gold standard MRI. Using VB images, high correlations between PET and MRI ESV (r = 0.89, p < 0.001), EDV (r = 0.85, p < 0.001), SV (r = 0.74, p = 0.006) and LVEF (r = 0.72, p = 0.008) were found for the volume-based method. Correlations for FP images were slightly, but not significantly, lower than those for VB images when compared to MRI. Surface- and count-based methods showed no significant difference compared with the volume-based correlations with MRI. The volume-based method showed the best agreement with MRI, with no significant difference on average for EDV and LVEF but with an overestimation of values for ESV (14%, p = 0.005) and SV (18%, p = 0.004) when using VB images. Using FP images, none of the parameters showed a significant difference from MRI. Inter-operator repeatability was excellent for all parameters (ICC > 0.86, p < 0.001). Calculation of LV volumes and LVEF from dynamic 15O-water PET is feasible and shows good correlation with MRI. However, the analysis method is laborious, and future work is needed for more automation to make the method more easily applicable in a clinical setting.
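    Once a per-gate LV volume has been obtained from the segmented VB or FP images, the reported functional indices reduce to simple arithmetic: EDV and ESV are the largest and smallest gated volumes, SV = EDV - ESV, and LVEF = SV / EDV. The eight-gate volume curve below is hypothetical.

    ```python
    # Sketch: LV functional indices from a gated volume curve (hypothetical values, mL).
    def lv_function(gated_volumes_ml):
        edv = max(gated_volumes_ml)
        esv = min(gated_volumes_ml)
        sv = edv - esv
        lvef = 100.0 * sv / edv
        return edv, esv, sv, lvef

    print(lv_function([152, 138, 112, 90, 78, 84, 109, 140]))   # EDV, ESV, SV, LVEF (%)
    ```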

  10. Comparison of five segmentation tools for 18F-fluoro-deoxy-glucose-positron emission tomography-based target volume definition in head and neck cancer.

    PubMed

    Schinagl, Dominic A X; Vogel, Wouter V; Hoffmann, Aswin L; van Dalen, Jorn A; Oyen, Wim J; Kaanders, Johannes H A M

    2007-11-15

    Target-volume delineation for radiation treatment to the head and neck area traditionally is based on physical examination, computed tomography (CT), and magnetic resonance imaging. Additional molecular imaging with (18)F-fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) may improve definition of the gross tumor volume (GTV). In this study, five methods for tumor delineation on FDG-PET are compared with CT-based delineation. Seventy-eight patients with Stages II-IV squamous cell carcinoma of the head and neck area underwent coregistered CT and FDG-PET. The primary tumor was delineated on CT, and five PET-based GTVs were obtained: visual interpretation, applying an isocontour of a standardized uptake value of 2.5, using a fixed threshold of 40% and 50% of the maximum signal intensity, and applying an adaptive threshold based on the signal-to-background ratio. Absolute GTV volumes were compared, and overlap analyses were performed. The GTV method of applying an isocontour of a standardized uptake value of 2.5 failed to provide successful delineation in 45% of cases. For the other PET delineation methods, volume and shape of the GTV were influenced heavily by the choice of segmentation tool. On average, all threshold-based PET-GTVs were smaller than on CT. Nevertheless, PET frequently detected significant tumor extension outside the GTV delineated on CT (15-34% of PET volume). The choice of segmentation tool for target-volume definition of head and neck cancer based on FDG-PET images is not trivial because it influences both volume and shape of the resulting GTV. With adequate delineation, PET may add significantly to CT- and physical examination-based GTV definition.

  11. Rapid Decimation for Direct Volume Rendering

    NASA Technical Reports Server (NTRS)

    Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane

    1997-01-01

    An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.

  12. Boundary fitting based segmentation of fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2015-03-01

    Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.

  13. An automatic method of brain tumor segmentation from MRI volume based on the symmetry of brain and level set method

    NASA Astrophysics Data System (ADS)

    Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su

    2010-02-01

    This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI image volumes. The presented model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of the MRI volume is located and the slices of the volume that potentially contain tumor are identified according to their symmetry; in the slice in which the tumor appears largest, an initial boundary of the tumor is determined by watershed and morphological algorithms. Second, the level set method is applied to the initial boundary, driving the curve to evolve and stop at the appropriate tumor boundary. Finally, the tumor boundary is projected slice by slice onto its adjacent slices as initial boundaries, through the volume, to segment the whole tumor. The experimental results are compared with manual tracing by an expert and show relatively good agreement.

  14. An efficient data structure for three-dimensional vertex based finite volume method

    NASA Astrophysics Data System (ADS)

    Akkurt, Semih; Sahin, Mehmet

    2017-11-01

A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified to fit the requirements of the vertex-based finite volume method. To increase cache efficiency, the data access patterns of the vertex-based finite volume method are investigated, and these data are packed/allocated so that they are close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure also supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU times with those of open-source algorithms.
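
    The abstract does not give implementation details; as a rough illustration of the idea of an edge-based data structure feeding a vertex-centred finite volume residual, the Python sketch below stores, for each edge, the two vertex indices it connects and an associated dual-face normal, and accumulates a placeholder central flux into both end vertices. All arrays, names, and the flux formula are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical edge-based storage: each edge keeps the two vertices it joins and
# the area-weighted normal of the dual face associated with that edge.
edge_vertices = np.array([[0, 1], [1, 2], [2, 0]])                 # (n_edges, 2)
edge_normals = np.array([[0.5, 0.0], [0.0, 0.5], [-0.5, -0.5]])    # (n_edges, dim)

def accumulate_residuals(u, edge_vertices, edge_normals):
    """Loop over the edges once and scatter each edge flux to both end vertices."""
    residual = np.zeros_like(u)
    for (i, j), n in zip(edge_vertices, edge_normals):
        flux = 0.5 * (u[i] + u[j]) * n.sum()   # placeholder central flux
        residual[i] += flux                    # flux leaving vertex i ...
        residual[j] -= flux                    # ... and entering vertex j
    return residual

print(accumulate_residuals(np.array([1.0, 2.0, 3.0]), edge_vertices, edge_normals))
```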

  15. Archimedes Revisited: A Faster, Better, Cheaper Method of Accurately Measuring the Volume of Small Objects

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2005-01-01

    A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
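
    For context (the record above is truncated), the physics behind the suspension technique can be summarized as follows: when the object is held suspended in the water-filled container, the increase in the balance reading equals the mass of the displaced water, so the volume follows directly. This is a restatement of Archimedes' principle, not text from the article.

```latex
% Increase in balance reading \Delta m when the object is suspended in water:
\Delta m \, g = \rho_{\mathrm{water}} V g
\quad\Longrightarrow\quad
V = \frac{\Delta m}{\rho_{\mathrm{water}}}
% e.g. a reading increase of 2.5 g in water (\rho \approx 1.00 g/cm^3) gives V \approx 2.5 cm^3.
```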

  16. A comparison of three methods of setting prescribing budgets, using data derived from defined daily dose analyses of historic patterns of use.

    PubMed Central

    Maxwell, M; Howie, J G; Pryde, C J

    1998-01-01

    BACKGROUND: Prescribing matters (particularly budget setting and research into prescribing variation between doctors) have been handicapped by the absence of credible measures of the volume of drugs prescribed. AIM: To use the defined daily dose (DDD) method to study variation in the volume and cost of drugs prescribed across the seven main British National Formulary (BNF) chapters with a view to comparing different methods of setting prescribing budgets. METHOD: Study of one year of prescribing statistics from all 129 general practices in Lothian, covering 808,059 patients: analyses of prescribing statistics for 1995 to define volume and cost/volume of prescribing for one year for 10 groups of practices defined by the age and deprivation status of their patients, for seven BNF chapters; creation of prescribing budgets for 1996 for each individual practice based on the use of target volume and cost statistics; comparison of 1996 DDD-based budgets with those set using the conventional historical approach; and comparison of DDD-based budgets with budgets set using a capitation-based formula derived from local cost/patient information. RESULTS: The volume of drugs prescribed was affected by the age structure of the practices in BNF Chapters 1 (gastrointestinal), 2 (cardiovascular), and 6 (endocrine), and by deprivation structure for BNF Chapters 3 (respiratory) and 4 (central nervous system). Costs per DDD in the major BNF chapters were largely independent of age, deprivation structure, or fundholding status. Capitation and DDD-based budgets were similar to each other, but both differed substantially from historic budgets. One practice in seven gained or lost more than 100,000 Pounds per annum using DDD or capitation budgets compared with historic budgets. The DDD-based budget, but not the capitation-based budget, can be used to set volume-specific prescribing targets. CONCLUSIONS: DDD-based and capitation-based prescribing budgets can be set using a simple explanatory model and generalizable methods. In this study, both differed substantially from historic budgets. DDD budgets could be created to accommodate new prescribing strategies and raised or lowered to reflect local intentions to alter overall prescribing volume or cost targets. We recommend that future work on setting budgets and researching prescribing variations should be based on DDD statistics. PMID:10024703

  17. Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    2000-12-01

Partial volume effect is an artifact mainly due to the limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially for Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, and (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at-risk, and diseased patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.

  18. A novel application for the cavalieri principle: a stereological and methodological study.

    PubMed

    Altunkaynak, Berrin Zuhal; Altunkaynak, Eyup; Unal, Deniz; Unal, Bunyamin

    2009-08-01

The Cavalieri principle was applied to consecutive pathology sections that were photographed at the same magnification and used to estimate tissue volumes by superimposing a point counting grid on these images. The goal of this study was to perform the Cavalieri method quickly and practically. In this study, 10 adult female Sprague Dawley rats were used. Brain tissue was removed and sampled both systematically and randomly. Brain volumes were estimated using two different methods. First, all brain slices were scanned with an HP ScanJet 3400C scanner, and their images were shown on a PC monitor. Brain volume was then calculated based on these images. Second, all brain slices were photographed at 10× magnification with a microscope camera, and brain volumes were estimated based on these micrographs. There was no statistically significant difference between the volume measurements of the two techniques (P > 0.05; paired-samples t test). This study demonstrates that personal computer scanning of serial tissue sections allows for easy and reliable volume determination based on the Cavalieri method.
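
    For reference, the Cavalieri estimator used with a point-counting grid has the standard form below, where t is the distance between sampled sections, a/p the area associated with each grid point (corrected for magnification), and P_i the number of points hitting the structure on section i. This restates the standard formula rather than quoting the paper.

```latex
\hat{V} \;=\; t \cdot \frac{a}{p} \cdot \sum_{i=1}^{n} P_i
```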

  19. Right ventricular volumes assessed by echocardiographic three-dimensional knowledge-based reconstruction compared with magnetic resonance imaging in a clinical setting.

    PubMed

    Neukamm, Christian; Try, Kirsti; Norgård, Gunnar; Brun, Henrik

    2014-01-01

A technique that uses two-dimensional images to create a knowledge-based, three-dimensional model was tested and compared to magnetic resonance imaging. Measurement of right ventricular volumes and function is important in the follow-up of patients after pulmonary valve replacement. Magnetic resonance imaging is the gold standard for volumetric assessment. Echocardiographic methods have been validated and are attractive alternatives. Thirty patients with tetralogy of Fallot (25 ± 14 years) after pulmonary valve replacement were examined. Magnetic resonance imaging volumetric measurements and echocardiography-based three-dimensional reconstruction were performed. End-diastolic volume, end-systolic volume, and ejection fraction were measured, and the results were compared. Magnetic resonance imaging measurements gave coefficients of variation of 3.5, 4.6, and 5.3 in the intraobserver study and of 3.6, 5.9, and 6.7 in the interobserver study for end-diastolic volume, end-systolic volume, and ejection fraction, respectively. Echocardiographic three-dimensional reconstruction was highly feasible (97%). In the intraobserver study, the corresponding values were 6.0, 7.0, and 8.9, and in the interobserver study 7.4, 10.8, and 13.4. In the comparison of the methods, correlations with magnetic resonance imaging were r = 0.91, 0.91, and 0.38, and the corresponding coefficients of variation were 9.4, 10.8, and 14.7. Echocardiography-derived volumes (mL/m²) were significantly higher than magnetic resonance imaging volumes, by 13.7 ± 25.6 for end-diastolic volume and by 9.1 ± 17.0 for end-systolic volume (both P < .05). The knowledge-based three-dimensional right ventricular volume method was highly feasible. Intra- and interobserver variabilities were satisfactory. Agreement with magnetic resonance imaging measurements for volumes was reasonable but unsatisfactory for ejection fraction. Knowledge-based reconstruction may replace magnetic resonance imaging measurements for serial follow-up, whereas magnetic resonance imaging should be used for surgical decision making.

  20. Finite Volume Method for Pricing European Call Option with Regime-switching Volatility

    NASA Astrophysics Data System (ADS)

    Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM

    2018-03-01

In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
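
    The abstract does not reproduce the equations. A commonly used formulation of the regime-switching Black-Scholes system, which a fitted finite volume scheme would discretize in the asset price S, is sketched below; the per-regime volatilities \sigma_i and the Markov-chain generator entries q_{ij} are notation assumed for this sketch and may differ from the paper's.

```latex
\frac{\partial V_i}{\partial t}
+ \tfrac{1}{2}\,\sigma_i^2 S^2 \frac{\partial^2 V_i}{\partial S^2}
+ r S \frac{\partial V_i}{\partial S}
- r V_i
+ \sum_{j \neq i} q_{ij}\,(V_j - V_i) = 0,
\qquad i = 1,\dots,m,
\qquad V_i(S,T) = \max(S - K, 0).
```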

  1. In vivo estimation of normal amygdala volume from structural MRI scans with anatomical-based segmentation.

    PubMed

    Siozopoulos, Achilleas; Thomaidis, Vasilios; Prassopoulos, Panos; Fiska, Aliki

    2018-02-01

The literature includes a number of studies using structural MRI (sMRI) to determine the volume of the amygdala, which is modified in various pathologic conditions. The reported values vary widely, mainly because of different anatomical approaches to the complex. This study aims at estimating the normal amygdala volume from sMRI scans using a recent anatomical definition described in a study based on post-mortem material. The amygdala volume has been calculated in 106 healthy subjects, using sMRI and anatomical-based segmentation. The resulting volumes have been analyzed for differences related to hemisphere, sex, and age. The mean amygdalar volume was estimated at 1.42 cm³. The mean right amygdala volume has been found larger than the left, but the difference for the raw values was within the limits of the method error. No intersexual differences or age-related alterations have been observed. The study provides a method for determining the boundaries of the amygdala in sMRI scans based on recent anatomical considerations, and an estimate of the mean normal amygdala volume from a quite large number of scans for future use in comparative studies.

  2. The adjusting factor method for weight-scaling truckloads of mixed hardwood sawlogs

    Treesearch

    Edward L. Adams

    1976-01-01

    A new method of weight-scaling truckloads of mixed hardwood sawlogs systematically adjusts for changes in the weight/volume ratio of logs coming into a sawmill. It uses a conversion factor based on the running average of weight/volume ratios of randomly selected sample loads. A test of the method indicated that over a period of time the weight-scaled volume should...

  3. Detection and 3D representation of pulmonary air bubbles in HRCT volumes

    NASA Astrophysics Data System (ADS)

    Silva, Jose S.; Silva, Augusto F.; Santos, Beatriz S.; Madeira, Joaquim

    2003-05-01

Bubble emphysema is a disease characterized by the presence of air bubbles within the lungs. With the purpose of identifying pulmonary air bubbles, two alternative methods were developed using High Resolution Computed Tomography (HRCT) exams. The search volume is confined to the pulmonary volume through a previously developed pulmonary contour detection algorithm. The first detection method follows a slice-by-slice approach and uses selection criteria based on the Hounsfield levels, dimensions, shape, and localization of the bubbles. Candidate regions that do not exhibit axial coherence along at least two sections are excluded. Intermediate sections are interpolated for a more realistic representation of lungs and bubbles. The second detection method, after the pulmonary volume delimitation, follows a fully 3D approach. A global threshold is applied to the entire lung volume, returning candidate regions. 3D morphologic operators are used to remove spurious structures and to circumscribe the bubbles. Bubble representation is accomplished by two alternative methods. The first generates bubble surfaces based on the voxel volumes previously detected; the second assumes that bubbles are approximately spherical and, in order to obtain better 3D representations, fits super-quadrics to the bubble volumes. The fitting process is based on a non-linear least-squares optimization method, in which a super-quadric is adapted to a regular grid of points defined on each bubble. All methods were applied to real and semi-synthetic data in which artificial, randomly deformed bubbles were embedded in the interior of healthy lungs. Quantitative results regarding bubble geometric features are either similar to the a priori known values used in the simulation tests, or indicate clinically acceptable dimensions and locations when dealing with real data.

  4. Calculation of Lung Cancer Volume of Target Based on Thorax Computed Tomography Images using Active Contour Segmentation Method for Treatment Planning System

    NASA Astrophysics Data System (ADS)

    Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur

    2017-06-01

In this research, the target volume of lung cancer was calculated from thorax computed tomography (CT) images. The target volume calculation was performed for the purpose of treatment planning in radiotherapy. The calculation of the target volume covers the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV), and organs at risk (OAR). The target volume was calculated by adding the target areas on each slice and then multiplying the result by the slice thickness. The areas were calculated with digital image processing techniques using the active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes are 577.2 cm³ for GTV, 769.9 cm³ for CTV, 877.8 cm³ for PTV, 618.7 cm³ for OAR 1, 1,162 cm³ for the right OAR 2, and 1,597 cm³ for the left OAR 2. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume based on thorax CT images. This research is expected to help doctors and medical physicists in determining and contouring the target volume quickly and precisely.
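
    The volume computation described above (sum of per-slice target areas multiplied by the slice thickness) can be illustrated with a short sketch; the mask array, pixel area, and slice thickness below are placeholder values, not data from the study.

```python
import numpy as np

def volume_from_masks(masks, pixel_area_cm2, slice_thickness_cm):
    """Sum the segmented area on every slice and multiply by the slice thickness.

    masks: (n_slices, ny, nx) boolean array, e.g. from an active contour step.
    """
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_cm2
    return areas.sum() * slice_thickness_cm

# Toy example: 10 slices, each with a 40x40-pixel target, 0.01 cm^2 pixels, 0.5 cm slices.
toy = np.zeros((10, 100, 100), dtype=bool)
toy[:, 30:70, 30:70] = True
print(volume_from_masks(toy, pixel_area_cm2=0.01, slice_thickness_cm=0.5))  # 80.0 cm^3
```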

  5. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.

    2000-01-01

A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method, voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to the density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module multiplies the common volumes by the corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And a dose calculation module, coupled to the common volume calculation module and the radiation transport module, calculates the radiation doses received by the target mass within the dosel volumes.
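
    As a rough illustration of the dosel-mass step described in the claim (common volumes multiplied by voxel mass densities and summed per dosel), the following sketch assumes the dosel-voxel common volumes have already been computed; all arrays and values are hypothetical.

```python
import numpy as np

def dosel_masses(common_volumes, voxel_densities):
    """common_volumes[d, v]: overlap volume (cm^3) between dosel d and voxel v.
    voxel_densities[v]: mass density (g/cm^3) of the target mass inside voxel v.
    Returns the mass (g) enclosed by each dosel."""
    incremental = common_volumes * voxel_densities[np.newaxis, :]  # per-overlap masses
    return incremental.sum(axis=1)                                 # sum over voxels

common = np.array([[0.2, 0.3, 0.0],
                   [0.0, 0.1, 0.4]])     # 2 dosels x 3 voxels, cm^3
rho = np.array([1.0, 1.05, 0.3])         # g/cm^3, e.g. soft tissue vs. lung
print(dosel_masses(common, rho))         # [0.515, 0.225]
```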

  6. GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.

    PubMed

    Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin

    2017-07-01

Volume reconstruction methods play an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill the remaining empty voxels. However, traditional pixel-nearest-neighbor-based hole-filling fails to reconstruct volumes with high image quality. On the contrary, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality in terms of speckle reduction and detail preservation can be obtained with a parameter setting of kernel window size of [Formula: see text] and kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and a volume with a size of 50 million voxels in our experiment can be reconstructed within 10 seconds.

  7. Tutorial for Collecting and Processing Images of Composite Structures to Determine the Fiber Volume Fraction

    NASA Technical Reports Server (NTRS)

    Conklin, Lindsey

    2017-01-01

Fiber-reinforced composite structures have become more common in aerospace components due to their light weight and structural efficiency. In general, the strength and stiffness of a composite structure are directly related to the fiber volume fraction, which is defined as the fraction of fiber volume to total volume of the composite. The most common method to measure the fiber volume fraction is acid digestion, which is useful when the total weight of the composite and the weight of the fibers can easily be obtained. However, acid digestion is a destructive test, so the material will no longer be available for additional characterization. It can also be difficult to machine out specific components of a composite structure with complex geometries for acid digestion. These disadvantages of acid digestion led the author to develop a method to calculate the fiber volume fraction. The developed method uses optical microscopy to calculate the fiber area fraction based on images of the cross section of the composite. The fiber area fraction and fiber volume fraction are understood to be the same, based on the assumption that the shape and size of the fibers are consistent through the depth of the composite. This tutorial explains the developed method for optically determining fiber area fraction performed at NASA Langley Research Center.
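
    A minimal sketch of the optical approach, under the simplifying assumption that fiber cross sections can be separated from the matrix by a single intensity threshold; the threshold and the synthetic image stand in for a real micrograph and are not part of the NASA tutorial.

```python
import numpy as np

def fiber_area_fraction(cross_section, threshold):
    """Fraction of pixels classified as fiber in a grayscale cross-section image.

    Under the stated assumption (fiber shape and size constant through the depth),
    the area fraction equals the fiber volume fraction."""
    fiber_pixels = cross_section > threshold
    return fiber_pixels.mean()

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(512, 512))   # stand-in for a micrograph
print(f"fiber volume fraction ~ {fiber_area_fraction(image, threshold=0.4):.2f}")
```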

  8. SU-F-207-06: CT-Based Assessment of Tumor Volume in Malignant Pleural Mesothelioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qayyum, F; Armato, S; Straus, C

Purpose: To determine the potential utility of computed tomography (CT) scans in the assessment of physical tumor bulk in malignant pleural mesothelioma patients. Methods: Twenty-eight patients with malignant pleural mesothelioma were used for this study. A CT scan was acquired for each patient prior to surgical resection of the tumor (median time between scan and surgery: 27 days). After surgery, the ex-vivo tumor volume was measured by a pathologist using a water displacement method. Separately, a radiologist identified and outlined the tumor boundary on each CT section that demonstrated tumor. These outlines then were analyzed to determine the total volume of disease present, the number of sections with outlines, and the mean volume of disease per outlined section. Subsets of the initial patient cohort were defined based on these parameters, e.g., cases with at least 30 sections of disease with a mean disease volume of at least 3 mL per section. For each subset, the R-squared correlation between CT-based tumor volume and physical ex-vivo tumor volume was calculated. Results: The full cohort of 28 patients yielded a modest correlation between CT-based tumor volume and the ex-vivo tumor volume, with an R-squared value of 0.66. In general, as the mean tumor volume per section increased, the correlation of CT-based volume with the physical tumor volume improved substantially. For example, when cases with at least 40 CT sections presenting a mean of at least 2 mL of disease per section were evaluated (n=20), the R-squared correlation increased to 0.79. Conclusion: While image-based volumetry for mesothelioma may not generally capture physical tumor volume as accurately as one might expect, there exists a set of conditions in which CT-based volume is highly correlated with the physical tumor volume. SGA receives royalties and licensing fees through the University of Chicago for computer-aided diagnosis technology.

  9. The power-proportion method for intracranial volume correction in volumetric imaging analysis.

    PubMed

    Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S

    2014-01-01

In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power law based method, the power-proportion method, for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
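
    The abstract only names the method. One generic way to implement a power law based ICV correction consistent with the stated relationship is shown below, where \beta is a structure-specific exponent estimated from the data; the exact formulation used in the paper may differ.

```latex
V_{\mathrm{struct}} \;\propto\; \mathrm{ICV}^{\beta}
\quad\Longrightarrow\quad
V_{\mathrm{adj}} \;=\; V_{\mathrm{struct}}
\left( \frac{\overline{\mathrm{ICV}}}{\mathrm{ICV}} \right)^{\beta}
```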

  10. Calculating regional tissue volume for hyperthermic isolated limb perfusion: Four methods compared.

    PubMed

    Cecchin, D; Negri, A; Frigo, A C; Bui, F; Zucchetta, P; Bodanza, V; Gregianin, M; Campana, L G; Rossi, C R; Rastrelli, M

    2016-12-01

Hyperthermic isolated limb perfusion (HILP) can be performed as an alternative to amputation for soft tissue sarcomas and melanomas of the extremities. Melphalan and tumor necrosis factor-alpha are used at a dosage that depends on the volume of the limb. Regional tissue volume is traditionally measured for the purposes of HILP using water displacement volumetry (WDV). Although this technique is considered the gold standard, it is time-consuming and complicated to implement, especially in obese and elderly patients. The aim of the present study was to compare the different methods described in the literature for calculating regional tissue volume in the HILP setting, and to validate an open source software. We reviewed the charts of 22 patients (11 males and 11 females) who had non-disseminated melanoma with in-transit metastases or sarcoma of the lower limb. We calculated the volume of the limb using four different methods: WDV, tape measurements, and segmentation of computed tomography images using the Osirix and Oncentra Masterplan software packages. The overall comparison provided a concordance correlation coefficient (CCC) of 0.92 for the calculations of whole limb volume. In particular, when Osirix was compared with Oncentra (validated for volume measures and used in radiotherapy), the concordance was near-perfect for the calculation of the whole limb volume (CCC = 0.99). With CT-based methods the user can choose a reliable plane for segmentation purposes; they also provide the opportunity to separate the whole limb volume into defined tissue volumes (cortical bone, fat, and water). Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest for assessing the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experimental and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results; the results show that the correction significantly reduces the errors due to the partial volume effect. We also apply the correction method to data from in vivo studies. Because the true blood flow is not known, the results of the correction are tested against common knowledge (such as cardiac output) and conservation of flow; for example, the volume of blood flowing to the brain should equal the volume of blood flowing from the brain. Our measurement results are very convincing.
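
    The quantity being measured can be written compactly. With the linear phase-velocity relationship of the phase contrast technique and a velocity encoding value VENC, the through-plane velocity in voxel i and the volume flow rate over the lumen are, in standard notation (a restatement of common phase contrast relations, not formulas quoted from the dissertation):

```latex
v_i = \frac{\Delta\phi_i}{\pi}\,\mathrm{VENC},
\qquad
Q = \sum_{i \in \mathrm{lumen}} v_i \,\Delta A,
```

    where \Delta\phi_i is the measured phase shift in voxel i and \Delta A is the in-plane voxel area.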

  12. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: Methodological study investigating three different VS measurement methods compared with a reference method based on serial slice volume estimates. The approximate volume estimates were based on: (i) one single diameter, (ii) three orthogonal diameters, or (iii) the maximal slice area. Altogether, 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency of the approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. These include greater retest errors than area-based measurements (25% and 15%, respectively) and the fact that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take growth in multiple dimensions into account instead.

  13. The Health Services Mobility Study Method of Task Analysis and Curriculum Design. Research Report No. 11. Volume 3: Using the Computer to Develop Job Ladders.

    ERIC Educational Resources Information Center

    Gilpatrick, Eleanor

    This document is volume 3 of a four-volume report which describes the components of the Health Services Mobility Study (HSMS) method of task analysis, job ladder design, and curriculum development. Divided into four chapters, volume 3 is a manual for using HSMS computer based statistical procedures to design job structures and job ladders. Chapter…

  14. Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis

    NASA Astrophysics Data System (ADS)

    Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku

    2014-03-01

In this paper, we propose an automated biliary tract extraction method for abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and its intensities are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter based on a local intensity structure analysis method using the eigenvalues of the Hessian matrix for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
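
    As a rough, simplified sketch of a Hessian-eigenvalue filter for dark tubular structures (the exact DLSE response used by the authors is not given in the abstract, so the response below is a generic placeholder), one can form the Hessian from Gaussian derivatives and look for two large positive eigenvalues:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dark_tube_response(volume, sigma=1.5):
    """Generic enhancement of dark tubular structures via Hessian eigenvalues.

    For a dark line on a brighter background, the two largest eigenvalues of the
    Hessian are strongly positive while the smallest stays near zero."""
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1                      # mixed/second Gaussian derivatives
            H[..., i, j] = gaussian_filter(volume, sigma, order=order)
    eig = np.linalg.eigvalsh(H)                # eigenvalues sorted ascending per voxel
    l2, l3 = eig[..., 1], eig[..., 2]
    return np.where((l2 > 0) & (l3 > 0), l2, 0.0)   # placeholder tube measure

vol = np.random.default_rng(1).normal(size=(32, 32, 32))
print(dark_tube_response(vol).shape)           # (32, 32, 32)
```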

  15. Nonrigid registration of 3D longitudinal optical coherence tomography volumes with choroidal neovascularization

    NASA Astrophysics Data System (ADS)

    Wei, Qiangding; Shi, Fei; Zhu, Weifang; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2017-02-01

    In this paper, we propose a 3D registration method for retinal optical coherence tomography (OCT) volumes. The proposed method consists of five main steps: First, a projection image of the 3D OCT scan is created. Second, the vessel enhancement filter is applied on the projection image to detect vessel shadow. Third, landmark points are extracted based on both vessel positions and layer information. Fourth, the coherent point drift method is used to align retinal OCT volumes. Finally, a nonrigid B-spline-based registration method is applied to find the optimal transform to match the data. We applied this registration method on 15 3D OCT scans of patients with Choroidal Neovascularization (CNV). The Dice coefficients (DSC) between layers are greatly improved after applying the nonrigid registration.

  16. Measurement of the uterus and gestation sac by ultrasound in early normal and abnormal pregnancy.

    PubMed

    Chandra, M; Evans, L J; Duff, G B

    1981-01-14

    Uterine volumes measured by two different ultrasonic methods, and gestation sac volumes in early normal pregnancy are reported. The results obtained for uterine volume measurements are compared. Methods using measurements obtained from only a longitudinal scan were simpler but slightly less accurate. Uterine volumes were also calculated in a series of patients with pregnancy complicated by threatened abortion. The accuracy of the prediction of the outcome of the pregnancy, based solely on uterine volume was 71 percent. Uterine volume measurement is most useful in identifying cases of missed abortion where the period of gestation is known.

  17. Spatial feature analysis of a cosmic-ray sensor for measuring the soil water content: Comparison of four weighting methods

    NASA Astrophysics Data System (ADS)

    Cai, Jingya; Pang, Zhiguo; Fu, Jun'e.

    2018-04-01

    To quantitatively analyze the spatial features of a cosmic-ray sensor (CRS) (i.e., the measurement support volume of the CRS and the weight of the in situ point-scale soil water content (SWC) in terms of the regionally averaged SWC derived from the CRS) in measuring the SWC, cooperative observations based on CRS, oven drying and frequency domain reflectometry (FDR) methods are performed at the point and regional scales in a desert steppe area of the Inner Mongolia Autonomous Region. This region is flat with sparse vegetation cover consisting of only grass, thereby minimizing the effects of terrain and vegetation. Considering the two possibilities of the measurement support volume of the CRS, the results of four weighting methods are compared with the SWC monitored by FDR within an appropriate measurement support volume. The weighted average calculated using the neutron intensity-based weighting method (Ni weighting method) best fits the regionally averaged SWC measured by the CRS. Therefore, we conclude that the gyroscopic support volume and the weights determined by the Ni weighting method are the closest to the actual spatial features of the CRS when measuring the SWC. Based on these findings, a scale transformation model of the SWC from the point scale to the scale of the CRS measurement support volume is established. In addition, the spatial features simulated using the Ni weighting method are visualized by developing a software system.

  18. Application of the control volume mixed finite element method to a triangular discretization

    USGS Publications Warehouse

    Naff, R.L.

    2012-01-01

A two-dimensional control volume mixed finite element method is applied to the elliptic equation. Discretization of the computational domain is based on triangular elements. Shape functions and test functions are formulated on the basis of an equilateral reference triangle with unit edges. A pressure support based on the linear interpolation of elemental edge pressures is used in this formulation. Comparisons are made between results from the standard mixed finite element method and this control volume mixed finite element method. Published 2011. This article is a US Government work and is in the public domain in the USA. © 2012 John Wiley & Sons, Ltd.

  19. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    PubMed

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image − mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that the repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.

  20. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    PubMed Central

    Dou, Chao

    2016-01-01

    The storage volume of internet data center is one of the classical time series. It is very valuable to predict the storage volume of a data center for the business value. However, the storage volume series from a data center is always “dirty,” which contains the noise, missing data, and outliers, so it is necessary to extract the main trend of storage volume series for the future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the “dirty” data; then the cubic spline interpolation and average method are used to reconstruct the main trend. The developed method is applied in the storage volume series of internet data center. The experiment results show that the developed method can estimate the main trend of storage volume series accurately and make great contribution to predict the future volume value. 
 PMID:28090205
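
    The pipeline described above (Kalman filtering to suppress noisy or missing samples, then cubic spline interpolation to rebuild the main trend on a regular grid) can be sketched as follows; the scalar random-walk Kalman model, noise parameters, and synthetic series are assumptions of this illustration rather than details from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_smooth(y, q=1e-3, r=1.0):
    """Scalar random-walk Kalman filter: returns filtered estimates of the level."""
    x, p = y[0], 1.0
    out = np.empty_like(y)
    for k, z in enumerate(y):
        p = p + q                      # predict (random-walk state)
        if np.isfinite(z):             # skip missing or removed samples
            kgain = p / (p + r)        # update with the measurement
            x = x + kgain * (z - x)
            p = (1.0 - kgain) * p
        out[k] = x
    return out

# Irregularly sampled, noisy storage-volume-like series with one missing value.
t = np.sort(np.random.default_rng(2).uniform(0, 30, 80))
y = 100 + 2.0 * t + np.random.default_rng(3).normal(0, 5, t.size)
y[10] = np.nan                                     # "dirty" sample

level = kalman_smooth(y)
trend = CubicSpline(t, level)                      # reconstruct the main trend
t_regular = np.linspace(t.min(), t.max(), 200)     # resample on a regular grid
print(trend(t_regular)[:3])
```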

  1. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center.

    PubMed

    Miao, Beibei; Dou, Chao; Jin, Xuebo

    2016-01-01

    The storage volume of internet data center is one of the classical time series. It is very valuable to predict the storage volume of a data center for the business value. However, the storage volume series from a data center is always "dirty," which contains the noise, missing data, and outliers, so it is necessary to extract the main trend of storage volume series for the future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which the Kalman filter is used to remove the "dirty" data; then the cubic spline interpolation and average method are used to reconstruct the main trend. The developed method is applied in the storage volume series of internet data center. The experiment results show that the developed method can estimate the main trend of storage volume series accurately and make great contribution to predict the future volume value. 

  2. Quantifying Abdominal Adipose Tissue and Thigh Muscle Volume and Hepatic Proton Density Fat Fraction: Repeatability and Accuracy of an MR Imaging-based, Semiautomated Analysis Method.

    PubMed

    Middleton, Michael S; Haufe, William; Hooker, Jonathan; Borga, Magnus; Dahlqvist Leinhard, Olof; Romu, Thobias; Tunón, Patrik; Hamilton, Gavin; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B

    2017-05-01

Purpose: To determine the repeatability and accuracy of a commercially available magnetic resonance (MR) imaging-based, semiautomated method to quantify abdominal adipose tissue and thigh muscle volume and hepatic proton density fat fraction (PDFF). Materials and Methods: This prospective study was institutional review board-approved and HIPAA compliant. All subjects provided written informed consent. Inclusion criteria were age of 18 years or older and willingness to participate. The exclusion criterion was contraindication to MR imaging. Three-dimensional T1-weighted dual-echo body-coil images were acquired three times. Source images were reconstructed to generate water and calibrated fat images. Abdominal adipose tissue and thigh muscle were segmented, and their volumes were estimated by using a semiautomated method and, as a reference standard, a manual method. Hepatic PDFF was estimated by using a confounder-corrected chemical shift-encoded MR imaging method with hybrid complex-magnitude reconstruction and, as a reference standard, MR spectroscopy. Tissue volume and hepatic PDFF intra- and interexamination repeatability were assessed by using intraclass correlation and coefficient of variation analysis. Tissue volume and hepatic PDFF accuracy were assessed by means of linear regression with the respective reference standards. Results: Adipose and thigh muscle tissue volumes of 20 subjects (18 women; age range, 25-76 years; body mass index range, 19.3-43.9 kg/m²) were estimated by using the semiautomated method. Intra- and interexamination intraclass correlation coefficients were 0.996-0.998 and coefficients of variation were 1.5%-3.6%. For hepatic MR imaging PDFF, intra- and interexamination intraclass correlation coefficients were greater than or equal to 0.994 and coefficients of variation were less than or equal to 7.3%. In the regression analyses of manual versus semiautomated volume and spectroscopy versus MR imaging PDFF, slopes and intercepts were close to the identity line, and coefficients of determination (R²) in multivariate analysis ranged from 0.744 to 0.994. Conclusion: This MR imaging-based, semiautomated method provides high repeatability and accuracy for estimating abdominal adipose tissue and thigh muscle volumes and hepatic PDFF. © RSNA, 2017.

  3. GPU-based multi-volume ray casting within VTK for medical applications.

    PubMed

    Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-03-01

Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the Visualization Toolkit (VTK) pipeline, with the ability to apply different operations (e.g., transformations, clipping, and cropping) to each volume separately. The implemented method is cross-platform (Windows, Linux, and Mac OS X) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (> 15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets. The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
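
    The core of the MVRC described above is front-to-back α-blending of the samples taken along each ray; a minimal single-ray sketch of that compositing rule is shown below (the colors and opacities are invented for illustration, and the per-step merging of the samples from multiple volumes is assumed to have happened already).

```python
import numpy as np

def composite_front_to_back(samples):
    """samples: iterable of (rgb, alpha) pairs, one per step along the ray,
    ordered from the eye into the scene. Returns the final pixel color and opacity."""
    color = np.zeros(3)
    alpha = 0.0
    for rgb, a in samples:
        color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:            # early ray termination
            break
    return color, alpha

ray = [((1.0, 0.0, 0.0), 0.3), ((0.0, 1.0, 0.0), 0.5), ((0.0, 0.0, 1.0), 0.8)]
print(composite_front_to_back(ray))
```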

  4. Novel imaging analysis system to measure the spatial dimension of engineered tissue construct.

    PubMed

    Choi, Kyoung-Hwan; Yoo, Byung-Su; Park, So Ra; Choi, Byung Hyune; Min, Byoung-Hyun

    2010-02-01

The measurement of the spatial dimensions of tissue-engineered constructs is very important for their clinical applications. In this study, a novel method to measure the volume of tissue-engineered constructs was developed using iterative mathematical computations. The method measures and analyzes three-dimensional (3D) parameters of a construct to estimate its actual volume using a sequence of software-based mathematical algorithms. The mathematical algorithm is composed of two stages: shape extraction and determination of volume. The shape extraction utilized 3D images of a construct (length, width, and thickness) captured by a high-quality charge-coupled device (CCD) camera. The surface of the 3D images was then divided into fine sections. The area of each section was measured and combined to obtain the total surface area. The 3D volume of the target construct was then mathematically obtained using its total surface area and thickness. The accuracy of the measurement method was verified by comparing the results with those obtained from the hydrostatic weighing method (Korea Research Institute of Standards and Science [KRISS], Korea). The mean difference in volume between the two methods was 0.0313 ± 0.0003% (n = 5, P = 0.523), with no statistically significant difference. In conclusion, our image-based spatial measurement system is a reliable and easy method to obtain an accurate 3D volume of a tissue-engineered construct.

  5. A radiographic method to estimate lung volume and its use in small mammals.

    PubMed

    Canals, Mauricio; Olivares, Ricardo; Rosenmann, Mario

    2005-01-01

In this paper we develop a method to estimate lung volume using chest x-rays of small mammals. We applied this method to assess the lung volume of several rodents. We showed that a good estimator of the lung volume is V*L = 0.496 × VRX ≈ 1/2 × VRX, where VRX is a measurement obtained from the x-ray that represents the volume of a rectangular box containing the lungs and mediastinal organs. The proposed formula may be interpreted as the volume of an ellipsoid formed by both lungs joined at their bases. When that relationship was used to estimate lung volume, values similar to those expected from the allometric relationship were found in four rodents. In two others, M. musculus and R. norvegicus, lung volume was similar to reported data, although values were lower than expected.
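
    The coefficient of roughly one half is consistent with the ellipsoid interpretation given in the abstract: an ellipsoid inscribed in a rectangular box with edge lengths a, b, c (so that VRX = abc) has volume

```latex
V_{\mathrm{ellipsoid}}
= \frac{4}{3}\pi \cdot \frac{a}{2}\cdot\frac{b}{2}\cdot\frac{c}{2}
= \frac{\pi}{6}\,abc \approx 0.524\,V_{RX},
```

    which is close to the empirically fitted coefficient of 0.496.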

  6. Verification of aerial photo stand volume tables for southeast Alaska.

    Treesearch

    Theodore S. Setzer; Bert R. Mead

    1988-01-01

    Aerial photo volume tables are used in the multilevel sampling system of Alaska Forest Inventory and Analysis. These volume tables are presented with a description of the data base and methods used to construct the tables. Volume estimates compiled from the aerial photo stand volume tables and associated ground-measured values are compared and evaluated.

  7. Curvature computation in volume-of-fluid method based on point-cloud sampling

    NASA Astrophysics Data System (ADS)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

This work proposes a novel approach to computing interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interface tension force estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over the years to improve the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction of spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
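
    For context, the quantities whose accuracy is at stake are the interface normal and curvature derived from the volume fraction field α in standard VOF practice (a restatement of the usual relations, not of the authors' point-cloud estimator):

```latex
\mathbf{n} = \frac{\nabla \alpha}{\lvert \nabla \alpha \rvert},
\qquad
\kappa = -\,\nabla \cdot \mathbf{n}.
```

    It is the noise in ∇α across the abruptly varying interface region that the point-cloud representation is intended to avoid.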

  8. SU-C-BRA-02: Gradient Based Method of Target Delineation On PET/MR Image of Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Chera, B; Falchook, A

    2015-06-15

    Purpose: Validate the consistency of a gradient-based segmentation tool to facilitate accurate delineation of PET/CT-based GTVs in head and neck cancers by comparing against hybrid PET/MR-derived GTV contours. Materials and Methods: A total of 18 head and neck target volumes (10 primary and 8 nodal) were retrospectively contoured using a gradient-based segmentation tool by two observers. Each observer independently contoured each target five times. Inter-observer variability was evaluated via absolute percent differences. Intra-observer variability was examined by percentage uncertainty. All target volumes were also contoured using the SUV percent threshold method. The thresholds were explored case by case so itsmore » derived volume matched with the gradient-based volume. Dice similarity coefficients (DSC) were calculated to determine overlap of PET/CT GTVs and PET/MR GTVs. Results: The Levene’s test showed there was no statistically significant difference of the variances between the observer’s gradient-derived contours. However, the absolute difference between the observer’s volumes was 10.83%, with a range from 0.39% up to 42.89%. PET-avid regions with qualitatively non-uniform shapes and intensity levels had a higher absolute percent difference near 25%, while regions with uniform shapes and intensity levels had an absolute percent difference of 2% between observers. The average percentage uncertainty between observers was 4.83% and 7%. As the volume of the gradient-derived contours increased, the SUV threshold percent needed to match the volume decreased. Dice coefficients showed good agreement of the PET/CT and PET/MR GTVs with an average DSC value across all volumes at 0.69. Conclusion: Gradient-based segmentation of PET volume showed good consistency in general but can vary considerably for non-uniform target shapes and intensity levels. PET/CT-derived GTV contours stemming from the gradient-based tool show good agreement with the anatomically and metabolically more accurate PET/MR-derived GTV contours, but tumor delineation accuracy can be further improved with the use PET/MR.« less

  9. Application of State Quantization-Based Methods in HEP Particle Transport Simulation

    NASA Astrophysics Data System (ADS)

    Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo

    2017-10-01

Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is accurate and efficient particle transport in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, while it was about 10 times slower in the case with zero volume boundaries.
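
    The boundary-crossing detection the abstract refers to reduces, with QSS dense output, to root-finding on a low-order polynomial segment; the sketch below finds the earliest crossing of a planar boundary for one coordinate (the polynomial coefficients and boundary value are invented for illustration).

```python
import numpy as np

# QSS-style dense output: one coordinate of the track is locally represented by a
# polynomial segment x(t) = c0 + c1*t + c2*t^2 (coefficients are made-up values).
coeffs = [0.0, 1.2, -0.05]          # c0, c1, c2 in the segment's local time

def crossing_time(coeffs, boundary, t_max):
    """Earliest local time in (0, t_max] at which x(t) reaches the boundary plane."""
    c = np.array(coeffs, dtype=float)
    c[0] -= boundary                          # roots of x(t) - boundary = 0
    roots = np.roots(c[::-1])                 # np.roots expects highest degree first
    real = roots[np.isreal(roots)].real
    valid = real[(real > 0) & (real <= t_max)]
    return valid.min() if valid.size else None

print(crossing_time(coeffs, boundary=5.0, t_max=30.0))   # ~5.37
```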

  10. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

The study proposes a novel approach for segmentation and visualization, plus value-added surface area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or the whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment the 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated using techniques of linear algebra and surface integration. Experimental results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
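
    Once a closed, consistently oriented triangle mesh of the object has been produced (e.g., by marching cubes), its surface area and enclosed volume follow from simple sums over the triangles, the latter via the signed-tetrahedron form of the divergence theorem; this generic sketch is not the authors' implementation.

```python
import numpy as np

def surface_area_and_volume(vertices, faces):
    """vertices: (n, 3) float array; faces: (m, 3) int array of a closed,
    consistently oriented triangle mesh (e.g. from marching cubes)."""
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # Signed tetrahedra against the origin; abs() assumes a closed surface.
    volume = np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
    return area, volume

# Unit right tetrahedron: volume 1/6, surface area 1.5 + sqrt(3)/2.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(surface_area_and_volume(verts, tris))
```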

  11. Two-way coupled SPH and particle level set fluid simulation.

    PubMed

    Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald

    2008-01-01

Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g., RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two-way mixing between dense SPH volumes and grid-based liquid representations.

  12. Force estimation from OCT volumes using 3D CNNs.

    PubMed

    Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander

    2018-07-01

    Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as inputs and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization across different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
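
    As a hedged sketch of the architecture family described (not the authors' exact network, which uses a deep residual design with bottlenecks), the PyTorch snippet below shows a minimal Siamese 3D CNN: one shared convolutional encoder processes the undeformed reference volume and the deformed sample volume, and the concatenated features are regressed to the three force components. All layer sizes are illustrative assumptions.

      import torch
      import torch.nn as nn

      class SiameseForceNet(nn.Module):
          def __init__(self):
              super().__init__()
              # Shared 3D encoder applied to both the reference and the deformed volume.
              self.encoder = nn.Sequential(
                  nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool3d(1), nn.Flatten(),
              )
              self.regressor = nn.Sequential(
                  nn.Linear(128, 64), nn.ReLU(),
                  nn.Linear(64, 3),          # fx, fy, fz
              )

          def forward(self, reference, deformed):
              f_ref = self.encoder(reference)   # (B, 64)
              f_def = self.encoder(deformed)    # (B, 64)
              return self.regressor(torch.cat([f_ref, f_def], dim=1))

      # Example: batch of 4 volume pairs of size 32^3.
      net = SiameseForceNet()
      force = net(torch.randn(4, 1, 32, 32, 32), torch.randn(4, 1, 32, 32, 32))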

  13. Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume

    NASA Astrophysics Data System (ADS)

    Xiao, Mengting; Li, Cheng

    2018-01-01

    Based on the actual development of air cargo, a multi-dimensional fuzzy regression method is used to identify the influencing factors; the three most important are GDP, total fixed-asset investment, and regular flight route mileage. Combining a systems viewpoint with analogy methods, fuzzy numbers and multiple regression are then used to predict civil aviation cargo volume. Comparison with the 13th Five-Year Plan for China's Civil Aviation Development (2016-2020) shows that the method can effectively improve forecasting accuracy and reduce forecasting risk, and that the model is feasible for predicting civil aviation freight volume, with high practical significance and operability.
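
    As an illustrative stand-in for the multi-dimensional fuzzy regression described (ordinary least squares is shown instead of the fuzzy-number treatment, and all numbers are hypothetical placeholders rather than data from the paper), the following fits cargo volume against the three identified factors and produces a point forecast.

      import numpy as np

      # Columns: GDP, total fixed-asset investment, regular flight route mileage
      # (units and values are hypothetical placeholders, not data from the paper).
      X = np.array([[68.9, 56.2, 531.7],
                    [74.4, 60.6, 634.5],
                    [82.7, 64.1, 748.3],
                    [90.0, 64.6, 837.9]])
      y = np.array([629.3, 668.0, 705.8, 738.5])   # civil aviation cargo volume

      A = np.column_stack([np.ones(len(X)), X])    # add intercept column
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      x_new = np.array([1.0, 99.1, 65.0, 948.0])   # hypothetical future factor values
      print("forecast cargo volume:", x_new @ coef)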

  14. Modeling and predicting tumor response in radioligand therapy.

    PubMed

    Kletting, Peter; Thieme, Anne; Eberhardt, Nina; Rinscheid, Andreas; D'Alessandria, Calogero; Allmann, Jakob; Wester, Hans-Jürgen; Tauber, Robert; Beer, Ambros J; Glatting, Gerhard; Eiber, Matthias

    2018-05-10

    The aim of this work was to develop a theranostic method that allows prediction of PSMA-positive tumor volume after radioligand therapy (RLT), based on a pre-therapeutic PET/CT measurement and physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) modeling, using the example of RLT with 177Lu-labeled PSMA for imaging and therapy (PSMA I&T). Methods: A recently developed PBPK model for 177Lu PSMA I&T RLT was extended to account for tumor (exponential) growth and reduction due to irradiation (linear quadratic model). Data of 13 patients with metastatic castration-resistant prostate cancer (mCRPC) were retrospectively analyzed. Pharmacokinetic/pharmacodynamic parameters were simultaneously fitted in a Bayesian framework to PET/CT activity concentrations, planar scintigraphy data and tumor volumes prior to and 6 weeks after therapy. The method was validated using the leave-one-out jackknife method. The tumor volume post therapy was predicted based on pre-therapy PET/CT imaging and PBPK/PD modeling. Results: The relative deviation of the predicted and measured tumor volume for PSMA-positive tumor cells (6 weeks post therapy) was 1 ± 40% excluding one patient (PSA negative) from the population. The radiosensitivity for the PSA-positive patients was determined to be 0.0172 ± 0.0084 Gy⁻¹. Conclusion: The proposed method is the first attempt to use only PET/CT and modeling methods to predict the PSMA-positive tumor volume after radioligand therapy. Internal validation shows that this is feasible with acceptable accuracy. Improvement of the method and external validation of the model are ongoing. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
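
    For orientation only, the sketch below combines the two ingredients named in the abstract, exponential tumor growth and a linear-quadratic cell-kill term, into a simple volume prediction. The single-dose treatment of the absorbed dose and the parameter values are assumptions (the radiosensitivity is merely of the order reported), not the authors' fitted PBPK/PD model.

      import numpy as np

      def predict_tumor_volume(v_pre, dose_gy, t_days,
                               alpha=0.017, alpha_beta=10.0, growth_rate=0.005):
          """Post-therapy PSMA-positive tumor volume: linear-quadratic kill applied
          to the absorbed dose, followed by exponential regrowth over t_days.
          alpha in 1/Gy, alpha/beta in Gy, growth_rate in 1/day (all illustrative)."""
          beta = alpha / alpha_beta
          surviving_fraction = np.exp(-(alpha * dose_gy + beta * dose_gy ** 2))
          return v_pre * surviving_fraction * np.exp(growth_rate * t_days)

      print(predict_tumor_volume(v_pre=120.0, dose_gy=25.0, t_days=42))  # mL, 6 weeks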

  15. Comparison of hand and semiautomatic tracing methods for creating maxillofacial artificial organs using sequences of computed tomography (CT) and cone beam computed tomography (CBCT) images.

    PubMed

    Szabo, Bence T; Aksoy, Seçil; Repassy, Gabor; Csomo, Krisztian; Dobo-Nagy, Csaba; Orhan, Kaan

    2017-06-09

    The aim of this study was to compare the paranasal sinus volumes obtained by manual and semiautomatic imaging software programs using both CT and CBCT imaging. A total of 121 computed tomography (CT) and 119 cone beam computed tomography (CBCT) examinations were selected from the databases of the authors' institutes. The Digital Imaging and Communications in Medicine (DICOM) images were imported into 3-dimensional imaging software, in which hand mode and semiautomatic tracing methods were used to measure the volumes of both maxillary sinuses and the sphenoid sinus. The determined volumetric means were compared to previously published averages. Isometric CBCT-based volume determination results were closer to the real volume conditions, whereas the non-isometric CT-based volume measurements consistently yielded lower volumes. By comparing the 2 volume measurement modes, the values gained from hand mode were closer to the literature data. Furthermore, CBCT-based image measurement results corresponded to the known averages. Our results suggest that CBCT images provide reliable volumetric information that can be depended on for artificial organ construction, and which may aid the guidance of the operator prior to or during the intervention.

  16. Foot volume estimates based on a geometric algorithm in comparison to water displacement.

    PubMed

    Mayrovitz, H N; Sims, N; Litwin, B; Pfister, S

    2005-03-01

    Assessing lower extremity limb volume and its change during and after lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although leg volumes may be determined by tape measure and other methods, there is no metric method to routinely assess foot volumes. Exclusion of foot volumes can under- or overestimate therapeutic progress. Our aim was to develop and test a metric measurement procedure and algorithm for practicing therapists to use to estimate foot volumes. The method uses a caliper and ruler to measure foot dimensions at standardized locations and calculates foot volume (VM) by a mathematical algorithm. VM was compared to volumes measured by water displacement (Vw) in 30 subjects (60 feet) using regression analysis and limits of agreement (LOA). Vw and VM (mean ± sd) were similar (857 ± 150 ml vs. 859 ± 154 ml) and highly correlated (VM = 1.00 Vw + 1.67 ml, r = 0.965, p < 0.001). The LOA for absolute volume differences and percentages were ± 79.6 ml and ± 9.28%, respectively. These results indicate that this metric method can be a useful alternative to water displacement when foot volumes are needed but water displacement is contraindicated, impractical to implement, too time-consuming, or unavailable.
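
    The agreement analysis used here is straightforward to reproduce; the sketch below computes the regression line and Bland-Altman style 95% limits of agreement for paired foot volumes (vm from the algorithm, vw from water displacement). The arrays are placeholders, not the study's measurements.

      import numpy as np

      def agreement(vm, vw):
          """Regression of algorithm volumes on water-displacement volumes plus
          95% limits of agreement (1.96 SD) for the paired differences."""
          vm, vw = np.asarray(vm, float), np.asarray(vw, float)
          slope, intercept = np.polyfit(vw, vm, 1)
          r = np.corrcoef(vw, vm)[0, 1]
          diff = vm - vw
          loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
          return slope, intercept, r, diff.mean(), loa

      vm = [850, 910, 799, 1020, 760]            # ml, illustrative values only
      vw = [845, 905, 812, 1008, 771]
      print(agreement(vm, vw))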

  17. Multi-views Fusion CNN for Left Ventricular Volumes Estimation on Cardiac MR Images.

    PubMed

    Luo, Gongning; Dong, Suyu; Wang, Kuanquan; Zuo, Wangmeng; Cao, Shaodong; Zhang, Henggui

    2017-10-13

    Left ventricular (LV) volumes estimation is a critical procedure for cardiac disease diagnosis. The objective of this paper is to address the direct LV volumes prediction task. In this paper, we propose a direct volumes prediction method based on end-to-end deep convolutional neural networks (CNN). We study the end-to-end LV volumes prediction method in terms of data preprocessing, network structure, and multi-views fusion strategy. The main contributions of this paper are the following aspects. First, we propose a new data preprocessing method for cardiac magnetic resonance (CMR). Second, we propose a new network structure for end-to-end LV volumes estimation. Third, we explore the representational capacity of different slices and propose a fusion strategy to improve the prediction accuracy. The evaluation results show that the proposed method outperforms other state-of-the-art LV volumes estimation methods on the openly accessible benchmark datasets. The clinical indexes derived from the predicted volumes agree well with the ground truth (EDV: R = 0.974, RMSE = 9.6 ml; ESV: R = 0.976, RMSE = 7.1 ml; EF: R = 0.828, RMSE = 4.71%). Experimental results prove that the proposed method has high accuracy and efficiency on the LV volumes prediction task. The proposed method not only has application potential for cardiac disease screening on large-scale CMR data, but can also be extended to other medical image research fields.

  18. SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Wu, G; Gao, Y

    2016-06-15

    Purpose: To explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and 18F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape's "complexity score". This complexity score was calculated for standard geometric shapes and MC-generated target volumes in PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference in volumes between the MC volumes and gradient-based volumes was 11% (0.4%-48.4%). The complexity score showed strong correlation with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volumes were 6% and 29%, respectively. The former group is more isolated and the latter is more surrounded by tissues with relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
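
    A minimal sketch of the described metric, assuming a closed 2D outline sampled as ordered points (which may differ from the authors' exact implementation): curvature is evaluated along the contour with periodic finite differences, and the standard deviation of that curvature vector serves as the complexity score.

      import numpy as np

      def complexity_score(x, y):
          """Standard deviation of the signed curvature along a closed contour given
          by ordered point arrays x, y (periodic central finite differences)."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
          dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
          ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
          ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
          curvature = (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)
          return np.std(curvature)

      # Example: a circle should score near zero (nearly constant curvature).
      t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      print(complexity_score(np.cos(t), np.sin(t)))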

  19. Model-based flow rate control for an orifice-type low-volume air sampler

    USDA-ARS?s Scientific Manuscript database

    The standard method of measuring air suspended particulate matter concentration per volume of air consists of continuously drawing a defined volume of air across a filter over an extended period of time, then measuring the mass of the filtered particles and dividing it by the total volume sampled ov...

  20. Infusion volume control and calculation using metronome and drop counter based intravenous infusion therapy helper.

    PubMed

    Park, Kyungnam; Lee, Jangyoung; Kim, Soo-Young; Kim, Jinwoo; Kim, Insoo; Choi, Seung Pill; Jeong, Sikyung; Hong, Sungyoup

    2013-06-01

    This study assessed the method of fluid infusion control using an IntraVenous Infusion Controller (IVIC). Four methods of infusion control (dial flow controller, IV set without correction, IV set with correction and IVIC correction) were used, and the volume infused with each technique was measured at two infusion rates. The infused fluid volume with a dial flow controller was significantly larger than with the other methods. The infused fluid volume was significantly smaller with an IV set without correction over time. Regarding the concordance correlation coefficient (CCC) of infused fluid volume in relation to a target volume, IVIC correction was shown to have the highest level of agreement. The flow rate measured in check mode showed good agreement with the volume of collected fluid after passing through the IV system. Thus, an IVIC could assist in providing accurate infusion control. © 2013 Wiley Publishing Asia Pty Ltd.
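
    For reference, a small sketch of the concordance correlation coefficient used to compare infused and target volumes (Lin's CCC); the example numbers are illustrative, not data from the study.

      import numpy as np

      def concordance_ccc(x, y):
          """Lin's concordance correlation coefficient between paired measurements."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return 2 * cov / (vx + vy + (mx - my) ** 2)

      target = [100, 200, 300, 400, 500]          # mL, illustrative
      infused = [98, 204, 297, 405, 492]
      print(concordance_ccc(target, infused))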

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, H.; Fullwood, R.; Glancy, J.

    This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses the Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapters 3.2.1 and 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with the equations given in Volume I, Chapter 3.

  2. Prediction of sonic boom from experimental near-field overpressure data. Volume 2: Data base construction

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Reiners, S. J.; Hague, D. S.

    1975-01-01

    A computerized method for storing, updating and augmenting experimentally determined overpressure signatures has been developed. A data base of pressure signatures for a shuttle type vehicle has been stored. The data base has been used for the prediction of sonic boom with the program described in Volume I.

  3. Modeling dam-break flows using finite volume method on unstructured grid

    USDA-ARS?s Scientific Manuscript database

    Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...
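
    As a hedged, one-dimensional structured analog of the schemes described (the models above use 2D unstructured grids and approximate Riemann solvers), the sketch below advances the shallow water equations with a first-order finite volume update and a Rusanov (local Lax-Friedrichs) numerical flux on a dam-break initial state. All values are illustrative.

      import numpy as np

      g = 9.81

      def flux(U):
          h, hu = U
          u = hu / h
          return np.array([hu, hu * u + 0.5 * g * h ** 2])

      def rusanov(UL, UR):
          # Local Lax-Friedrichs intercell flux with the largest local wave speed.
          sL = abs(UL[1] / UL[0]) + np.sqrt(g * UL[0])
          sR = abs(UR[1] / UR[0]) + np.sqrt(g * UR[0])
          smax = max(sL, sR)
          return 0.5 * (flux(UL) + flux(UR)) - 0.5 * smax * (UR - UL)

      nx, dx, t_end = 200, 1.0 / 200, 0.05
      h0 = np.where(np.linspace(0, 1, nx) < 0.5, 1.0, 0.1)   # dam-break initial depth
      U = np.stack([h0, np.zeros(nx)])                        # conserved state [h, h*u]
      t = 0.0
      while t < t_end:
          c = np.abs(U[1] / U[0]) + np.sqrt(g * U[0])
          dt = 0.4 * dx / c.max()                             # CFL-limited time step
          F = np.stack([rusanov(U[:, i], U[:, i + 1]) for i in range(nx - 1)], axis=1)
          U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])      # finite volume update
          t += dt
      print("max depth:", U[0].max(), "min depth:", U[0].min())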

  4. Does Categorization Method Matter in Exploring Volume-Outcome Relation? A Multiple Categorization Methods Comparison in Coronary Artery Bypass Graft Surgery Surgical Site Infection.

    PubMed

    Yu, Tsung-Hsien; Tung, Yu-Chi; Chung, Kuo-Piao

    2015-08-01

    Volume-infection relation studies have been published for high-risk surgical procedures, although the conclusions remain controversial. Inconsistent results may be caused by inconsistent categorization methods, the definitions of service volume, and different statistical approaches. The purpose of this study was to examine whether a relation exists between provider volume and coronary artery bypass graft (CABG) surgical site infection (SSI) using different categorization methods. A population-based cross-sectional multi-level study was conducted. A total of 10,405 patients who received CABG surgery between 2006 and 2008 in Taiwan were recruited. The outcome of interest was surgical site infection for CABG surgery. The associations among several patient, surgeon, and hospital characteristics were examined. Surgeons' and hospitals' service volume was defined as the cumulative CABG service volume in the previous year for each CABG operation and categorized by three types of approaches: continuous, quartile, and k-means clustering. The results of multi-level mixed effects modeling showed that hospital volume had no association with SSI. Although the relation between surgeon volume and surgical site infection was negative, it was inconsistent among the different categorization methods. Categorization of service volume is an important issue in volume-infection studies. The findings of the current study suggest that different categorization methods might influence the relation between volume and SSI. The selection of an optimal cutoff point should be taken into account in future research.
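
    To make the categorization schemes concrete, the sketch below shows the k-means variant on a one-dimensional vector of annual provider volumes (scikit-learn is used for illustration; the cluster count and the volume values are assumptions, not study data).

      import numpy as np
      from sklearn.cluster import KMeans

      # Hypothetical cumulative CABG volumes in the previous year, one per operation.
      volumes = np.array([12, 35, 8, 60, 22, 75, 40, 15, 90, 55], dtype=float)

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(volumes.reshape(-1, 1))
      # Relabel clusters as low/medium/high by ordering the cluster centers.
      order = np.argsort(km.cluster_centers_.ravel())
      category = np.array(["low", "medium", "high"])[np.argsort(order)][km.labels_]
      print(list(zip(volumes, category)))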

  5. Connection method of separated luminal regions of intestine from CT volumes

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Hirooka, Yoshiki; Goto, Hidemi; Mori, Kensaku

    2015-03-01

    This paper proposes a connection method of separated luminal regions of the intestine for Crohn's disease diagnosis. Crohn's disease is an inflammatory disease of the digestive tract. Capsule or conventional endoscopic diagnosis is performed for Crohn's disease diagnosis. However, parts of the intestines may not be observed in the endoscopic diagnosis if intestinal stenosis occurs. Endoscopes cannot pass through the stenosed parts. CT image-based diagnosis has been developed as an alternative choice for Crohn's disease. CT image-based diagnosis enables physicians to observe the entire intestines even if stenosed parts exist. CAD systems for Crohn's disease using CT volumes have recently been developed. Such CAD systems need to reconstruct separated luminal regions of the intestines to analyze the intestines. We propose a connection method of separated luminal regions of the intestines segmented from CT volumes. The luminal regions of the intestines are segmented from a CT volume. The centerlines of the luminal regions are calculated by using a thinning process. We enumerate all the possible sequences of the centerline segments. In this work, we newly introduce a condition on the distance between connected end points of the centerline segments. This condition eliminates unnatural connections of the centerline segments. Also, this condition reduces processing time. After generating a sequence list of the centerline segments, the correct sequence is obtained by using an evaluation function. We connect the luminal regions based on the correct sequence. Our experiments using four CT volumes showed that our method connected 6.5 out of 8.0 centerline segments per case. Processing times of the proposed method were reduced compared with the previous method.

  6. Evaluation of knowledge-based reconstruction for magnetic resonance volumetry of the right ventricle after arterial switch operation for dextro-transposition of the great arteries.

    PubMed

    Nyns, Emile C A; Dragulescu, Andreea; Yoo, Shi-Joon; Grosse-Wortmann, Lars

    2016-09-01

    Right ventricular (RV) volume and function evaluation is essential in the follow-up of patients after arterial switch operation (ASO) for dextro-transposition of the great arteries (d-TGA). Cardiac magnetic resonance (CMR) imaging using the Simpson's method is the gold-standard for measuring these parameters. However, this method can be challenging and time-consuming, especially in congenital heart disease. Knowledge-based reconstruction (KBR) is an alternative method to derive volumes from CMR datasets. It is based on the identification of a finite number of anatomical RV landmarks in various planes, followed by computer-based reconstruction of the endocardial contours by matching these landmarks with a reference library of representative RV shapes. The purpose of this study was to evaluate the feasibility, accuracy, reproducibility and labor intensity of KBR for RV volumetry in patients after ASO for d-TGA. The CMR datasets of 17 children and adolescents (males 11, median age 15) were studied for RV volumetry using both KBR and Simpson's method. The intraobserver, interobserver and intermethod variabilities were assessed using Bland-Altman analyses. Good correlation between KBR and Simpson's method was noted. Intraobserver and interobserver variability for KBR showed excellent agreement. Volume and function assessment using KBR was faster when compared with the Simpson's method (5.1 ± 0.6 vs. 6.7 ± 0.9 min, p < 0.001). KBR is a feasible, accurate, reproducible and fast method for measuring RV volumes and function derived from CMR in patients after ASO for d-TGA.

  7. Creating compact and microscale features in paper-based devices by laser cutting.

    PubMed

    Mahmud, Md Almostasim; Blondeel, Eric J M; Kaddoura, Moufeed; MacDonald, Brendan D

    2016-11-14

    In this work we describe a fabrication method to create compact and microscale features in paper-based microfluidic devices using a CO2 laser cutting/engraving machine. Using this method we are able to produce the smallest features with the narrowest barriers yet reported for paper-based microfluidic devices. The method uses foil-backed paper as the base material and yields inexpensive paper-based devices capable of using small fluid sample volumes and thus small reagent volumes, which is also suitable for mass production. The laser parameters (power and laser head speed) were adjusted to minimize the width of hydrophobic barriers, and we were able to create barriers with a width of 39 ± 15 μm that were capable of preventing cross-barrier bleeding. We generated channels with a width of 128 ± 30 μm, which we found to be the physical limit for small features in the chromatography paper we used. We demonstrate how miniaturization of paper-based microfluidic devices enables eight tests on a single bioassay device using only 2 μL of sample fluid volume.

  8. Network Aggregation in Transportation Planning : Volume II : A Fixed Point Method for Treating Traffic Equilibria

    DOT National Transportation Integrated Search

    1978-04-01

    Volume 2 defines a new algorithm for the network equilibrium model that works in the space of path flows and is based on the theory of fixed point method. The goals of the study were broadly defined as the identification of aggregation practices and ...

  9. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform by using multimedia instructions, such as SIMD instructions, which are currently available in PCs' CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot-products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second.
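
    As a sketch of fundamental process (a), interpolation of voxel values at ray sample points, the vectorized NumPy routine below performs trilinear interpolation for an array of sample positions; in the paper this step is hand-optimized with SIMD multimedia instructions rather than written in Python.

      import numpy as np

      def trilinear(volume, points):
          """Trilinear interpolation of a 3D volume at floating-point sample points.
          volume: (Z, Y, X) array; points: (N, 3) array of (z, y, x) coordinates."""
          p = np.clip(points, 0, np.array(volume.shape) - 1.001)
          i0 = np.floor(p).astype(int)
          f = p - i0                                  # fractional offsets in [0, 1)
          i1 = i0 + 1
          z0, y0, x0 = i0.T
          z1, y1, x1 = i1.T
          fz, fy, fx = f.T
          c00 = volume[z0, y0, x0] * (1 - fx) + volume[z0, y0, x1] * fx
          c01 = volume[z0, y1, x0] * (1 - fx) + volume[z0, y1, x1] * fx
          c10 = volume[z1, y0, x0] * (1 - fx) + volume[z1, y0, x1] * fx
          c11 = volume[z1, y1, x0] * (1 - fx) + volume[z1, y1, x1] * fx
          c0 = c00 * (1 - fy) + c01 * fy
          c1 = c10 * (1 - fy) + c11 * fy
          return c0 * (1 - fz) + c1 * fz

      vol = np.arange(27, dtype=float).reshape(3, 3, 3)
      print(trilinear(vol, np.array([[1.5, 1.5, 1.5]])))   # exact for a linear ramp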

  10. Automated posterior cranial fossa volumetry by MRI: applications to Chiari malformation type I.

    PubMed

    Bagci, A M; Lee, S H; Nagornaya, N; Green, B A; Alperin, N

    2013-09-01

    Quantification of PCF volume and the degree of PCF crowdedness were found beneficial for differential diagnosis of tonsillar herniation and prediction of surgical outcome in CMI. However, lack of automated methods limits the clinical use of PCF volumetry. An atlas-based method for automated PCF segmentation tailored for CMI is presented. The method performance is assessed in terms of accuracy and spatial overlap with manual segmentation. The degree of association between PCF volumes and the lengths of previously proposed linear landmarks is reported. T1-weighted volumetric MR imaging data with 1-mm isotropic resolution obtained with the use of a 3T scanner from 14 patients with CMI and 3 healthy subjects were used for the study. Manually delineated PCF from 9 patients was used to establish a CMI-specific reference for an atlas-based automated PCF parcellation approach. Agreement between manual and automated segmentation of 5 different CMI datasets was verified by means of the t test. Measurement reproducibility was established through the use of 2 repeated scans from 3 healthy subjects. Degree of linear association between PCF volume and 6 linear landmarks was determined by means of Pearson correlation. PCF volumes measured by use of the automated method and with manual delineation were similar, 196.2 ± 8.7 mL versus 196.9 ± 11.0 mL, respectively. The mean relative difference of -0.3 ± 1.9% was not statistically significant. Low measurement variability, with a mean absolute percentage value of 0.6 ± 0.2%, was achieved. None of the PCF linear landmarks were significantly associated with PCF volume. PCF and tissue content volumes can be reliably measured in patients with CMI by use of an atlas-based automated segmentation method.

  11. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2017-02-01

    Medical image registration establishes a correspondence between images of biological structures and is at the core of many applications. Commonly used deformable image registration methods are dependent on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is, however, important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and used to compute a smoothing thin-plate spline transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.
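
    The landmark-to-transformation step can be sketched with SciPy's RBFInterpolator, which supports a thin-plate-spline kernel with smoothing; this is an assumed, illustrative stand-in for the smoothing thin-plate spline fit described, and the landmark values below are random placeholders.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def tps_transform(atlas_landmarks, subject_landmarks, smoothing=1.0):
          """Return a callable mapping atlas-space points to subject space using a
          smoothing thin-plate-spline fit to corresponding 3D landmarks."""
          return RBFInterpolator(atlas_landmarks, subject_landmarks,
                                 kernel='thin_plate_spline', smoothing=smoothing)

      # Illustrative landmarks (mm); in practice these come from the learned detector.
      atlas = np.random.rand(10, 3) * 100
      subject = atlas + np.random.randn(10, 3) * 2.0
      warp = tps_transform(atlas, subject)
      print(warp(atlas[:3]))                         # warped positions of 3 points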

  12. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation to achieve real time adaptive radiotherapy.

  13. [Research of bleeding volume and method in blood-letting acupuncture therapy based on data mining].

    PubMed

    Liu, Xin; Jia, Chun-Sheng; Wang, Jian-Ling; Du, Yu-Zhu; Zhang, Xiao-Xu; Shi, Jing; Li, Xiao-Feng; Sun, Yan-Hui; Zhang, Shen; Zhang, Xuan-Ping; Gang, Wei-Juan

    2014-03-01

    Using computer technology and data mining, association rule analysis was applied to cases of blood-letting acupuncture therapy collected from the literature as sample data. Based on a self-built database platform, the data were entered, organized, and summarized, and the required data were extracted to mine the bleeding volume and method used in blood-letting acupuncture therapy, in order to summarize its application rules and clinical value and better guide clinical practice. Nine kinds of blood-letting tools appeared in the literature, of which the three-edge needle had the highest frequency, accounting for 84.4% (1239/1468). Bleeding volume was classified into six levels, of which a small volume (less than 0.1 mL) had the highest frequency (401 times). According to the data mining results, blood-letting acupuncture therapy is widely applied in clinical acupuncture practice; use of the three-edge needle and small bleeding volumes (less than 0.1 mL) are the most common, although no overall central tendency was found.

  14. Prediction of subacute infarct size in acute middle cerebral artery stroke: comparison of perfusion-weighted imaging and apparent diffusion coefficient maps.

    PubMed

    Drier, Aurélie; Tourdias, Thomas; Attal, Yohan; Sibon, Igor; Mutlu, Gurkan; Lehéricy, Stéphane; Samson, Yves; Chiras, Jacques; Dormont, Didier; Orgogozo, Jean-Marc; Dousset, Vincent; Rosso, Charlotte

    2012-11-01

    To compare perfusion-weighted (PW) imaging and apparent diffusion coefficient (ADC) maps in prediction of infarct size and growth in patients with acute middle cerebral artery infarct. This study was approved by the local institutional review board. Written informed consent was obtained from all 80 patients. Subsequent infarct volume and growth on follow-up magnetic resonance (MR) images obtained within 6 days were compared with the predictions based on PW images by using a time-to-peak threshold greater than 4 seconds and ADC maps obtained less than 12 hours after middle cerebral artery infarct. ADC- and PW imaging-predicted infarct growth areas and infarct volumes were correlated with subsequent infarct growth and follow-up diffusion-weighted (DW) imaging volumes. The impact of MR imaging time delay on the correlation coefficient between the predicted and subsequent infarct volumes and individual predictions of infarct growth by using receiver operating characteristic curves were assessed. The infarct volume measurements were highly reproducible (concordance correlation coefficient [CCC] of 0.965 and 95% confidence interval [CI]: 0.949, 0.976 for acute DW imaging; CCC of 0.995 and 95% CI: 0.993, 0.997 for subacute DW imaging). The subsequent infarct volume correlated (P<.0001) with ADC- (ρ=0.853) and PW imaging- (ρ=0.669) predicted volumes. The correlation was higher for ADC-predicted volume than for PW imaging-predicted volume (P<.005), but not when the analysis was restricted to patients without recanalization (P=.07). The infarct growth correlated (P<.0001) with PW imaging-DW imaging mismatch (ρ=0.470) and ADC-DW imaging mismatch (ρ=0.438), without significant differences between both methods (P=.71). The correlations were similar among time delays with ADC-predicted volumes but decreased with PW imaging-based volumes beyond the therapeutic window. Accuracies of ADC- and PW imaging-based predictions of infarct growth in an individual prediction were similar (area under the receiver operating characteristic curve [AUC] of 0.698 and 95% CI: 0.585, 0.796 vs AUC of 0.749 and 95% CI: 0.640, 0.839; P=.48). The ADC-based method was as accurate as the PW imaging-based method for evaluating infarct growth and size in the subacute phase. © RSNA, 2012

  15. Lunar Architecture Team - Phase 2 Habitat Volume Estimation: "Caution When Using Analogs"

    NASA Technical Reports Server (NTRS)

    Rudisill, Marianne; Howard, Robert; Griffin, Brand; Green, Jennifer; Toups, Larry; Kennedy, Kriss

    2008-01-01

    The lunar surface habitat will serve as the astronauts' home on the moon, providing a pressurized facility for all crew living functions and serving as the primary location for a number of crew work functions. Adequate volume is required for each of these functions in addition to that devoted to housing the habitat systems and crew consumables. The time constraints of the LAT-2 schedule precluded the Habitation Team from conducting a complete "bottoms-up" design of a lunar surface habitation system from which to derive true volumetric requirements. The objective of this analysis was to quickly derive an estimated total pressurized volume and pressurized net habitable volume per crewmember for a lunar surface habitat, using a principled, methodical approach in the absence of a detailed design. Five "heuristic methods" were used: historical spacecraft volumes, human/spacecraft integration standards and design guidance, Earth-based analogs, parametric "sizing" tools, and conceptual point designs. Estimates for total pressurized volume, total habitable volume, and volume per crewmember were derived using these methods. All methods were found to provide some basis for volume estimates, but values were highly variable across a wide range, with no obvious convergence of values. Best current assumptions for required crew volume were provided as a range. Results of these analyses and future work are discussed.

  16. 3D imaging of cement-based materials at submicron resolution by combining laser scanning confocal microscopy with serial sectioning.

    PubMed

    Yio, M H N; Mac, M J; Wong, H S; Buenfeld, N R

    2015-05-01

    In this paper, we present a new method to reconstruct large volumes of nontransparent porous materials at submicron resolution. The proposed method combines fluorescence laser scanning confocal microscopy with serial sectioning to produce a series of overlapping confocal z-stacks, which are then aligned and stitched based on phase correlation. The method can be extended in the XY plane to further increase the overall image volume. Resolution of the reconstructed image volume does not degrade with increase in sample size. We have used the method to image cementitious materials (hardened cement paste and concrete), and the results obtained show that the method is reliable. Possible applications of the method, such as three-dimensional characterization of the pores and microcracks in hardened concrete, three-dimensional particle shape characterization of cementitious materials and three-dimensional characterization of other porous materials such as rocks and bioceramics, are discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
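
    The alignment step can be illustrated with a basic phase-correlation estimate of the integer 3D offset between two overlapping z-stacks; this is a minimal sketch, and the published pipeline may additionally handle subpixel shifts and overlap weighting.

      import numpy as np

      def phase_correlation_shift(stack_a, stack_b, eps=1e-9):
          """Integer (z, y, x) shift that best aligns stack_b to stack_a, estimated
          from the peak of the inverse FFT of the normalized cross-power spectrum."""
          fa = np.fft.fftn(stack_a)
          fb = np.fft.fftn(stack_b)
          cross_power = fa * np.conj(fb)
          cross_power /= np.abs(cross_power) + eps
          corr = np.fft.ifftn(cross_power).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Map peaks in the upper half of each axis to negative shifts.
          shape = np.array(corr.shape)
          shift = np.array(peak)
          shift[shift > shape // 2] -= shape[shift > shape // 2]
          return tuple(shift)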

  17. Evaluation of Fractional Regional Ventilation Using 4D-CT and Effects of Breathing Maneuvers on Ventilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mistry, Nilesh N., E-mail: nmistry@som.umaryland.edu; Diwanji, Tejan; Shi, Xiutao

    2013-11-15

    Purpose: Current implementations of methods based on Hounsfield units to evaluate regional lung ventilation do not directly incorporate tissue-based mass changes that occur over the respiratory cycle. To overcome this, we developed a 4-dimensional computed tomography (4D-CT)-based technique to evaluate fractional regional ventilation (FRV) that uses an individualized ratio of tidal volume to end-expiratory lung volume for each voxel. We further evaluated the effect of different breathing maneuvers on regional ventilation. The results from this work will help elucidate the relationship between global and regional lung function. Methods and Materials: Eight patients underwent 3 sets of 4D-CT scans during one session using free-breathing, audiovisual guidance, and active breathing control. FRV was estimated using a density-based algorithm with mass correction. Internal validation between global and regional ventilation was performed by use of the imaging data collected during the use of active breathing control. The impact of breathing maneuvers on FRV was evaluated by comparing the tidal volume from the 3 breathing methods. Results: Internal validation through comparison between the global and regional changes in ventilation revealed a strong linear correlation (slope of 1.01, R² of 0.97) between the measured global lung volume and the regional lung volume calculated by use of the "mass corrected" FRV. A linear relationship was established between the tidal volume measured with the automated breathing control system and FRV based on 4D-CT imaging. Consistently larger breathing volumes were observed when coached breathing techniques were used. Conclusions: The technique presented improves density-based evaluation of lung ventilation and establishes a link between global and regional lung ventilation volumes. Furthermore, the results obtained are comparable with those of other techniques of functional evaluation such as spirometry and hyperpolarized-gas magnetic resonance imaging. These results were demonstrated on retrospective analysis of patient data, and further research using prospective data is under way to validate this technique against established clinical tests.
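
    For context, a commonly used density-based estimate of regional air-volume change from paired Hounsfield units is sketched below. This is the standard HU formulation offered only as an assumed illustration; the study's FRV adds a voxel-wise mass correction and normalizes by end-expiratory lung volume, which is not reproduced here.

      import numpy as np

      def regional_ventilation(hu_in, hu_ex):
          """Voxel-wise fractional change in air content between registered
          inspiration (hu_in) and expiration (hu_ex) CT volumes, using the
          standard density-based relation (air = -1000 HU, tissue = 0 HU)."""
          hu_in = np.asarray(hu_in, float)
          hu_ex = np.asarray(hu_ex, float)
          return 1000.0 * (hu_in - hu_ex) / (hu_ex * (1000.0 + hu_in))

      print(regional_ventilation(-880.0, -820.0))   # a single illustrative voxel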

  18. Increasing the Accuracy of Volume and ADC Delineation for Heterogeneous Tumor on Diffusion-Weighted MRI: Correlation with PET/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Nan-Jie; Wong, Chun-Sing, E-mail: drcswong@gmail.com; Chu, Yiu-Ching

    2013-10-01

    Purpose: To improve the accuracy of volume and apparent diffusion coefficient (ADC) measurements in diffusion-weighted magnetic resonance imaging (MRI), we proposed a method based on thresholding both the b0 images and the ADC maps. Methods and Materials: In 21 heterogeneous lesions from patients with metastatic gastrointestinal stromal tumors (GIST), gross lesions were manually contoured, and corresponding volumes and ADCs were denoted as gross tumor volume (GTV) and gross ADC (ADC_g), respectively. Using a k-means clustering algorithm, the probable high-cellularity tumor tissues were selected based on b0 images and ADC maps. ADC and volume of the tissues selected using the proposed method were denoted as thresholded ADC (ADC_thr) and high-cellularity tumor volume (HCTV), respectively. The metabolic tumor volume (MTV) in positron emission tomography (PET)/computed tomography (CT) was measured using 40% of the maximum standard uptake value (SUV_max) as the lower threshold, and the corresponding mean SUV (SUV_mean) was also measured. Results: HCTV had excellent concordance with MTV according to Pearson's correlation (r=0.984, P<.001) and linear regression (slope = 1.085, intercept = −4.731). In contrast, GTV overestimated the volume and differed significantly from MTV (P=.005). ADC_thr correlated significantly and strongly with SUV_mean (r=−0.807, P<.001) and SUV_max (r=−0.843, P<.001); both correlations were stronger than those of ADC_g. Conclusions: The proposed lesion-adaptive semiautomatic method can help segment high-cellularity tissues that match hypermetabolic tissues in PET/CT and enables more accurate volume and ADC delineation on diffusion-weighted MR images of GIST.

  19. An Adaptive MR-CT Registration Method for MRI-guided Prostate Cancer Radiotherapy

    PubMed Central

    Zhong, Hualiang; Wen, Ning; Gordon, James; Elshaikh, Mohamed A; Movsas, Benjamin; Chetty, Indrin J.

    2015-01-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for development of high-quality MRI-guided radiation therapy. PMID:25775937

  20. An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.

    2015-04-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for the development of high-quality MRI-guided radiation therapy.

  1. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
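
    A toy one-dimensional analog of the constrained formulation can be written with SciPy's SLSQP solver: move mesh nodes as little as possible from a source mesh while the resulting cell volumes exactly match prescribed values. This is a hedged illustration only; the paper's SQP solver, convexity constraints, and multigrid preconditioner are far more specialized.

      import numpy as np
      from scipy.optimize import minimize

      # Source 1D mesh nodes and the prescribed cell volumes (lengths) to match.
      x_src = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      target_vols = np.array([0.8, 1.1, 1.2, 0.9])           # sums to the domain length

      def objective(x):
          return np.sum((x - x_src) ** 2)                     # distance to the source mesh

      cons = [
          {"type": "eq", "fun": lambda x: np.diff(x) - target_vols},     # cell volumes
          {"type": "eq", "fun": lambda x: x[[0, -1]] - x_src[[0, -1]]},  # fixed boundaries
      ]
      res = minimize(objective, x_src, method="SLSQP", constraints=cons)
      print(res.x, np.diff(res.x))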

  2. Carbon Emissions from Residue Burn Piles Estimated Using LiDAR or Ground Based Measurements of Pile Volumes in a Coastal Douglas-Fir Forest

    NASA Astrophysics Data System (ADS)

    Trofymow, J. A.; Coops, N.; Hayhurst, D.

    2012-12-01

    Following forest harvest, residues left on site and roadsides are often disposed of to reduce fire risk and free planting space. In coastal British Columbia, burn piles are the main method of disposal, particularly for accumulations from log processing. Quantification of residue wood in piles is required for: smoke emission estimates, C budget calculations, billable waste assessment, harvest efficiency monitoring, and determination of bioenergy potentials. A second-growth Douglas-fir-dominated site (DF1949) on eastern Vancouver Island, the subject of C flux and budget studies since 1998, was clearcut in winter 2011; residues were piled in spring and burned in fall. Prior to harvest, the site was divided into 4 blocks to account for harvest plans and ecosite conditions. Total harvested wood volume was scaled for each block. Residue pile wood volume was determined by a standard Waste and Residue Survey (WRS) using field estimates of pile base area and plot density (wood volume / 0.005 ha plot) on 2 piles per block, by a smoke emissions geometric method with pile volumes estimated as ellipsoidal paraboloids and packing ratios (wood volume / pile volume) for 2 piles per block, as well as by five other GIS methods using pile volumes and areas from LiDAR and orthophotography flown August 2011, a LiDAR derived digital elevation model (DEM) from 2008, and total scaled wood volumes of 8 sample piles disassembled November 2011. A weak but significant negative relationship was found between pile packing ratio and pile volume. Block level avoidable+unavoidable residue pile wood volumes from the WRS method (20.0 m3 ha-1 SE 2.8) were 30%-50% of the geometric (69.0 m3 ha-1 SE 18.0) or five GIS/LiDAR (48.0 to 65.7 m3 ha-1) methods. Block volumes using the 2008 LiDAR DEM (unshifted 48.0 m3 ha-1 SE 3.9, shifted 53.6 m3 ha-1 SE 4.2) to account for pre-existing humps or hollows beneath piles were not different from those using the 2011 LiDAR DEM (50.3 m3 ha-1 SE 4.0). The block volume ratio (total residue pile / harvest scale, wood volumes x 100) for the WRS method (3.3% SE 0.45) was lower than for the LiDAR 2011 method (8.1% SE 0.31). Using wood densities from in situ samples and LiDAR 2011 method wood volumes, total residue pile wood biomass in the blocks was 21.5 t dry mass ha-1 (SE 1.9). Post-burn charred residues were ~1.5 t dry mass ha-1, resulting in C emission estimates of 10 t C ha-1 (SE 0.91), assuming 50% C, and equivalent to 2 - 3 years of pre-harvest stand C uptake (NEP 4.8 t C ha-1 y-1 SE 0.58). Results suggest the WRS method may underestimate residue pile wood volumes, while the geometric method may overestimate depending on the packing ratio used. While remote sensing methods reduce uncertainty in estimating volumes or areas of all piles in a block, quantification of packing ratios remains a significant source of uncertainty in determining block level residue pile wood volumes. Additional studies are needed for other forest and harvest types to determine the wider applicability of these findings.
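
    For illustration, the geometric pile-volume chain reduces to simple arithmetic: pile volume as an elliptic paraboloid, multiplied by a packing ratio to get solid wood volume, then by wood density, a combusted fraction, and a carbon fraction to estimate emitted C. All numbers below are placeholders, not the study's measurements.

      import math

      def pile_carbon(width_m, length_m, height_m,
                      packing_ratio=0.2, wood_density=450.0, carbon_fraction=0.5,
                      combustion_fraction=0.9):
          """Carbon emitted (kg) from one residue pile modeled as an elliptic
          paraboloid: V = pi * a * b * h / 2 with base semi-axes a, b and height h."""
          a, b = width_m / 2.0, length_m / 2.0
          pile_volume = math.pi * a * b * height_m / 2.0       # gross pile volume, m3
          wood_volume = packing_ratio * pile_volume            # solid wood volume, m3
          dry_mass = wood_density * wood_volume                # kg (density in kg/m3)
          return carbon_fraction * combustion_fraction * dry_mass

      print(pile_carbon(width_m=6.0, length_m=8.0, height_m=2.5))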

  3. Infrasound Waveform Inversion and Mass Flux Validation from Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Yokoo, A.; Izbekov, P. E.; Lopez, T. M.; Prata, F.; Ahonen, P.; Kazahaya, R.; Nakamichi, H.; Iguchi, M.

    2015-12-01

    Recent advances in numerical wave propagation modeling and station coverage have permitted robust inversion of infrasound data from volcanic explosions. Complex topography and crater morphology have been shown to substantially affect the infrasound waveform, suggesting that homogeneous acoustic propagation assumptions are invalid. Infrasound waveform inversion provides an exciting tool to accurately characterize emission volume and mass flux from both volcanic and non-volcanic explosions. Mass flux, arguably the most sought-after parameter from a volcanic eruption, can be determined from the volume flux using infrasound waveform inversion if the volcanic flow is well-characterized. Thus far, infrasound-based volume and mass flux estimates have yet to be validated. In February 2015 we deployed six infrasound stations around the explosive Sakurajima Volcano, Japan for 8 days. Here we present our full waveform inversion method and volume and mass flux estimates of numerous high amplitude explosions using a high resolution DEM and 3-D Finite Difference Time Domain modeling. Application of this technique to volcanic eruptions may produce realistic estimates of mass flux and plume height necessary for volcanic hazard mitigation. Several ground-based instruments and methods are used to independently determine the volume, composition, and mass flux of individual volcanic explosions. Specifically, we use ground-based ash sampling, multispectral infrared imagery, UV spectrometry, and multigas data to estimate the plume composition and flux. Unique tiltmeter data from underground tunnels at Sakurajima also provides a way to estimate the volume and mass of each explosion. In this presentation we compare the volume and mass flux estimates derived from the different methods and discuss sources of error and future improvements.

  4. Automatic segmentation of airway tree based on local intensity filter and machine learning technique in 3D chest CT volume.

    PubMed

    Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku

    2017-02-01

    Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based mainly on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structure in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect the cavity-like structure with different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating our proposed method. The average extraction rate was about 79.1% with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for the lung and in a bronchoscope guidance system.
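
    The first step, tube-like structure enhancement by Hessian analysis, can be sketched with scikit-image's multiscale Frangi filter, used here as an assumed stand-in for the authors' filter combination; the cavity enhancement filter, SVM false-positive removal, and graph-cut refinement are not shown, and the threshold and scales are illustrative.

      import numpy as np
      from skimage.filters import frangi

      def enhance_airways(ct_volume, sigmas=(1, 2, 3), threshold=0.05):
          """Multiscale Hessian-based tubularity response for a CT volume (airway
          lumina are dark, hence black_ridges=True), followed by a simple threshold
          to produce candidate airway voxels."""
          response = frangi(ct_volume.astype(float), sigmas=sigmas, black_ridges=True)
          return response, response > threshold

      # Illustrative use on a random volume standing in for a chest CT.
      vol = np.random.rand(64, 64, 64) * 1000 - 1000
      response, candidates = enhance_airways(vol)
      print(candidates.sum(), "candidate voxels")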

  5. Improved CT-based estimate of pulmonary gas trapping accounting for scanner and lung-volume variations in a multicenter asthmatic study.

    PubMed

    Choi, Sanghun; Hoffman, Eric A; Wenzel, Sally E; Castro, Mario; Lin, Ching-Long

    2014-09-15

    Lung air trapping is estimated via quantitative computed tomography (CT) using density threshold-based measures on an expiration scan. However, the effects of scanner differences and imaging protocol adherence on quantitative assessment are known to be problematic. This study investigates the effects of protocol differences, such as using different CT scanners and breath-hold coaches in a multicenter asthmatic study, and proposes new methods that can adjust for intersite and intersubject variations. CT images of 50 healthy subjects and 42 nonsevere and 52 severe asthmatics at total lung capacity (TLC) and functional residual capacity (FRC) were acquired using three different scanners and two different coaching methods at three institutions. A fraction threshold-based approach based on the Hounsfield unit of air corrected with tracheal density was applied to quantify air trapping at FRC. The new air-trapping method was enhanced by adding a lung-shape metric at TLC and the lobar ratio of air-volume change between TLC and FRC. The fraction-based air-trapping method is able to collapse the air-trapping data of the respective populations onto distinct regression lines. Relative to a constant value-based clustering scheme, the slope-based clustering scheme shows improved performance and a reduced misclassification rate for healthy subjects. Furthermore, both lung shape and air-volume change are found to be discriminant variables for differentiating among the three populations of healthy subjects and nonsevere and severe asthmatics. In conjunction with lung shape and air-volume change, the fraction-based measure of air trapping enables differentiation of severe asthmatics from nonsevere asthmatics and nonsevere asthmatics from healthy subjects, which is critical for the development and evaluation of new therapeutic interventions. Copyright © 2014 the American Physiological Society.
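
    As a rough illustration of a fraction-based measure of this kind, the sketch below converts voxel Hounsfield units to an air fraction using a tracheal air value as the corrected air reference and reports the share of voxels above an assumed cut-off; the tissue reference and the cut-off are placeholder values, not the study's calibrated ones.

```python
# Illustrative sketch of a fraction-based air-trapping measure on a
# tracheal-air-corrected Hounsfield scale.  The 0 HU tissue reference and the
# 0.9 air-fraction cut-off are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(1)

hu_frc = rng.normal(-820, 60, size=100_000)   # toy FRC lung-voxel HU values
hu_trachea = -985.0                           # measured tracheal air density
hu_tissue = 0.0                               # assumed soft-tissue reference

# Per-voxel air fraction on the scanner-corrected scale.
air_fraction = (hu_tissue - hu_frc) / (hu_tissue - hu_trachea)
air_fraction = np.clip(air_fraction, 0.0, 1.0)

# Air trapping: share of lung voxels whose air content at FRC stays above an
# (assumed) cut-off, i.e. regions that fail to empty on expiration.
air_trapping_pct = 100.0 * np.mean(air_fraction > 0.9)
print(f"air-trapped lung fraction: {air_trapping_pct:.1f}%")
```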

  6. Improved CT-based estimate of pulmonary gas trapping accounting for scanner and lung-volume variations in a multicenter asthmatic study

    PubMed Central

    Choi, Sanghun; Hoffman, Eric A.; Wenzel, Sally E.; Castro, Mario

    2014-01-01

    Lung air trapping is estimated via quantitative computed tomography (CT) using density threshold-based measures on an expiration scan. However, the effects of scanner differences and imaging protocol adherence on quantitative assessment are known to be problematic. This study investigates the effects of protocol differences, such as using different CT scanners and breath-hold coaches in a multicenter asthmatic study, and proposes new methods that can adjust for intersite and intersubject variations. CT images of 50 healthy subjects and 42 nonsevere and 52 severe asthmatics at total lung capacity (TLC) and functional residual capacity (FRC) were acquired using three different scanners and two different coaching methods at three institutions. A fraction threshold-based approach based on the Hounsfield unit of air corrected with tracheal density was applied to quantify air trapping at FRC. The new air-trapping method was enhanced by adding a lung-shape metric at TLC and the lobar ratio of air-volume change between TLC and FRC. The fraction-based air-trapping method is able to collapse the air-trapping data of the respective populations onto distinct regression lines. Relative to a constant value-based clustering scheme, the slope-based clustering scheme shows improved performance and a reduced misclassification rate for healthy subjects. Furthermore, both lung shape and air-volume change are found to be discriminant variables for differentiating among the three populations of healthy subjects and nonsevere and severe asthmatics. In conjunction with lung shape and air-volume change, the fraction-based measure of air trapping enables differentiation of severe asthmatics from nonsevere asthmatics and nonsevere asthmatics from healthy subjects, which is critical for the development and evaluation of new therapeutic interventions. PMID:25103972

  7. Control-Volume Analysis Of Thrust-Augmenting Ejectors

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1990-01-01

    A new method for analyzing transient flow in thrust-augmenting ejectors, based on a control-volume formulation of the governing equations. Such ejectors are considered potential elements of propulsion subsystems for short-takeoff/vertical-landing airplanes.

  8. Correction of partial volume effect in (18)F-FDG PET brain studies using coregistered MR volumes: voxel based analysis of tracer uptake in the white matter.

    PubMed

    Coello, Christopher; Willoch, Frode; Selnes, Per; Gjerstad, Leif; Fladby, Tormod; Skretting, Arne

    2013-05-15

    A voxel-based algorithm to correct for the partial volume effect in PET brain volumes is presented. This method (named LoReAn) is based on MRI-based segmentation of anatomical regions and accurate measurement of the effective point spread function of the PET imaging process. The objective is to correct for the spill-out of activity from high-uptake anatomical structures (e.g. grey matter) into low-uptake anatomical structures (e.g. white matter) in order to quantify physiological uptake in the white matter. The new algorithm is presented and validated against the state-of-the-art region-based geometric transfer matrix (GTM) method with synthetic and clinical data. Using synthetic data, both bias and coefficient of variation in the white matter region were improved with LoReAn compared to GTM. Increasing the number of anatomical regions does not affect the bias (<5%), and misregistration affects the LoReAn and GTM algorithms equally. The LoReAn algorithm appears to be a simple and promising voxel-based algorithm for studying metabolism in white matter regions. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
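
    The paper's own label fusion rule is not reproduced here, but the baseline it improves upon can be sketched as per-voxel majority voting over registered atlas label maps, which by construction assigns exactly one label per voxel and therefore avoids overlaps; the toy labels below are illustrative.

```python
# Minimal baseline sketch of multi-atlas label fusion by per-voxel majority
# voting.  Every voxel receives exactly one label, so organ masks cannot
# overlap; the paper's own fusion rule is more elaborate and not reproduced.
import numpy as np

rng = np.random.default_rng(2)
n_atlases, shape = 5, (32, 32, 32)

# Toy registered atlas segmentations: labels 0 (background), 1, 2 (organs).
atlas_labels = rng.integers(0, 3, size=(n_atlases, *shape))

# Count votes per label and take the arg-max at every voxel.
votes = np.stack([(atlas_labels == lab).sum(axis=0) for lab in range(3)])
fused = votes.argmax(axis=0)

print("fused label counts:", np.bincount(fused.ravel()))
```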

  10. Accurate airway segmentation based on intensity structure analysis and graph-cut

    NASA Astrophysics Data System (ADS)

    Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku

    2016-03-01

    This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based mainly on region growing and machine learning techniques; however, these methods fail to detect the peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in CT volumes, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier that removes the false-positive (FP) regions generated in the first step. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen cases of 3D chest CT volumes. The results showed that the branch detection rate of this method can reach about 77.7% without leaking into the lung parenchyma areas.

  11. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional (4D) image is 3D volume data that varies with time; it is used to represent deforming or moving objects in applications such as virtual surgery and 4D ultrasound. It is difficult to render a 4D image with conventional ray-casting or shear-warp factorization methods because of their long rendering times and the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeatedly loading volumes is time-consuming in 4D image rendering. In this study, we propose a method that reduces data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
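
    One plausible reading of the brick-coherence test is sketched below with NumPy: a brick of the incoming volume is re-defined as a 3D texture only when it differs noticeably from the brick already resident in memory. The mean-absolute-difference measure, brick size, and threshold are assumptions, and the actual OpenGL texture calls are omitted.

```python
# One plausible reading of the brick-coherence test: a brick is re-defined as
# a 3D texture only when it differs noticeably from the brick already in
# memory.  Brick size, the mean-absolute-difference measure and its threshold
# are assumptions; the OpenGL texture upload itself is omitted.
import numpy as np

def changed_bricks(prev_vol, new_vol, brick=16, thresh=2.0):
    """Yield origins of bricks whose contents changed enough to need re-upload."""
    nz, ny, nx = new_vol.shape
    for z in range(0, nz, brick):
        for y in range(0, ny, brick):
            for x in range(0, nx, brick):
                sl = (slice(z, z + brick), slice(y, y + brick), slice(x, x + brick))
                if np.abs(new_vol[sl] - prev_vol[sl]).mean() > thresh:
                    yield (z, y, x)

rng = np.random.default_rng(3)
prev = rng.random((64, 64, 64)) * 255.0
new = prev.copy()
new[:16, :16, :16] += 30.0                 # only one corner of the volume deforms
print("bricks to re-upload:", list(changed_bricks(prev, new)))
```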

  12. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
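
    A toy version of the idea is easy to write down: normalize the search volumes of the impact-category terms into weighting factors and compare them with an existing weighting set via Pearson's correlation. All numbers below are invented for illustration.

```python
# Toy illustration: weighting factors proportional to Internet search volumes
# for each impact category, compared with an existing weighting set using
# Pearson's correlation.  All values are invented for demonstration only.
import numpy as np
from scipy.stats import pearsonr

categories = ["global warming", "acidification", "eutrophication",
              "ozone depletion", "photochemical smog", "abiotic depletion"]
search_volume = np.array([90500, 12100, 8100, 14800, 6600, 4400], float)
existing_weights = np.array([0.38, 0.09, 0.07, 0.11, 0.05, 0.04])

new_weights = search_volume / search_volume.sum()   # normalize to sum to 1
r, p = pearsonr(new_weights, existing_weights)
print("new weights:", np.round(new_weights, 3))
print(f"Pearson r = {r:.4f} (p = {p:.4f})")
```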

  13. Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT images

    NASA Astrophysics Data System (ADS)

    Oda, Hirohisa; Roth, Holger R.; Bhatia, Kanwal K.; Oda, Masahiro; Kitasaka, Takayuki; Iwano, Shingo; Homma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Schnabel, Julia A.; Mori, Kensaku

    2018-02-01

    We propose a novel mediastinal lymph node detection and segmentation method from chest CT volumes based on fully convolutional networks (FCNs). Most lymph node detection methods are based on filters for blob-like structures, which are not specific to lymph nodes. The 3D U-Net is a recent example of the state-of-the-art 3D FCNs. The 3D U-Net can be trained to learn the appearance of lymph nodes in order to output lymph node likelihood maps for input CT volumes. However, it is prone to oversegmentation of each lymph node due to the strong data imbalance between lymph nodes and the remaining part of the CT volumes. To moderate the balance of sizes between the target classes, we train the 3D U-Net using not only lymph node annotations but also other anatomical structures (lungs, airways, aortic arches, and pulmonary arteries) that can be extracted robustly in an automated fashion. We applied the proposed method to 45 cases of contrast-enhanced chest CT volumes. Experimental results showed that 95.5% of lymph nodes were detected with 16.3 false positives per CT volume. The segmentation results showed that the proposed method can prevent oversegmentation, achieving an average Dice score of 52.3 +/- 23.1%, compared with 49.2 +/- 23.8% for the baseline method.

  14. Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    NASA Astrophysics Data System (ADS)

    David, S.; Visvikis, D.; Roux, C.; Hatt, M.

    2011-09-01

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion; on clinical datasets, it failed to provide coherent measurements for four of the seven patients because of aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on the biological tumor volume definition for radiotherapy applications.

  15. From picture to porosity of river bed material using Structure-from-Motion with Multi-View-Stereo

    NASA Astrophysics Data System (ADS)

    Seitz, Lydia; Haas, Christian; Noack, Markus; Wieprecht, Silke

    2018-04-01

    Common methods for in-situ determination of porosity of river bed material are time- and effort-consuming. Although mathematical predictors can be used for estimation, they do not adequately represent porosities. The objective of this study was to assess a new approach for the determination of porosity of frozen sediment samples. The method is based on volume determination by applying Structure-from-Motion with Multi View Stereo (SfM-MVS) to estimate a 3D volumetric model based on overlapping imagery. The method was applied on artificial sediment mixtures as well as field samples. In addition, the commonly used water replacement method was applied to determine porosities in comparison with the SfM-MVS method. We examined a range of porosities from 0.16 to 0.46 that are representative of the wide range of porosities found in rivers. SfM-MVS performed well in determining volumes of the sediment samples. A very good correlation (r = 0.998, p < 0.0001) was observed between the SfM-MVS and the water replacement method. Results further show that the water replacement method underestimated total sample volumes. A comparison with several mathematical predictors showed that for non-uniform samples the calculated porosity based on the standard deviation performed better than porosities based on the median grain size. None of the predictors were effective at estimating the porosity of the field samples.
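
    Once the total sample volume is known from SfM-MVS, one common way to derive porosity is from the grain volume implied by the dry sediment mass and an assumed grain density; the sketch below uses this relation with placeholder numbers and is not necessarily the exact computation used in the study.

```python
# Porosity from a measured total sample volume (e.g. reconstructed with
# SfM-MVS) and the grain volume implied by dry mass and grain density.
# Values are placeholders; a quartz grain density of 2.65 g/cm^3 is a common
# assumption.
total_volume_cm3 = 1850.0       # sample volume from the SfM-MVS model
dry_mass_g = 3200.0             # oven-dry sediment mass
grain_density_g_cm3 = 2.65      # assumed grain density

grain_volume_cm3 = dry_mass_g / grain_density_g_cm3
porosity = 1.0 - grain_volume_cm3 / total_volume_cm3
print(f"porosity = {porosity:.2f}")
```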

  16. Left ventricular volume estimation in cardiac three-dimensional ultrasound: a semiautomatic border detection approach.

    PubMed

    van Stralen, Marijn; Bosch, Johan G; Voormolen, Marco M; van Burken, Gerard; Krenning, Boudewijn J; van Geuns, Robert-Jan M; Lancée, Charles T; de Jong, Nico; Reiber, Johan H C

    2005-10-01

    We propose a semiautomatic endocardial border detection method for three-dimensional (3D) time series of cardiac ultrasound (US) data, based on pattern matching and dynamic programming and operating on two-dimensional (2D) slices of the 3D plus time data, for the estimation of full-cycle left ventricular volume with minimal user interaction. The presented method is generally applicable to 3D US data and is evaluated on data acquired with the Fast Rotating Ultrasound (FRU-) Transducer, developed by Erasmus Medical Center (Rotterdam, the Netherlands), a conventional phased-array transducer rotating at very high speed around its image axis. The detection is based on endocardial edge pattern matching using dynamic programming, which is constrained by a 3D plus time shape model. It is applied to an automatically selected subset of 2D images of the original data set, typically 10 equidistant rotation angles and 16 cardiac phases (160 images). Initialization requires manually drawing four contours per patient. We evaluated this method on 14 patients against MRI end-diastolic (ED) and end-systolic (ES) volumes. The semiautomatic border detection approach shows good correlations with MRI ED/ES volumes (r = 0.938) and low interobserver variability (y = 1.005x - 16.7, r = 0.943) over full-cycle volume estimations. It shows high consistency in tracking the user-defined initial borders over space and time. We show that the ease of acquisition using the FRU-transducer and the semiautomatic endocardial border detection method together provide a way to quickly estimate the left ventricular volume over the full cardiac cycle with little user interaction.
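
    The pattern-matching costs and the 3D plus time shape constraint are specific to the paper, but the dynamic-programming core can be illustrated in 2D: the border is recovered as the minimum-cost path that crosses a cost image column by column, moving at most one row per step. The cost image and smoothness rule below are illustrative assumptions.

```python
# Simplified 2D illustration of border detection by dynamic programming: the
# border is the minimum-cost left-to-right path through a cost image, moving
# at most one row per column.  Cost here is just negative edge strength; the
# paper's pattern-matching costs and shape constraints are not reproduced.
import numpy as np

def dp_border(cost):
    """Return one row index per column forming the minimal-cost path."""
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 2, rows)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

rng = np.random.default_rng(4)
edge_strength = rng.random((40, 60))
edge_strength[20, :] += 5.0            # a strong horizontal edge at row 20
print(dp_border(-edge_strength)[:10])  # the detected border should hug row 20
```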

  17. Extending unbiased stereology of brain ultrastructure to three-dimensional volumes

    NASA Technical Reports Server (NTRS)

    Fiala, J. C.; Harris, K. M.; Koslow, S. H. (Principal Investigator)

    2001-01-01

    OBJECTIVE: Analysis of brain ultrastructure is needed to reveal how neurons communicate with one another via synapses and how disease processes alter this communication. In the past, such analyses have usually been based on single or paired sections obtained by electron microscopy. Reconstruction from multiple serial sections provides a much needed, richer representation of the three-dimensional organization of the brain. This paper introduces a new reconstruction system and new methods for analyzing in three dimensions the location and ultrastructure of neuronal components, such as synapses, which are distributed non-randomly throughout the brain. DESIGN AND MEASUREMENTS: Volumes are reconstructed by defining transformations that align the entire area of adjacent sections. Whole-field alignment requires rotation, translation, skew, scaling, and second-order nonlinear deformations. Such transformations are implemented by a linear combination of bivariate polynomials. Computer software for generating transformations based on user input is described. Stereological techniques for assessing structural distributions in reconstructed volumes are the unbiased bricking, disector, unbiased ratio, and per-length counting techniques. A new general method, the fractional counter, is also described. This unbiased technique relies on the counting of fractions of objects contained in a test volume. A volume of brain tissue from stratum radiatum of hippocampal area CA1 is reconstructed and analyzed for synaptic density to demonstrate and compare the techniques. RESULTS AND CONCLUSIONS: Reconstruction makes practicable volume-oriented analysis of ultrastructure using such techniques as the unbiased bricking and fractional counter methods. These analysis methods are less sensitive to the section-to-section variations in counts and section thickness, factors that contribute to the inaccuracy of other stereological methods. In addition, volume reconstruction facilitates visualization and modeling of structures and analysis of three-dimensional relationships such as synaptic connectivity.
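
    The whole-field alignment step can be sketched as a least-squares fit of a second-order bivariate polynomial mapping: each output coordinate is a linear combination of the terms 1, x, y, xy, x^2, y^2 fitted to corresponding fiducial points on adjacent sections. The point lists below are invented for illustration.

```python
# Least-squares fit of a second-order bivariate polynomial transform mapping
# fiducial points on one section onto the adjacent section.  Each output
# coordinate is a linear combination of 1, x, y, xy, x^2, y^2.  The point
# lists are invented for illustration.
import numpy as np

def poly2_terms(pts):
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_poly2(src, dst):
    """Return the (6, 2) coefficient matrix mapping src points onto dst points."""
    coeffs, *_ = np.linalg.lstsq(poly2_terms(src), dst, rcond=None)
    return coeffs

def apply_poly2(coeffs, pts):
    return poly2_terms(pts) @ coeffs

# Corresponding fiducials on two adjacent serial sections (toy data: the
# second section is a slightly rotated, sheared and shifted copy).
src = np.array([[10, 12], [80, 15], [75, 90], [12, 85], [45, 50], [30, 70]], float)
dst = src @ np.array([[0.98, 0.05], [-0.04, 1.01]]) + np.array([3.0, -2.0])

coeffs = fit_poly2(src, dst)
print("max alignment residual:", np.abs(apply_poly2(coeffs, src) - dst).max())
```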

  18. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  19. Influence of Signal Intensity Non-Uniformity on Brain Volumetry Using an Atlas-Based Method

    PubMed Central

    Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Objective Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Materials and Methods Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. Results A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. Conclusion The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials. PMID:22778560

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Guang, E-mail: lig2@mskcc.org; Schmidtlein, C. Ross; Humm, John L.

    Purpose: To assess and account for the impact of respiratory motion on the variability of activity and volume determination of liver tumor in positron emission tomography (PET) through a comparison between free-breathing (FB) and respiration-suspended (RS) PET images. Methods: As part of a PET/computed tomography (CT) guided percutaneous liver ablation procedure performed on a PET/CT scanner, a patient's breathing is suspended on a ventilator, allowing the acquisition of near-motionless PET and CT reference images of the liver. In this study, baseline RS and FB PET/CT images of 20 patients undergoing thermal ablation were acquired. The RS PET provides a near-motionless reference in a human study, and thereby allows a quantitative evaluation of the effect of respiratory motion on PET images obtained under FB conditions. Two methods were applied to calculate tumor activity and volume: (1) threshold-based segmentation (TBS), estimating the total lesion glycolysis (TLG) and the segmented volume, and (2) histogram-based estimation (HBE), yielding the background-subtracted lesion (BSL) activity and associated volume. The TBS method employs 50% of the maximum standardized uptake value (SUVmax) as the threshold for tumors with SUVmax ≥ 2× SUVliver-bkg, and tumor activity above this threshold yields TLG50%. The HBE method determines the local PET background based on a Gaussian fit of the low-SUV peak in a SUV-volume histogram, which is generated within a user-defined and optimized volume of interest containing both local background and lesion uptakes. Voxels with PET intensity above the fitted background were considered to have originated from the tumor and were used to calculate the BSL activity and its associated lesion volume. Results: Respiratory motion caused SUVmax to decrease from RS to FB by −15% ± 11% (p = 0.01). Using the TBS method, there was also a decrease in SUVmean (−18% ± 9%, p = 0.01), but an increase in TLG50% (18% ± 36%) and in the segmented volume (47% ± 52%, p = 0.01) from RS to FB PET images. The background uptake in normal liver was stable, 1% ± 9%. In contrast, using the HBE method, the differences in both BSL activity and BSL volume from RS to FB were −8% ± 10% (p = 0.005) and 0% ± 16% (p = 0.94), respectively. Conclusions: This is the first time that almost motion-free PET images of the human liver were acquired and compared to free-breathing PET. The BSL method's results are more consistent, for the calculation of both tumor activity and volume in RS and FB PET images, than those using conventional TBS. This suggests that the BSL method might be less sensitive to motion blurring and provides an improved estimation of tumor activity and volume in the presence of respiratory motion.
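
    The two segmentation strategies can be contrasted on synthetic uptake values, as in the sketch below: a 50%-of-SUVmax threshold for the TBS side, and a Gaussian fit to the low-SUV histogram peak to estimate and subtract the background in the spirit of the HBE/BSL idea. The voxel size, histogram binning, fit window, and the mu + 2*sigma cut-off are assumptions, not the study's exact procedure.

```python
# Toy contrast of the two approaches on synthetic uptake values: a 50%-of-
# SUVmax threshold (TBS) versus subtraction of a background level estimated
# from a Gaussian fit to the low-SUV histogram peak (spirit of HBE/BSL).
# Voxel size, binning, fit window and cut-off are assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
voxel_ml = 0.064                                    # 4 mm isotropic voxels
bkg = rng.normal(2.0, 0.3, 20_000)                  # liver background SUVs
lesion = rng.normal(7.0, 1.0, 1_500)                # lesion SUVs
suv = np.concatenate([bkg, lesion])

# Threshold-based segmentation at 50% of SUVmax.
thr = 0.5 * suv.max()
tbs_mask = suv >= thr
tlg_50 = suv[tbs_mask].mean() * tbs_mask.sum() * voxel_ml

# Histogram-based estimation: Gaussian fit of the low-SUV peak.
hist, edges = np.histogram(suv, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

lo = centers < 4.0                                   # assumed fit window
(amp, mu, sig), _ = curve_fit(gauss, centers[lo], hist[lo], p0=[hist.max(), 2.0, 0.5])
bsl_mask = suv > mu + 2 * sig                        # above fitted background
bsl_activity = np.sum(suv[bsl_mask] - mu) * voxel_ml

print(f"TBS:  volume {tbs_mask.sum() * voxel_ml:.1f} mL, TLG50 {tlg_50:.1f}")
print(f"HBE:  volume {bsl_mask.sum() * voxel_ml:.1f} mL, BSL  {bsl_activity:.1f}")
```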

  1. Weapon System Costing Methodology for Aircraft Airframes and Basic Structures. Volume I. Technical Volume

    DTIC Science & Technology

    1975-06-01

    the Air Force Flight Dynamics Laboratory for use in the conceptual and preliminary design phases of weapon system development. The methods are a...trade study method provides an iterative capability stemming from a direct interface with design synthesis programs. A detailed cost data base and...system for data expansion is provided. The methods are designed for ease in changing cost estimating relationships and estimating coefficients

  2. A 3D Hermite-based multiscale local active contour method with elliptical shape constraints for segmentation of cardiac MR and CT volumes.

    PubMed

    Barba-J, Leiner; Escalante-Ramírez, Boris; Vallejo Venegas, Enrique; Arámbula Cosío, Fernando

    2018-05-01

    Analysis of cardiac images is a fundamental task to diagnose heart problems. Left ventricle (LV) is one of the most important heart structures used for cardiac evaluation. In this work, we propose a novel 3D hierarchical multiscale segmentation method based on a local active contour (AC) model and the Hermite transform (HT) for LV analysis in cardiac magnetic resonance (MR) and computed tomography (CT) volumes in short axis view. Features such as directional edges, texture, and intensities are analyzed using the multiscale HT space. A local AC model is configured using the HT coefficients and geometrical constraints. The endocardial and epicardial boundaries are used for evaluation. Segmentation of the endocardium is controlled using elliptical shape constraints. The final endocardial shape is used to define the geometrical constraints for segmentation of the epicardium. We follow the assumption that epicardial and endocardial shapes are similar in volumes with short axis view. An initialization scheme based on a fuzzy C-means algorithm and mathematical morphology was designed. The algorithm performance was evaluated using cardiac MR and CT volumes in short axis view demonstrating the feasibility of the proposed method.

  3. Maximum speed limits. Volume 4, An implementation method for setting a speed limit based on the 85th percentile speed

    DOT National Transportation Integrated Search

    1970-10-01

    This volume contains an explanation of a method for setting a speed limit which was developed as a part of the project conducted by the Institute for Research in Public Safety under Contract No. FH-11-7275, "A Study for the Selection of Maximum Speed...

  4. Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus

    PubMed Central

    Giacomini, Guilherme; Pavan, Ana Luiza Menegatti; Altemani, João Mauricio Carrasco; Duarte, Sergio Barbosa; Fortaleza, Carlos Magno Castelo Branco; Miranda, José Ricardo de Arruda

    2018-01-01

    Volume measurements of maxillary sinus may be useful to identify diseases affecting paranasal sinuses. However, literature shows a lack of consensus in studies measuring the volume. This may be attributable to different computed tomography data acquisition techniques, segmentation methods, focuses of investigation, among other reasons. Furthermore, methods for volumetrically quantifying the maxillary sinus are commonly manual or semiautomated, which require substantial user expertise and are time-consuming. The purpose of the present study was to develop an automated tool for quantifying the total and air-free volume of the maxillary sinus based on computed tomography images. The quantification tool seeks to standardize maxillary sinus volume measurements, thus allowing better comparisons and determinations of factors that influence maxillary sinus size. The automated tool utilized image processing techniques (watershed, threshold, and morphological operators). The maxillary sinus volume was quantified in 30 patients. To evaluate the accuracy of the automated tool, the results were compared with manual segmentation that was performed by an experienced radiologist using a standard procedure. The mean percent differences between the automated and manual methods were 7.19% ± 5.83% and 6.93% ± 4.29% for total and air-free maxillary sinus volume, respectively. Linear regression and Bland-Altman statistics showed good agreement and low dispersion between both methods. The present automated tool for maxillary sinus volume assessment was rapid, reliable, robust, accurate, and reproducible and may be applied in clinical practice. The tool may be used to standardize measurements of maxillary volume. Such standardization is extremely important for allowing comparisons between studies, providing a better understanding of the role of the maxillary sinus, and determining the factors that influence maxillary sinus size under normal and pathological conditions. PMID:29304130
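
    A rough sketch of the threshold-plus-morphology part of such a pipeline is shown below: air voxels are thresholded, cleaned with a morphological opening, and the largest connected component is taken as the sinus air space, whose voxel count gives the air volume. The HU threshold, structuring element, and toy data are assumptions, and the watershed step needed to recover the total (air plus mucosa) sinus volume is omitted.

```python
# Sketch of the threshold + morphology part of such a pipeline: extract the
# air-filled sinus cavity from a CT sub-volume and report its volume.  The
# HU threshold, structuring element and toy data are assumptions; the
# watershed step for the total (air plus mucosa) volume is omitted.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
voxel_mm3 = 0.4 ** 3

# Toy CT sub-volume: soft tissue (~40 HU) containing an air cavity (~-1000 HU).
ct = rng.normal(40, 15, (60, 60, 60))
ct[15:45, 15:45, 15:45] = rng.normal(-1000, 30, (30, 30, 30))

# Threshold for air, clean up with a morphological opening, then keep the
# largest connected component as the sinus air space.
air = ndimage.binary_opening(ct < -500, structure=np.ones((3, 3, 3)))
labels, n = ndimage.label(air)
sizes = ndimage.sum(air, labels, index=range(1, n + 1))
sinus_air = labels == (1 + int(np.argmax(sizes)))

print(f"air volume of sinus: {sinus_air.sum() * voxel_mm3 / 1000.0:.2f} cm^3")
```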

  5. Dispersed and piled woody residues volumes in coastal Douglas-fir cutblocks determined using high-resolution imagery from a UAV and from ground-based surveys.

    NASA Astrophysics Data System (ADS)

    Trofymow, J. A.; Gougeon, F.

    2015-12-01

    After forest harvest, significant amounts of woody residues are left dispersed on site, and some are subsequently piled and burned. Quantification of residues is required for estimating C budgets, billable waste, harvest efficiency, bioenergy potential and smoke emissions. Trofymow et al. (2014, CJFR) compared remote sensing methods to ground-based waste and residue survey (WRS) methods for residue piles in 4 cutblocks in the Oyster River (OR) area of coastal BC. Compared to geospatial methods using 15 cm orthophotos and LiDAR acquired in 2011 by helicopter, the WRS method underestimated pile wood by 30% to 50%, while a USFS volume method overestimated pile wood by 50% if site-specific packing ratios were not used. A geospatial method was developed in PCI Geomatica to analyze 2-bit images of logs >15 cm in diameter to determine dispersed wood residues in OR and compare to WRS methods. Across blocks, geospatial and WRS method wood volumes were correlated (R2 = 0.69); however, volumes were 2.5 times larger for the geospatial vs the WRS method. Methods for dispersed residues could not be properly compared because individual WRS plots were not georeferenced, only 12 plots were sampled in total, and low-resolution images poorly resolved logs. Thus, a new study in 2 cutblocks in the Northwest Bay (NWB) area acquired 2 cm resolution RGB air photography in 2014-15 using an Aeryon Sky Ranger UAV prior to and after burn pile construction. A total of 57 dispersed WRS plots and 24 WRS pile or accumulation plots were georeferenced and measured. Stereo-pairs were used to generate point clouds for pile bulk volumes. Images processed to 8-bit grey scale are being analyzed with a revised PCI method that better accounts for log overlaps. WRS methods depend on a good sample of plots and accurate determination of stratum (dispersed, roadside, piles, accumulations) areas. Analysis of the NWB blocks shows that WRS field methods for stratum area differ by 5-20% from that determined using orthophotos. Plot-level wood volumes in each plot and stratum determined by geospatial and WRS methods will be compared. While the geospatial method for residue determination is a 100% sample, compared to the sample-based WRS method, difficulties in resolving logs in the images may mean that the best method for determining residues requires a combination of geospatial and ground-plot measurements.

  6. A Biomechanical Model for Lung Fibrosis in Proton Beam Therapy

    NASA Astrophysics Data System (ADS)

    King, David J. S.

    The physics of protons makes them well-suited to conformal radiotherapy due to the well-known Bragg peak effect. Uncertainties in a proton's stopping power can cause a small amount of dose to overflow into an organ at risk (OAR). Previous models for calculating normal tissue complication probabilities (NTCPs) relied on the equivalent uniform dose (EUD) model, in which the organ was split into 1/3, 2/3 or whole-organ irradiation. However, the problem of dealing with volumes smaller than 1/3 of the total volume renders this EUD-based approach no longer applicable. In this work, the case for an experimental data-based replacement at low volumes is investigated. Lung fibrosis is investigated as an NTCP effect typically arising from dose overflow from tumour irradiation at the spinal base. Considering a 3D geometrical model of the lungs, irradiations are modelled with variable parameters of dose overflow. To calculate NTCPs without the EUD model, experimental data from the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) reports are used. Additional side projects are also investigated, introduced and explained at various points. A typical radiotherapy course of 30 fractions of 2 Gy is simulated. A range of target volume geometries and irradiation types is investigated. Investigations with X-rays found the majority of the data-point ratios (ratios of EUD values found from the calculation-based and data-based methods) within 20% of unity, showing relatively close agreement. The ratios did not systematically prefer one particular type of predictive method. No Vx metric was found to consistently outperform another. In certain cases there is good agreement and in other cases there is not, as can be found predicted in the literature. The overall results lead to the conclusion that there is no reason to discount the use of the data-based predictive method, particularly as a low-volume replacement predictive method.
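
    For reference, the Vx metrics mentioned above are simple dose-volume quantities: Vx is the fraction of the organ volume receiving at least x Gy. The sketch below computes a few of them from a synthetic voxel dose array.

```python
# Vx dose-volume metrics: the percentage of an organ's volume receiving at
# least x Gy, computed here from a synthetic per-voxel dose array.
import numpy as np

rng = np.random.default_rng(7)
lung_dose = np.abs(rng.normal(5.0, 6.0, size=200_000))   # toy per-voxel dose in Gy

def v_x(dose, x):
    """Percentage of the organ volume receiving at least x Gy."""
    return 100.0 * np.mean(dose >= x)

for x in (5, 13, 20, 30):
    print(f"V{x} = {v_x(lung_dose, x):.1f}%")
```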

  7. Efficiency and precision for estimating timber and non-timber attributes using Landsat-based stratification methods in two-phase sampling in northwest California

    Treesearch

    Antti T. Kaartinen; Jeremy S. Fried; Paul A. Dunham

    2002-01-01

    Three Landsat TM-based GIS layers were evaluated as alternatives to conventional, photointerpretation-based stratification of FIA field plots. Estimates for timberland area, timber volume, and volume of down wood were calculated for California's North Coast Survey Unit of 2.5 million hectares. The estimates were compared on the basis of standard errors,...

  8. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    PubMed

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning), and discusses the pros and cons of several different types of decision tree-based regression methods. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p), which are often used in physiologically based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. The decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and from flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
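
    A hedged sketch of the modelling idea is given below: a bagged ensemble of decision-tree regressors (scikit-learn's default base learner for bagging is a decision tree) predicting log10 Vss from molecular descriptors plus a predicted adipose Kt:p value. The data, descriptor names, and hyperparameters are invented; the paper's curated dataset and feature-selection protocol are not reproduced.

```python
# Hedged sketch: bagged decision-tree regression (the default base learner of
# scikit-learn's BaggingRegressor is a decision tree) predicting log10 Vss
# from molecular descriptors plus a predicted adipose Kt:p value.  All data,
# descriptor names and hyperparameters are invented.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n = 300
descriptors = rng.normal(size=(n, 5))            # e.g. logP, PSA, MW, HBD, HBA
kt_p_adipose = rng.lognormal(0.0, 0.5, size=n)   # predicted adipose Kt:p
X = np.column_stack([descriptors, np.log10(kt_p_adipose)])

# Synthetic target: log10 Vss loosely driven by one descriptor and Kt:p.
log_vss = 0.4 * descriptors[:, 0] + 0.5 * np.log10(kt_p_adipose) + rng.normal(0, 0.2, n)

model = BaggingRegressor(n_estimators=100, random_state=0)   # bagged decision trees
scores = cross_val_score(model, X, log_vss, cv=5, scoring="r2")
print("cross-validated R^2:", round(float(scores.mean()), 2))
```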

  9. 3D geometric split-merge segmentation of brain MRI datasets.

    PubMed

    Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis

    2014-05-01

    In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Nanoindentation study on the characteristic of shear transformation zone in a Pd-based bulk metallic glass during serrated flow

    NASA Astrophysics Data System (ADS)

    Liao, G. K.; Long, Z. L.; Zhao, M. S. Z.; Peng, L.; Chai, W.; Ping, Z. H.

    2018-04-01

    This paper presents research on the evolution of the shear transformation zone (STZ) in a Pd-based bulk metallic glass (BMG) during serrated flow under nanoindentation. A novel method of estimating the STZ volume through statistical analysis of the serrated flow behavior is proposed for the first time. Based on the proposed method, the STZ volume of the studied BMG at various peak loads has been systematically investigated. The results indicate that the measured STZ volumes are in good agreement with those documented in the literature, and that the STZ size exhibits an increasing trend during indentation. Moreover, the correlation between the serrated flow dynamics and STZ activation has also been evaluated. It is found that STZ activation can promote the formation of a self-organized critical (SOC) state during serrated flow.

  11. [Radiotherapy volume delineation based on (18F)-fluorodeoxyglucose positron emission tomography for locally advanced or inoperable oesophageal cancer].

    PubMed

    Encaoua, J; Abgral, R; Leleu, C; El Kabbaj, O; Caradec, P; Bourhis, D; Pradier, O; Schick, U

    2017-06-01

    To study the impact on radiotherapy planning of an automatically segmented target volume delineation based on (18F)-fluorodeoxy-D-glucose (FDG) hybrid positron emission tomography-computed tomography (PET-CT), compared with manual delineation based on computed tomography (CT), in oesophageal carcinoma patients. Fifty-eight patients diagnosed with oesophageal cancer between September 2009 and November 2014 were included. The majority had squamous cell carcinoma (84.5%) and advanced-stage disease (37.9% were stage IIIA), and 44.8% had a middle oesophageal lesion. Gross tumour volumes were retrospectively defined either manually on CT or automatically on coregistered PET/CT images using three different threshold methods: a standardized uptake value (SUV) of 2.5, 40% of maximum intensity, and the signal-to-background ratio. Target volumes were compared in length, volume and using the index of conformality. Radiotherapy plans to doses of 50 Gy and 66 Gy using intensity-modulated radiotherapy were generated and compared for both data sets. Planning target volume (PTV) coverage and doses delivered to organs at risk (heart, lung and spinal cord) were compared. The gross tumour volume delineated manually on CT was significantly longer than that delineated automatically with the signal-to-background ratio (6.4 cm versus 5.3 cm; P<0.008). Doses to the lungs (V20, Dmean), heart (V40), and spinal cord (Dmax) were significantly lower on plans using the signal-to-background ratio PTV (PTV-SBR). The PTV-SBR coverage was statistically better than the PTV-CT coverage for both plans (50 Gy: P<0.0004; 66 Gy: P<0.0006). The automatic PET segmentation algorithm based on the signal-to-background ratio method for the delineation of oesophageal tumours is interesting, and results in better target volume coverage and decreased dose to organs at risk. This may allow dose escalation up to 66 Gy to the gross tumour volume. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  12. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures, which have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has three advantages: (1) it avoids the system calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and the scatter-induced overestimation or underestimation of breast density; (2) it obtains the system-specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses was used to evaluate the volumetric breast density measurement with the proposed method. The experimental results show that the method significantly improves the accuracy of estimating breast density.

  13. Volume calculation of CT lung lesions based on Halton low-discrepancy sequences

    NASA Astrophysics Data System (ADS)

    Li, Shusheng; Wang, Liansheng; Li, Shuo

    2017-03-01

    Volume calculation from Computed Tomography (CT) lung lesion data is a significant parameter for clinical diagnosis. Volume is widely used to assess the severity of lung nodules and track their progression; however, previous methods do not achieve the accuracy and efficiency required for clinical use. Volume calculation remains a challenging task because lesions may be tightly attached to the lung wall and exhibit inhomogeneous background noise and large variations in size and shape. In this paper, we employ Halton low-discrepancy sequences to calculate the volume of lung lesions. The proposed method directly computes the volume without three-dimensional (3D) model reconstruction or surface triangulation, which significantly improves efficiency and reduces complexity. The main steps of the proposed method are: (1) generate a number of sample points in each slice using Halton low-discrepancy sequences and calculate the lesion area of each slice from the proportion of points that fall inside the lesion; (2) obtain the volume by integrating the areas in the sagittal direction. To evaluate the proposed method, experiments were conducted on data sets with lung lesions of different sizes. Owing to the uniform distribution of the sample points, the proposed method achieves more accurate results than other methods, demonstrating its robustness and accuracy for the volume calculation of CT lung lesions. In addition, the proposed method is easy to follow and can be extensively applied to other applications, e.g., volume calculation of liver tumors, atrial wall aneurysms, etc.
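
    A minimal sketch of the quasi-Monte Carlo idea follows: for each slice, Halton points scattered over the lesion's bounding box estimate the lesion area as the hit fraction times the bounding-box area, and the per-slice areas are then integrated with the slice spacing. The spherical toy lesion, point count, and spacings are assumptions.

```python
# Quasi-Monte Carlo volume estimate: per-slice Halton points give the lesion
# area as hit fraction times bounding-box area, and areas are integrated
# across slices.  The spherical toy lesion, point count and spacings are
# assumptions.
import numpy as np
from scipy.stats import qmc

# Toy segmented lesion: a sphere of radius 9 voxels in a 40x64x64 mask.
zz, yy, xx = np.mgrid[0:40, 0:64, 0:64]
mask = (zz - 20) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 9 ** 2

pixel_mm, slice_mm, n_points = 0.7, 2.5, 2048
sampler = qmc.Halton(d=2, scramble=False)            # low-discrepancy 2D points

volume_mm3 = 0.0
for z in range(mask.shape[0]):
    sl = mask[z]
    if not sl.any():
        continue
    ys, xs = np.nonzero(sl)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    pts = sampler.random(n_points)                   # points in [0, 1)^2
    py = (y0 + pts[:, 0] * (y1 - y0)).astype(int)
    px = (x0 + pts[:, 1] * (x1 - x0)).astype(int)
    hit_fraction = sl[py, px].mean()
    area_mm2 = hit_fraction * (y1 - y0) * (x1 - x0) * pixel_mm ** 2
    volume_mm3 += area_mm2 * slice_mm

true_mm3 = mask.sum() * pixel_mm ** 2 * slice_mm
print(f"QMC volume {volume_mm3:.0f} mm^3  vs voxel-count volume {true_mm3:.0f} mm^3")
```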

  14. Chemical Shift MR Imaging Methods for the Quantification of Transcatheter Lipiodol Delivery to the Liver: Preclinical Feasibility Studies in a Rodent Model

    PubMed Central

    Yin, Xiaoming; Guo, Yang; Li, Weiguo; Huo, Eugene; Zhang, Zhuoli; Nicolai, Jodi; Kleps, Robert A.; Hernando, Diego; Katsaggelos, Aggelos K.; Omary, Reed A.

    2012-01-01

    Purpose: To demonstrate the feasibility of using chemical shift magnetic resonance (MR) imaging fat-water separation methods for quantitative estimation of transcatheter lipiodol delivery to liver tissues. Materials and Methods: Studies were performed in accordance with institutional Animal Care and Use Committee guidelines. Proton nuclear MR spectroscopy was first performed to identify lipiodol spectral peaks and relative amplitudes. Next, phantoms were constructed with increasing lipiodol-water volume fractions. A multiecho chemical shift–based fat-water separation method was used to quantify lipiodol concentration within each phantom. Six rats served as controls; 18 rats underwent catheterization with digital subtraction angiography guidance for intraportal infusion of a 15%, 30%, or 50% by volume lipiodol-saline mixture. MR imaging measurements were used to quantify lipiodol delivery to each rat liver. Lipiodol concentration maps were reconstructed by using both single-peak and multipeak chemical shift models. Intraclass and Spearman correlation coefficients were calculated for statistical comparison of MR imaging–based lipiodol concentration and volume measurements to reference standards (known lipiodol phantom compositions and the infused lipiodol dose during rat studies). Results: Both single-peak and multipeak measurements were well correlated to phantom lipiodol concentrations (r2 > 0.99). Lipiodol volume measurements were progressively and significantly higher when comparing between animals receiving different doses (P < .05 for each comparison). MR imaging–based lipiodol volume measurements strongly correlated with infused dose (intraclass correlation coefficients > 0.93, P < .001) with both single- and multipeak approaches. Conclusion: Chemical shift MR imaging fat-water separation methods can be used for quantitative measurements of lipiodol delivery to liver tissues. © RSNA, 2012 PMID:22623693

  15. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a large number of numerical drop profiles on the inclined surface, with different inclination angles, drop volumes, and contact angles, are generated using the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After a large amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error of the inclined plane method to below a given value, even for different types of liquids.
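
    The core ingredients can be sketched as follows: an algebraic least-squares fit of the conic Ax^2 + Bxy + Cy^2 + Dx + Ey = 1 to the drop-profile points, followed by the contact angle computed from the fitted tangent at the contact line. The synthetic profile, noise level, and the horizontal-baseline simplification are illustrative; the paper's inclined-plane geometry adds a baseline tilt.

```python
# Sketch of the core ingredients: algebraic least-squares fit of the conic
# A x^2 + B xy + C y^2 + D x + E y = 1 to drop-profile points, then the
# contact angle from the fitted tangent at the (right) contact point.  The
# synthetic profile, noise level and horizontal baseline are illustrative.
import numpy as np

# Synthetic sessile-drop profile: the part of an ellipse above a baseline y0.
a_true, b_true, y0 = 2.0, 1.0, -0.5
t = np.linspace(0.0, 2.0 * np.pi, 400)
x, y = a_true * np.cos(t), b_true * np.sin(t)
x, y = x[y >= y0], y[y >= y0]
rng = np.random.default_rng(9)
x = x + rng.normal(0, 0.003, x.size)
y = y + rng.normal(0, 0.003, y.size)

# Conic coefficients are linear in the data, so ordinary least squares suffices.
M = np.column_stack([x**2, x * y, y**2, x, y])
A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]

# Right contact point: intersect the fitted conic with the baseline y = y0.
xc = np.max(np.roots([A, B * y0 + D, C * y0**2 + E * y0 - 1.0]).real)

# Tangent direction from the implicit gradient, oriented upward along the drop.
Fx, Fy = 2 * A * xc + B * y0 + D, B * xc + 2 * C * y0 + E
tangent = np.array([Fy, -Fx])
if tangent[1] < 0:
    tangent = -tangent
baseline_into_drop = np.array([-1.0, 0.0])     # from the right contact point inward
cos_theta = tangent @ baseline_into_drop / np.linalg.norm(tangent)
theta = np.degrees(np.arccos(cos_theta))
print(f"right-side contact angle: {theta:.1f} deg (analytic value approx. 139.1)")
```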

  16. Membranes with artificial free-volume for biofuel production

    PubMed Central

    Petzetakis, Nikos; Doherty, Cara M.; Thornton, Aaron W.; Chen, X. Chelsea; Cotanda, Pepa; Hill, Anita J.; Balsara, Nitash P.

    2015-01-01

    Free-volume of polymers governs transport of penetrants through polymeric films. Control over free-volume is thus important for the development of better membranes for a wide variety of applications such as gas separations, pharmaceutical purifications and energy storage. To date, methodologies used to create materials with different amounts of free-volume are based primarily on chemical synthesis of new polymers. Here we report a simple methodology for generating free-volume based on the self-assembly of polyethylene-b-polydimethylsiloxane-b-polyethylene triblock copolymers. We have used this method to fabricate a series of membranes with identical compositions but with different amounts of free-volume. We use the term artificial free-volume to refer to the additional free-volume created by self-assembly. The effect of artificial free-volume on selective transport through the membranes was tested using butanol/water and ethanol/water mixtures due to their importance in biofuel production. We found that the introduction of artificial free-volume improves both alcohol permeability and selectivity. PMID:26104672

  17. Membranes with artificial free-volume for biofuel production

    NASA Astrophysics Data System (ADS)

    Petzetakis, Nikos; Doherty, Cara M.; Thornton, Aaron W.; Chen, X. Chelsea; Cotanda, Pepa; Hill, Anita J.; Balsara, Nitash P.

    2015-06-01

    Free-volume of polymers governs transport of penetrants through polymeric films. Control over free-volume is thus important for the development of better membranes for a wide variety of applications such as gas separations, pharmaceutical purifications and energy storage. To date, methodologies used to create materials with different amounts of free-volume are based primarily on chemical synthesis of new polymers. Here we report a simple methodology for generating free-volume based on the self-assembly of polyethylene-b-polydimethylsiloxane-b-polyethylene triblock copolymers. We have used this method to fabricate a series of membranes with identical compositions but with different amounts of free-volume. We use the term artificial free-volume to refer to the additional free-volume created by self-assembly. The effect of artificial free-volume on selective transport through the membranes was tested using butanol/water and ethanol/water mixtures due to their importance in biofuel production. We found that the introduction of artificial free-volume improves both alcohol permeability and selectivity.

  18. Membranes with artificial free-volume for biofuel production

    DOE PAGES

    Petzetakis, Nikos; Doherty, Cara M.; Thornton, Aaron W.; ...

    2015-06-24

    Free-volume of polymers governs transport of penetrants through polymeric films. Control over free-volume is thus important for the development of better membranes for a wide variety of applications such as gas separations, pharmaceutical purifications and energy storage. To date, methodologies used to create materials with different amounts of free-volume are based primarily on chemical synthesis of new polymers. Here we report a simple methodology for generating free-volume based on the self-assembly of polyethylene-b-polydimethylsiloxane-b-polyethylene triblock copolymers. Here, we have used this method to fabricate a series of membranes with identical compositions but with different amounts of free-volume. We use the term artificial free-volume to refer to the additional free-volume created by self-assembly. The effect of artificial free-volume on selective transport through the membranes was tested using butanol/water and ethanol/water mixtures due to their importance in biofuel production. Moreover, we found that the introduction of artificial free-volume improves both alcohol permeability and selectivity.

  19. Measurement of the volume of the pedicled TRAM flap in immediate breast reconstruction.

    PubMed

    Chang, K P; Lin, S D; Hou, M F; Lee, S S; Tsai, C C

    2001-12-01

    The transverse rectus abdominis musculocutaneous (TRAM) flap is now accepted as the standard for breast reconstruction, but achieving symmetrical breast reconstruction is still a challenge. A precise estimate of the volume of the flap is necessary to reconstruct a symmetrical and aesthetically pleasing breast. Many methods have been developed to overcome this problem, but they have not been suitable for the pedicled TRAM flap. By using a self-made device based on Archimedes' principle, the authors can accurately calculate the volume of the pedicled TRAM flap and reliably predict the breast volume intraoperatively. The "procedure" is based on a self-made box into which the pedicled TRAM flap is placed. Warm saline is added to the box and the flap is then removed. Flap volume is calculated easily by determining the difference between the preestimated volume of the box and the volume of the residual water. From February to May 2000, this method was used on 28 patients to predict breast volume for breast reconstruction. This study revealed that the difference of the maximal chest circumferences (the index of the breast volume) demonstrates a positive correlation with the difference of the volumes and weights between the mastectomy specimen and the net TRAM flap. However, the correlation with the difference in maximal chest circumference is stronger for volume (r = 0.677) than for weight (r = 0.618). These data reveal that the reconstructed breast's volume has a closer relationship with the volume of the net pedicled TRAM flap, rather than with its weight.
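
    The displacement arithmetic described above is simple enough to express directly; the sketch below (with hypothetical volumes) assumes only the box volume and the residual saline volume as inputs.

```python
def flap_volume_ml(box_volume_ml: float, residual_saline_ml: float) -> float:
    """Volume of the pedicled TRAM flap by water displacement (Archimedes'
    principle): the flap occupies whatever part of the box the saline could
    not fill, i.e. box volume minus residual saline volume."""
    return box_volume_ml - residual_saline_ml


# Hypothetical numbers for illustration only (mL)
print(flap_volume_ml(box_volume_ml=2000.0, residual_saline_ml=1450.0))  # 550.0
```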

  20. NASA Safety Manual. Volume 3: System Safety

    NASA Technical Reports Server (NTRS)

    1970-01-01

    This Volume 3 of the NASA Safety Manual sets forth the basic elements and techniques for managing a system safety program and the technical methods recommended for use in developing a risk evaluation program that is oriented to the identification of hazards in aerospace hardware systems and the development of residual risk management information for the program manager that is based on the hazards identified. The methods and techniques described in this volume are in consonance with the requirements set forth in NHB 1700.1 (VI), Chapter 3. This volume and future volumes of the NASA Safety Manual shall not be rewritten, reprinted, or reproduced in any manner. Installation implementing procedures, if necessary, shall be inserted as page supplements in accordance with the provisions of Appendix A. No portion of this volume or future volumes of the NASA Safety Manual shall be invoked in contracts.

  1. Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids II: Extension to Two Dimensional Scalar Equation

    NASA Technical Reports Server (NTRS)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The framework for constructing a high-order, conservative Spectral (Finite) Volume (SV) method is presented for two-dimensional scalar hyperbolic conservation laws on unstructured triangular grids. Each triangular grid cell forms a spectral volume (SV), and the SV is further subdivided into polygonal control volumes (CVs) to support high-order data reconstructions. Cell-averaged solutions from these CVs are used to reconstruct a high order polynomial approximation in the SV. Each CV is then updated independently with a Godunov-type finite volume method and a high-order Runge-Kutta time integration scheme. A universal reconstruction is obtained by partitioning all SVs in a geometrically similar manner. The convergence of the SV method is shown to depend on how an SV is partitioned. A criterion based on the Lebesgue constant has been developed and used successfully to determine the quality of various partitions. Symmetric, stable, and convergent linear, quadratic, and cubic SVs have been obtained, and many different types of partitions have been evaluated. The SV method is tested for both linear and non-linear model problems with and without discontinuities.

  2. [Modeling and analysis of volume conduction based on field-circuit coupling].

    PubMed

    Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming

    2012-08-01

    Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of some physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method, and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross section area, electrode distance and circuit parameters on the performance of volume conduction system were obtained, which provided a basis for optimized design of energy transfer efficiency.

  3. Finding the Density of Objects without Measuring Mass and Volume

    ERIC Educational Resources Information Center

    Mumba, Frackson; Tsige, Mesfin

    2007-01-01

    A simple method based on the moment of forces and Archimedes' principle is described for finding density without measuring the mass and volume of an object. The method involves balancing two unknown objects of masses M[subscript 1] and M[subscript 2] on each side of a pivot on a metre rule and measuring their corresponding moment arms. The object…

  4. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, Gerald D.

    1998-01-01

    Microwave injection methods for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources.

  5. Fully automated atlas-based method for prescribing 3D PRESS MR spectroscopic imaging: Toward robust and reproducible metabolite measurements in human brain.

    PubMed

    Bian, Wei; Li, Yan; Crane, Jason C; Nelson, Sarah J

    2018-02-01

    To implement a fully automated atlas-based method for prescribing 3D PRESS MR spectroscopic imaging (MRSI). The PRESS selected volume and outer-volume suppression bands were predefined on the MNI152 standard template image. The template image was aligned to the subject T1-weighted image during a scan, and the resulting transformation was then applied to the predefined prescription. To evaluate the method, H-1 MRSI data were obtained in repeat scan sessions from 20 healthy volunteers. In each session, datasets were acquired twice without repositioning. The overlap ratio of the prescribed volume in the two sessions was calculated and the reproducibility of inter- and intrasession metabolite peak height and area ratios was measured by the coefficient of variation (CoV). The CoVs from intra- and intersession were compared by a paired t-test. The average overlap ratio of the automatically prescribed selection volumes between two sessions was 97.8%. The average voxel-based intersession CoVs were less than 0.124 and 0.163 for peak height and area ratios, respectively. Paired t-test showed no significant difference between the intra- and intersession CoVs. The proposed method provides a time-efficient way to prescribe 3D PRESS MRSI with reproducible imaging positioning and metabolite measurements. Magn Reson Med 79:636-642, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
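
    A minimal sketch of the core geometric step, assuming the registration has already produced a template-to-subject affine: predefined PRESS-box corner coordinates (hypothetical values below) are mapped into subject space by applying the 4 x 4 transform. The real pipeline also handles oblique prescription angles and the outer-volume suppression bands, which are not shown.

```python
import numpy as np

def transform_points(affine: np.ndarray, points_mm: np.ndarray) -> np.ndarray:
    """Map template-space coordinates (N x 3, mm) into subject space with a
    4 x 4 affine obtained from template-to-subject registration."""
    homogeneous = np.hstack([points_mm, np.ones((points_mm.shape[0], 1))])
    return (homogeneous @ affine.T)[:, :3]

# Hypothetical PRESS box corners predefined on the MNI152 template (mm)
press_corners_template = np.array([
    [-40.0, -60.0, 10.0],
    [ 40.0, -60.0, 10.0],
    [ 40.0,  20.0, 10.0],
    [-40.0,  20.0, 10.0],
])

# Hypothetical registration result: a small rotation plus a translation
affine_template_to_subject = np.array([
    [0.998, -0.052, 0.0,  2.5],
    [0.052,  0.998, 0.0, -1.0],
    [0.0,    0.0,   1.0,  3.0],
    [0.0,    0.0,   0.0,  1.0],
])

press_corners_subject = transform_points(affine_template_to_subject, press_corners_template)
```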

  6. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng, E-mail: xyang43@emory.edu; Rossi, Peter; Ogunleye, Tomi

    2014-11-01

    Purpose: The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under the transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering high dose of radiation through these catheters. The main challenge for this HDR procedure is to accurately segment the prostate volume in the CT images for the radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. Methods: The authors’ approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1–3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS–CT image fusion. After TRUS–CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of their approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of patients. Results: For the phantom study, the target registration error (TRE) of gold-markers was 0.41 ± 0.11 mm. For the ten patients, the TRE of gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the authors’ approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. Conclusions: The authors have developed a novel approach to improve prostate contour utilizing intraoperative TRUS-based prostate volume in the CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRIs. The proposed segmentation method would improve prostate delineations, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.

  7. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy

    PubMed Central

    Yang, Xiaofeng; Rossi, Peter; Ogunleye, Tomi; Marcus, David M.; Jani, Ashesh B.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2014-01-01

    Purpose: The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under the transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering high dose of radiation through these catheters. The main challenge for this HDR procedure is to accurately segment the prostate volume in the CT images for the radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. Methods: The authors’ approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1–3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS–CT image fusion. After TRUS–CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of their approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of patients. Results: For the phantom study, the target registration error (TRE) of gold-markers was 0.41 ± 0.11 mm. For the ten patients, the TRE of gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the authors’ approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. Conclusions: The authors have developed a novel approach to improve prostate contour utilizing intraoperative TRUS-based prostate volume in the CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRIs. The proposed segmentation method would improve prostate delineations, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy. PMID:25370648

  8. Defining the optimal method for reporting prostate cancer grade and tumor extent on magnetic resonance/ultrasound fusion-targeted biopsies.

    PubMed

    Gordetsky, Jennifer B; Schultz, Luciana; Porter, Kristin K; Nix, Jeffrey W; Thomas, John V; Del Carmen Rodriguez Pena, Maria; Rais-Bahrami, Soroush

    2018-06-01

    Magnetic resonance (MR)/ultrasound fusion-targeted biopsy (TB) routinely samples multiple cores from each MR lesion of interest. Pathologists can evaluate the extent of cancer involvement and grade using an individual core (IC) or aggregate (AG) method, which could potentially lead to differences in reporting. We reviewed patients who underwent TB followed by radical prostatectomy (RP). TB cores were evaluated for grade and tumor extent by 2 methods. In the IC method, the grade for each TB lesion was based on the core with the highest Gleason score. Tumor extent for each TB was based on the core with the highest percent of tumor involvement. In the AG method, the tumor from all cores within each TB lesion was aggregated to determine the final composite grade and percentage of tumor involvement. Each method was compared with MR lesional volume, MR lesional density (lesion volume/prostate volume), and RP. Fifty-five patients underwent TB followed by RP. Extent of tumor by the AG method showed a better correlation with target lesion volume (r = 0.27, P = .022) and lesional density (r = 0.32, P = .008) than did the IC method (r = 0.19 [P = .103] and r = 0.22 [P = .062]), respectively. Extent of tumor on TB was associated with extraprostatic extension on RP by the AG method (P = .04), but not by the IC method. This association was significantly higher in patients with a grade group (GG) of 3 or higher (P = .03). A change in cancer grade occurred in 3 patients when comparing methods (2 downgraded GG3 to GG2, 1 downgraded GG4 to GG3 by the AG method). For multiple cores obtained via TB, the AG method better correlates with target lesion volume, lesional density, and extraprostatic extension. Copyright © 2018 Elsevier Inc. All rights reserved.
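
    The difference between the two reporting schemes can be illustrated with a small sketch: the IC method keeps only the single "worst" core, while the AG method pools tumor length across all cores of the lesion. The core values and the composite-grade shortcut below are hypothetical simplifications, not the study's pathology protocol.

```python
from dataclasses import dataclass

@dataclass
class Core:
    gleason_score: int          # e.g. 7
    percent_involvement: float  # % of the core occupied by tumor
    tumor_length_mm: float
    core_length_mm: float

def individual_core_summary(cores):
    """IC method: grade and extent are taken from the single worst core."""
    grade = max(c.gleason_score for c in cores)
    extent = max(c.percent_involvement for c in cores)
    return grade, extent

def aggregate_summary(cores):
    """AG method: tumor from all cores of the lesion is pooled into one
    composite extent. Grade aggregation is shown only as the maximum here,
    which is a simplification of composite Gleason grading."""
    total_tumor = sum(c.tumor_length_mm for c in cores)
    total_tissue = sum(c.core_length_mm for c in cores)
    extent = 100.0 * total_tumor / total_tissue
    grade = max(c.gleason_score for c in cores)  # placeholder for composite grading
    return grade, extent

# Hypothetical lesion sampled with three cores
lesion = [Core(7, 40.0, 4.0, 10.0), Core(6, 10.0, 1.0, 10.0), Core(6, 5.0, 0.5, 10.0)]
print(individual_core_summary(lesion))  # (7, 40.0)
print(aggregate_summary(lesion))        # (7, ~18.3)
```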

  9. Automation of CT-based haemorrhagic stroke assessment for improved clinical outcomes: study protocol and design

    PubMed Central

    Chinda, Betty; Medvedev, George; Siu, William; Ester, Martin; Arab, Ali; Gu, Tao; Moreno, Sylvain; D’Arcy, Ryan C N; Song, Xiaowei

    2018-01-01

    Introduction Haemorrhagic stroke is of significant healthcare concern due to its association with high mortality and lasting impact on the survivors’ quality of life. Treatment decisions and clinical outcomes depend strongly on the size, spread and location of the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while research-based approaches are yet to be tested for clinical utility; there is a demonstrated need for developing effective solutions. The project under review investigates the development of an automatic NCCT-based haematoma computation tool in support of accurate quantification of haematoma volumes. Methods and analysis Several existing research methods for haematoma volume estimation are studied. Selected methods are tested using NCCT images of patients diagnosed with acute haemorrhagic stroke. For inter-rater and intrarater reliability evaluation, different raters will analyse haemorrhage volumes independently. The efficiency with respect to time of haematoma volume assessments will be examined to compare with the results from routine clinical evaluations and planimetry assessment that are known to be more accurate. The project will target the development of an enhanced solution by adapting existing methods and integrating machine learning algorithms. NCCT-based information of brain haemorrhage (eg, size, volume, location) and other relevant information (eg, age, sex, risk factor, comorbidities) will be used in relation to clinical outcomes with future project development. Validity and reliability of the solution will be examined for potential clinical utility. Ethics and dissemination The project including procedures for deidentification of NCCT data has been ethically approved. The study involves secondary use of existing data and does not require new consent of participation. The team consists of clinical neuroimaging scientists, computing scientists and clinical professionals in neurology and neuroradiology and includes patient representatives. Research outputs will be disseminated following knowledge translation plans towards improving stroke patient care. Significant findings will be published in scientific journals. Anticipated deliverables include computer solutions for improved clinical assessment of haematoma using NCCT. PMID:29674371

  10. Tumour functional sphericity from PET images: prognostic value in NSCLC and impact of delineation method.

    PubMed

    Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze

    2018-04-01

    Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with complementary prognostic value with respect to SUV and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method was evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for sphericity derived using FLAB, which when combined with volume showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even accurate segmentation can lead to an inaccurate sphericity value, and vice versa. Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method for which there was a small improvement in stratification when the parameters were combined.
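
    Sphericity in this context is commonly computed from the segmented volume and surface area using the standard geometric definition, as sketched below; whether the authors apply exactly this formula to the PET volumes is an assumption.

```python
import math

def sphericity(volume_cm3: float, surface_area_cm2: float) -> float:
    """Standard sphericity: surface area of the sphere with the same volume as
    the tumour, divided by the tumour's actual surface area. Equals 1 for a
    perfect sphere and is smaller for any other shape."""
    return math.pi ** (1.0 / 3.0) * (6.0 * volume_cm3) ** (2.0 / 3.0) / surface_area_cm2

# Sanity check with a unit sphere (V = 4/3*pi, A = 4*pi) -> sphericity 1.0
print(sphericity(4.0 / 3.0 * math.pi, 4.0 * math.pi))
```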

  11. ON THE BENEFITS AND RISKS OF PROTON THERAPY IN PEDIATRIC CRANIOPHARYNGIOMA

    PubMed Central

    Beltran, Chris; Roca, Monica; Merchant, Thomas E.

    2013-01-01

    Purpose Craniopharyngioma is a pediatric brain tumor whose volume is prone to change during radiation therapy. We compared photon- and proton-based irradiation methods to determine the effect of tumor volume change on target coverage and normal tissue irradiation in these patients. Methods and Materials For this retrospective study, we acquired imaging and treatment-planning data from 14 children with craniopharyngioma (mean age, 5.1 years) irradiated with photons (54 Gy) and monitored by weekly magnetic resonance imaging (MRI) examinations during radiation therapy. Photon intensity-modulated radiation therapy (IMRT), double-scatter proton (DSP) therapy, and intensity-modulated proton therapy (IMPT) plans were created for each patient based on his or her pre-irradiation MRI. Target volumes were contoured on each weekly MRI scan for adaptive modeling. The measured differences in conformity index (CI) and normal tissue doses, including functional sub-volumes of the brain, were compared across the planning methods, as was target coverage based on changes in target volumes during treatment. Results CI and normal tissue dose values of IMPT plans were significantly better than those of the IMRT and DSP plans (p < 0.01). Although IMRT plans had a higher CI and lower optic nerve doses (p < 0.01) than did DSP plans, DSP plans had lower cochlear, optic chiasm, brain, and scanned body doses (p < 0.01). The mean planning target volume (PTV) at baseline was 54.8 cm³, and the mean increase in PTV was 11.3% over the course of treatment. The dose to 95% of the PTV was correlated with a change in the PTV; the R² values for all models, 0.73 (IMRT), 0.38 (DSP), and 0.62 (IMPT), were significant (p < 0.01). Conclusions Compared with photon IMRT, proton therapy has the potential to significantly reduce whole-brain and -body irradiation in pediatric patients with craniopharyngioma. IMPT is the most conformal method and spares the most normal tissue; however, it is highly sensitive to target volume changes, whereas the DSP method is not.

  12. Application of parallel distributed Lagrange multiplier technique to simulate coupled Fluid-Granular flows in pipes with varying Cross-Sectional area

    DOE PAGES

    Kanarska, Yuliya; Walton, Otis

    2015-11-30

    Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle–particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles similar to the DEM method, with some modifications using the volume of an overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.

  13. Quantitative measurement for the microstructural parameters of nano-precipitates in Al-Mg-Si-Cu alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kai

    Size, number density and volume fraction of nano-precipitates are important microstructural parameters controlling the strengthening of materials. In this work a widely accessible, convenient, moderately time-efficient method with acceptable accuracy and precision has been provided for measurement of volume fraction of nano-precipitates in crystalline materials. The method is based on the traditional but highly accurate technique of measuring foil thickness via convergent beam electron diffraction. A new equation is proposed and verified with the aid of 3-dimensional atom probe (3DAP) analysis, to compensate for the additional error resulting from the hardly distinguishable contrast of too-short incomplete precipitates cut by the foil surface. The method can be performed on a regular foil specimen with a modern LaB₆ or field-emission-gun transmission electron microscope. Precisions around ± 16% have been obtained for precipitate volume fractions of needle-like β″/C and Q precipitates in an aged Al-Mg-Si-Cu alloy. The measured number density is close to that directly obtained using 3DAP analysis by a misfit of 4.5%, and the estimated precision for number density measurement is about ± 11%. The limitations of the method are also discussed. - Highlights: •A facile method for measuring volume fraction of nano-precipitates based on CBED •An equation to compensate for small invisible precipitates, with 3DAP verification •Precisions around ± 16% for volume fraction and ± 11% for number density.
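
    The basic bookkeeping behind the measurement can be sketched as follows: precipitates counted over a known projected area, combined with the CBED-derived foil thickness, give a number density, and multiplying by a mean precipitate volume gives the volume fraction. The paper's correction equation for precipitates cut by the foil surface is not reproduced here, and all numbers below are hypothetical.

```python
def number_density_per_m3(n_counted: int, area_m2: float, thickness_m: float) -> float:
    """Precipitates counted over a projected area A in a foil of thickness t
    (from CBED) give a number density N / (A * t)."""
    return n_counted / (area_m2 * thickness_m)

def volume_fraction(number_density: float, mean_precipitate_volume_m3: float) -> float:
    """Volume fraction = number density x mean precipitate volume."""
    return number_density * mean_precipitate_volume_m3

# Hypothetical values: 250 needles counted over a 0.5 um x 0.5 um field,
# foil 120 nm thick, mean needle volume ~ 20 nm x 2 nm x 2 nm
Nv = number_density_per_m3(250, (0.5e-6) ** 2, 120e-9)
f = volume_fraction(Nv, 20e-9 * 2e-9 * 2e-9)
print(Nv, f)
```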

  14. Decreased frontal white-matter volume in chronic substance abuse.

    PubMed

    Schlaepfer, Thomas E; Lancaster, Eric; Heidbreder, Rebecca; Strain, Eric C; Kosel, Markus; Fisch, Hans-Ulrich; Pearlson, Godfrey D

    2006-04-01

    While there is quite a body of work assessing functional brain changes in chronic substance abuse, much less is known about structural brain abnormalities in this patient population. In this study we used magnetic resonance imaging (MRI) to determine if structural brain differences exist in patients abusing illicit drugs compared to healthy controls. Sixteen substance abusers who abused heroin, cocaine and cannabis but not alcohol and 16 age-, sex- and race-matched controls were imaged on an MRI scanner. Contiguous, 5-mm-thick axial slices were acquired with simultaneous T2 and proton density sequences. Volumes were estimated for total grey and white matter, frontal grey and white matter, ventricles, and CSF using two different methods: a conventional segmentation and a stereological method based on the Cavalieri principle. Overall brain volume differences were corrected for by expressing the volumes of interest as a percentage of total brain volume. Volume measures obtained with the two methods were highly correlated (r=0.65, p<0.001). Substance abusers had significantly less frontal white-matter volume percentage than controls. There were no significant differences in any of the other brain volumes measured. This difference in frontal lobe white matter might be explained by a direct neurotoxic effect of drug use on white matter, a pre-existing abnormality in the development of the frontal lobe or a combination of both effects. This last explanation might be compelling based on the fact that newer concepts on shared aspects of some neuropsychiatric disorders focus on the promotion and inhibition of the process of myelination throughout brain development and subsequent degeneration.

  15. An anomaly detection approach for the identification of DME patients using spectral domain optical coherence tomography images.

    PubMed

    Sidibé, Désiré; Sankar, Shrinivasan; Lemaître, Guillaume; Rastgoo, Mojdeh; Massich, Joan; Cheung, Carol Y; Tan, Gavin S W; Milea, Dan; Lamoureux, Ecosse; Wong, Tien Y; Mériaudeau, Fabrice

    2017-02-01

    This paper proposes a method for automatic classification of spectral domain OCT data for the identification of patients with retinal diseases such as Diabetic Macular Edema (DME). We address this issue as an anomaly detection problem and propose a method that not only allows the classification of the OCT volume, but also allows the identification of the individual diseased B-scans inside the volume. Our approach is based on modeling the appearance of normal OCT images with a Gaussian Mixture Model (GMM) and detecting abnormal OCT images as outliers. The classification of an OCT volume is based on the number of detected outliers. Experimental results with two different datasets show that the proposed method achieves a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, the experiments show that the proposed method achieves better classification performance than other recently published works. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
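
    A minimal sketch of the anomaly-detection idea, assuming per-B-scan feature vectors are already available: a Gaussian Mixture Model is fitted to features from normal B-scans, test B-scans with low likelihood under that model are flagged as outliers, and the volume-level label follows from the outlier count. The feature extraction, threshold, and count rule below are placeholders, not the published settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical feature matrices: one row of descriptors per B-scan
rng = np.random.default_rng(0)
normal_features = rng.random((500, 32))   # B-scans from healthy volumes (training)
test_features = rng.random((128, 32))     # B-scans from one OCT volume to classify

# Model the appearance of normal B-scans with a GMM
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(normal_features)

# Low log-likelihood under the normal model -> outlier (candidate diseased B-scan)
threshold = np.percentile(gmm.score_samples(normal_features), 5)  # assumed cut-off
outlier_mask = gmm.score_samples(test_features) < threshold

# Volume-level decision based on the number of outlier B-scans (assumed rule)
n_outliers = int(outlier_mask.sum())
is_dme_volume = n_outliers > 10  # hypothetical count threshold
print(n_outliers, is_dme_volume)
```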

  16. Evaluation of an Automatic Registration-Based Algorithm for Direct Measurement of Volume Change in Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing

    2012-07-01

    Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans where true zero tumor volume change is unequivocally known and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69%, 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with increase in tumor volume and decreased with the increase in the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.

  17. Incorporation of Condensation Heat Transfer in a Flow Network Code

    NASA Technical Reports Server (NTRS)

    Anthony, Miranda; Majumdar, Alok

    2002-01-01

    Pure water is distilled from waste water in the International Space Station. The distillation assembly consists of an evaporator, a compressor and a condenser. Vapor is periodically purged from the condenser to avoid vapor accumulation. Purged vapor is condensed in a tube by coolant water prior to entering the purge pump. The paper presents a condensation model of purged vapor in a tube. This model is based on the Finite Volume Method. In the Finite Volume Method, the flow domain is discretized into multiple control volumes and a simultaneous analysis is performed.
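
    As an illustration of the control-volume bookkeeping (not of the paper's condensation model itself), the sketch below advances cell-averaged values of a scalar on a 1D grid by balancing the fluxes through each control volume's faces, using a first-order upwind flux.

```python
import numpy as np

def advect_fv(u_avg: np.ndarray, velocity: float, dx: float, dt: float, steps: int) -> np.ndarray:
    """Generic 1D finite volume update (first-order upwind) for a scalar carried
    by a constant positive velocity on a periodic domain. Each cell stores a
    cell-averaged value; its change per step is the net flux through its faces."""
    u = u_avg.copy()
    for _ in range(steps):
        flux = velocity * u        # upwind flux leaving each cell through its right face
        inflow = np.roll(flux, 1)  # flux entering from the left neighbour (periodic wrap)
        u = u - dt / dx * (flux - inflow)
    return u

# Hypothetical initial condition: a square pulse (CFL = v*dt/dx = 0.5, stable)
cells = np.zeros(100)
cells[20:40] = 1.0
result = advect_fv(cells, velocity=1.0, dx=0.01, dt=0.005, steps=50)
```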

  18. A point-value enhanced finite volume method based on approximate delta functions

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  19. Quantification of Protozoa and Viruses from Small Water Volumes

    PubMed Central

    Bonilla, J. Alfredo; Bonilla, Tonya D.; Abdelzaher, Amir M.; Scott, Troy M.; Lukasik, Jerzy; Solo-Gabriele, Helena M.; Palmer, Carol J.

    2015-01-01

    Large sample volumes are traditionally required for the analysis of waterborne pathogens. The need for large volumes greatly limits the number of samples that can be processed. The goals of this study were to compare extraction and detection procedures for quantifying protozoan parasites and viruses from small volumes of marine water. The intent was to evaluate a logistically simpler method of sample collection and processing that would facilitate direct pathogen measures as part of routine monitoring programs. Samples were collected simultaneously using a bilayer device with protozoa capture by size (top filter) and viruses capture by charge (bottom filter). Protozoan detection technologies utilized for recovery of Cryptosporidium spp. and Giardia spp. were qPCR and the more traditional immunomagnetic separation—IFA-microscopy, while virus (poliovirus) detection was based upon qPCR versus plaque assay. Filters were eluted using reagents consistent with the downstream detection technologies. Results showed higher mean recoveries using traditional detection methods over qPCR for Cryptosporidium (91% vs. 45%) and poliovirus (67% vs. 55%) whereas for Giardia the qPCR-based methods were characterized by higher mean recoveries (41% vs. 28%). Overall mean recoveries are considered high for all detection technologies. Results suggest that simultaneous filtration may be suitable for isolating different classes of pathogens from small marine water volumes. More research is needed to evaluate the suitability of this method for detecting pathogens at low ambient concentration levels. PMID:26114244

  20. Quantification of Protozoa and Viruses from Small Water Volumes.

    PubMed

    Bonilla, J Alfredo; Bonilla, Tonya D; Abdelzaher, Amir M; Scott, Troy M; Lukasik, Jerzy; Solo-Gabriele, Helena M; Palmer, Carol J

    2015-06-24

    Large sample volumes are traditionally required for the analysis of waterborne pathogens. The need for large volumes greatly limits the number of samples that can be processed. The aims of this study were to compare extraction and detection procedures for quantifying protozoan parasites and viruses from small volumes of marine water. The intent was to evaluate a logistically simpler method of sample collection and processing that would facilitate direct pathogen measures as part of routine monitoring programs. Samples were collected simultaneously using a bilayer device with protozoa capture by size (top filter) and viruses capture by charge (bottom filter). Protozoan detection technologies utilized for recovery of Cryptosporidium spp. and Giardia spp. were qPCR and the more traditional immunomagnetic separation-IFA-microscopy, while virus (poliovirus) detection was based upon qPCR versus plaque assay. Filters were eluted using reagents consistent with the downstream detection technologies. Results showed higher mean recoveries using traditional detection methods over qPCR for Cryptosporidium (91% vs. 45%) and poliovirus (67% vs. 55%) whereas for Giardia the qPCR-based methods were characterized by higher mean recoveries (41% vs. 28%). Overall mean recoveries are considered high for all detection technologies. Results suggest that simultaneous filtration may be suitable for isolating different classes of pathogens from small marine water volumes. More research is needed to evaluate the suitability of this method for detecting pathogens at low ambient concentration levels.

  1. Influence of stapling the intersegmental planes on lung volume and function after segmentectomy.

    PubMed

    Tao, Hiroyuki; Tanaka, Toshiki; Hayashi, Tatsuro; Yoshida, Kumiko; Furukawa, Masashi; Yoshiyama, Koichi; Okabe, Kazunori

    2016-10-01

    Dividing the intersegmental planes with a stapler during pulmonary segmentectomy leads to volume loss in the remnant segment. The aim of this study was to assess the influence of segment division methods on preserved lung volume and pulmonary function after segmentectomy. Using image analysis software on computed tomography (CT) images of 41 patients, the ratio of remnant segment and ipsilateral lung volume to their preoperative values (R-seg and R-ips) was calculated. The ratios of postoperative actual forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) to their predicted values based on three-dimensional volumetry (R-FEV1 and R-FVC) were also calculated. Differences in actual/predicted ratios of lung volume and pulmonary function for each of the division methods were analysed. We also investigated the correlations of the actual/predicted ratio of remnant lung volume with that of postoperative pulmonary function. The intersegmental planes were divided either by electrocautery or with a stapler in 22 patients and with a stapler alone in 19 patients. Mean values of R-seg and R-ips were 82.7 (37.9-140.2) and 104.9 (77.5-129.2)%, respectively. The mean values of R-FEV1 and R-FVC were 103.9 (83.7-135.1) and 103.4 (82.2-125.1)%, respectively. There were no correlations between the actual/predicted ratio of remnant lung volume and pulmonary function based on the division method. Both R-FEV1 and R-FVC were correlated not with R-seg, but with R-ips. Stapling does not lead to less preserved volume or function than electrocautery in the division of the intersegmental planes. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  2. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    PubMed Central

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-01-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographical image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474

  3. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.

  4. A voxel-based technique to estimate the volume of trees from terrestrial laser scanner data

    NASA Astrophysics Data System (ADS)

    Bienert, A.; Hess, C.; Maas, H.-G.; von Oheimb, G.

    2014-06-01

    The precise determination of the volume of standing trees is very important for ecological and economic considerations in forestry. If terrestrial laser scanner data are available, a simple approach for volume determination is given by allocating points into a voxel structure and subsequently counting the filled voxels. Generally, this method will overestimate the volume. The paper presents an improved algorithm to estimate the wood volume of trees using a voxel-based method which corrects for the overestimation. After voxel space transformation, each voxel which contains points is reduced to the volume of its surrounding bounding box. In a next step, occluded (inner stem) voxels are identified by a neighbourhood analysis sweeping in the X and Y direction of each filled voxel. Finally, the wood volume of the tree is computed as the sum of the bounding box volumes of the outer voxels and the volume of all occluded inner voxels. Scan data sets from several young Norway maple trees (Acer platanoides) were used to analyse the algorithm. For this purpose, the scanned trees as well as their corresponding point clouds were separated into different components (stem, branches) to allow a meaningful comparison. Two reference measurements were performed for validation: a direct wood volume measurement by placing the tree components into a water tank, and a frustum calculation of small trunk segments by measuring the radii along the trunk. Overall, the results show slightly underestimated volumes (-0.3% for a sample of 13 trees) with an RMSE of 11.6% for the individual tree volume calculated with the new approach.
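
    The baseline voxel-counting estimate that the improved algorithm corrects can be sketched in a few lines: scan points are binned into a regular voxel grid and the volumes of the filled voxels are summed. The bounding-box reduction and the filling of occluded interior voxels are not shown, and the point cloud below is synthetic.

```python
import numpy as np

def voxel_volume_estimate(points: np.ndarray, voxel_size: float) -> float:
    """Naive voxel-based volume estimate from a terrestrial laser scan:
    allocate points (N x 3, metres) to a regular voxel grid and sum the
    volume of the occupied voxels. This is the baseline that overestimates
    the true wood volume."""
    indices = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    n_filled = len(np.unique(indices, axis=0))  # number of occupied voxels
    return n_filled * voxel_size ** 3

# Hypothetical point cloud: a rough vertical "stem" of scan points
rng = np.random.default_rng(0)
cloud = rng.normal(size=(10000, 3)) * [0.05, 0.05, 2.0]
print(voxel_volume_estimate(cloud, voxel_size=0.02))
```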

  5. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

    Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensation for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na+ was found (R² = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na+ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was established to indirectly measure evaporation by correlating it with the level of Na+ ions in solution and deriving a simple formula to account for liquid losses.
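
    The underlying idea can be sketched as a two-line calculation: if Na+ is conserved in the well, its measured concentration rises in proportion to the liquid lost, so the current volume and the water to add back follow directly. This is an illustration of the principle with hypothetical values, not the authors' exact correction formula (feed additions would also have to be accounted for).

```python
def water_to_add_ml(initial_volume_ml: float, na_initial_mM: float, na_measured_mM: float) -> float:
    """If Na+ is conserved, evaporation concentrates it, so the current liquid
    volume is V0 * [Na+]0 / [Na+]t; the shortfall is the water to add back."""
    current_volume = initial_volume_ml * na_initial_mM / na_measured_mM
    return max(0.0, initial_volume_ml - current_volume)

# Hypothetical: a 5 mL well whose Na+ reading rose from 120 mM to 135 mM
print(water_to_add_ml(5.0, 120.0, 135.0))  # ~0.56 mL of water to restore the volume
```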

  6. Emergency department spirometric volume and base deficit delineate risk for torso injury in stable patients

    PubMed Central

    Dunham, C Michael; Sipe, Eilynn K; Peluso, LeeAnn

    2004-01-01

    Background We sought to determine torso injury rates and sensitivities associated with fluid-positive abdominal ultrasound, metabolic acidosis (increased base deficit and lactate), and impaired pulmonary physiology (decreased spirometric volume and PaO2/FiO2). Methods Level I trauma center prospective pilot and post-pilot study (2000–2001) of stable patients. Increased base deficit was < 0.0 in ethanol-negative and ≤ -3.0 in ethanol-positive patients. Increased lactate was > 2.5 mmol/L in ethanol-negative and ≥ 3.0 mmol/L in ethanol-positive patients. Decreased PaO2/FiO2 was < 350 and decreased spirometric volume was < 1.8 L. Results Of 215 patients, 66 (30.7%) had a torso injury (abdominal/pelvic injury n = 35 and/or thoracic injury n = 43). Glasgow Coma Scale score was 14.8 ± 0.5 (13–15). Torso injury rates and sensitivities were: abdominal ultrasound negative and normal base deficit, lactate, PaO2/FiO2, and spirometric volume – 0.0% & 0.0%; normal base deficit and normal spirometric volume – 4.2% & 4.5%; chest/abdominal soft tissue injury – 37.8% & 47.0%; increased lactate – 39.7% & 47.0%; increased base deficit – 41.3% & 75.8%; increased base deficit and/or decreased spirometric volume – 43.8% & 95.5%; decreased PaO2/FiO2 – 48.9% & 33.3%; positive abdominal ultrasound – 62.5% & 7.6%; decreased spirometric volume – 73.4% & 71.2%; increased base deficit and decreased spirometric volume – 82.9% & 51.5%. Conclusions Trauma patients with normal base deficit and spirometric volume are unlikely to have a torso injury. Patients with increased base deficit or lactate, decreased spirometric volume, decreased PaO2/FiO2, or positive FAST have substantial risk for torso injury. Increased base deficit and/or decreased spirometric volume are highly sensitive for torso injury. Base deficit and spirometric volume values are readily available and increase or decrease the suspicion for torso injury. PMID:14731306

  7. Prediction of nanofluids properties: the density and the heat capacity

    NASA Astrophysics Data System (ADS)

    Zhelezny, V. P.; Motovoy, I. V.; Ustyuzhanin, E. E.

    2017-11-01

    The results given in this report show that the addition of Al2O3 nanoparticles leads to an increase in the density and a decrease in the heat capacity of isopropanol. Based on the experimental data, the excess molar volume and the excess molar heat capacity were calculated. The report suggests a new method for predicting the molar volume and molar heat capacity of nanofluids. It is established that the values of the excess thermodynamic functions are determined by the properties and the volume of the structurally oriented layers of base fluid molecules near the surface of the nanoparticles. The heat capacity of these structurally oriented layers is lower than the heat capacity of the base fluid for the given parameters due to the greater ordering of their structure. It is shown that information on the geometric dimensions of the structured layers of the base fluid near the nanoparticles can be obtained from data on the nanofluid density and, at ambient temperature, by the dynamic light scattering method. For calculating the nanofluid heat capacity over a wide range of temperatures, a new correlation based on extended scaling is proposed.
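
    The excess molar volume referred to above follows the usual definition, sketched below with hypothetical values for an isopropanol/Al2O3 nanofluid; the excess molar heat capacity is computed analogously.

```python
def excess_molar_volume(v_mix_cm3_mol: float, x: list, v_pure_cm3_mol: list) -> float:
    """Excess molar volume V_E = V_mix - sum_i x_i * V_i: the deviation of the
    measured molar volume of the mixture from the ideal mole-fraction-weighted
    average of the pure-component molar volumes."""
    ideal = sum(xi * vi for xi, vi in zip(x, v_pure_cm3_mol))
    return v_mix_cm3_mol - ideal

# Hypothetical numbers for an isopropanol / Al2O3 nanofluid (cm^3/mol)
print(excess_molar_volume(75.8, [0.98, 0.02], [76.9, 25.6]))  # small negative V_E
```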

  8. Improvement of the Correlative AFM and ToF-SIMS Approach Using an Empirical Sputter Model for 3D Chemical Characterization.

    PubMed

    Terlier, T; Lee, J; Lee, K; Lee, Y

    2018-02-06

    Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.

  9. Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Antong; Deeley, Matthew A.; Niermann, Kenneth J.

    2010-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) is the state of the art technique for head and neck cancer treatment. It requires precise delineation of the target to be treated and structures to be spared, which is currently done manually. The process is a time-consuming task of which the delineation of lymph node regions is often the longest step. Atlas-based delineation has been proposed as an alternative, but, in the authors' experience, this approach is not accurate enough for routine clinical use. Here, the authors improve atlas-based segmentation results obtained for level II-IV lymph node regions using an active shape model (ASM) approach. Methods: An average image volume was first created from a set of head and neck patient images with minimally enlarged nodes. The average image volume was then registered using affine, global, and local nonrigid transformations to the other volumes to establish a correspondence between surface points in the atlas and surface points in each of the other volumes. Once the correspondence was established, the ASMs were created for each node level. The models were then used to first constrain the results obtained with an atlas-based approach and then to iteratively refine the solution. Results: The method was evaluated through a leave-one-out experiment. The ASM- and atlas-based segmentations were compared to manual delineations via the Dice similarity coefficient (DSC) for volume overlap and the Euclidean distance between manual and automatic 3D surfaces. The mean DSC value obtained with the ASM-based approach is 10.7% higher than with the atlas-based approach; the mean and median surface errors were decreased by 13.6% and 12.0%, respectively. Conclusions: The ASM approach is effective in reducing segmentation errors in areas of low CT contrast where purely atlas-based methods are challenged. Statistical analysis shows that the improvements brought by this approach are significant.

  10. Improved approach to quantitative cardiac volumetrics using automatic thresholding and manual trimming: a cardiovascular MRI study.

    PubMed

    Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark

    2018-01-01

    To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.
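
    The Bland-Altman comparison used here reduces to the bias, the standard deviation of the paired differences, and the 95% limits of agreement, as in the sketch below (the stroke-volume values are hypothetical).

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland-Altman agreement statistics for two methods measuring the same
    quantity (e.g. stroke volume by ATMT and by aortic phase velocity mapping):
    bias (mean difference), SD of the differences, and 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical stroke volumes (mL) from two methods in the same subjects
sv_atmt = np.array([72.0, 85.0, 64.0, 90.0, 78.0])
sv_pvm = np.array([70.0, 88.0, 61.0, 93.0, 80.0])
print(bland_altman(sv_atmt, sv_pvm))
```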

  11. Calibration of volume and component biomass equations for Douglas-fir and lodgepole pine in Western Oregon forests

    Treesearch

    Krishna P. Poudel; Temesgen Hailemariam

    2016-01-01

    Using data from destructively sampled Douglas-fir and lodgepole pine trees, we evaluated the performance of regional volume and component biomass equations in terms of bias and RMSE. The volume and component biomass equations were calibrated using three different adjustment methods that used: (a) a correction factor based on ordinary least square regression through...

  12. Dynamic soft tissue deformation estimation based on energy analysis

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Yao, Bin

    2016-10-01

    Needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring at the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions, so it is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics: an energy-based method is applied to the dynamic process of needle insertion, and a geometric model in which the volume of a cone quantitatively approximates the deformation of the soft tissue surface is presented. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformation is introduced to obtain the experimental data. The relationship between the change in deformation volume and the insertion parameters is established from the law of conservation of energy, with the deformation volume obtained from the image-based measurements. Experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that this model can predict the volume of soft tissue deformation; the root mean squared errors between the fitted model and the experimental data are 0.61 and 0.25 at insertion velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformation are shown to be useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.
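    As a concrete illustration of the cone approximation mentioned above, the snippet below computes a cone volume from a measured deformation depth and radius. It is only a sketch of the geometric idea; the radius and depth values are hypothetical, not measurements from the study.

```python
# Cone-volume approximation of the tissue surface deformation, V = (1/3) * pi * r^2 * d.
# The radius/depth pairs are hypothetical stand-ins for image-based measurements.
import math

def cone_volume(radius_mm, depth_mm):
    """Volume (mm^3) of a cone with base radius `radius_mm` and depth `depth_mm`."""
    return math.pi * radius_mm ** 2 * depth_mm / 3.0

for radius, depth in [(6.0, 2.5), (8.5, 4.0)]:
    print(f"r = {radius} mm, d = {depth} mm -> V = {cone_volume(radius, depth):.1f} mm^3")
```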

  13. Phospholipid component volumes: determination and application to bilayer structure calculations.

    PubMed

    Armen, R S; Uitto, O D; Feller, S E

    1998-08-01

    We present a new method for the determination of bilayer structure based on a combination of computational studies and laboratory experiments. From molecular dynamics simulations, the volumes of submolecular fragments of saturated and unsaturated phosphatidylcholines in the liquid crystalline state have been extracted with a precision not available experimentally. The constancy of component volumes, both among different lipids and as a function of membrane position for a given lipid, has been examined. The component volumes were then incorporated into the liquid crystallographic method described by Wiener and White (1992. Biophys. J. 61:434-447, and references therein) for determining the structure of a fluid-phase dioleoylphosphatidylcholine bilayer from x-ray and neutron diffraction experiments.

  14. Phospholipid component volumes: determination and application to bilayer structure calculations.

    PubMed Central

    Armen, R S; Uitto, O D; Feller, S E

    1998-01-01

    We present a new method for the determination of bilayer structure based on a combination of computational studies and laboratory experiments. From molecular dynamics simulations, the volumes of submolecular fragments of saturated and unsaturated phosphatidylcholines in the liquid crystalline state have been extracted with a precision not available experimentally. The constancy of component volumes, both among different lipids and as a function of membrane position for a given lipid, has been examined. The component volumes were then incorporated into the liquid crystallographic method described by Wiener and White (1992. Biophys. J. 61:434-447, and references therein) for determining the structure of a fluid-phase dioleoylphosphatidylcholine bilayer from x-ray and neutron diffraction experiments. PMID:9675175

  15. Hands-Off and Hands-On Casting Consistency of Amputee below Knee Sockets Using Magnetic Resonance Imaging

    PubMed Central

    Rowe, Philip

    2013-01-01

    Residual limb shape capturing (Casting) consistency has a great influence on the quality of socket fit. Magnetic Resonance Imaging was used to establish a reliable reference grid for intercast and intracast shape and volume consistency of two common casting methods, Hands-off and Hands-on. Residual limbs were cast for twelve people with a unilateral below-knee amputation and scanned twice for each casting concept. Subsequently, all four volume images of each amputee were semiautomatically segmented and registered to a common coordinate system using the tibia, and then the shape and volume differences were calculated. The results show that both casting methods have intracast volume consistency and that there is no significant volume difference between the two methods. Inter- and intracast mean volume differences were not clinically significant based on the one-sock volume criterion. Neither the Hands-off nor the Hands-on method resulted in a consistent residual limb shape, as the coefficient of variation of the shape differences was high. The resultant shape of the residual limb in Hands-off casting was variable, but the differences were not clinically significant. For Hands-on casting, the shape differences were equal to the maximum acceptable limit for a poor socket fit. PMID:24348164

  16. Hands-off and hands-on casting consistency of amputee below knee sockets using magnetic resonance imaging.

    PubMed

    Safari, Mohammad Reza; Rowe, Philip; McFadyen, Angus; Buis, Arjan

    2013-01-01

    Residual limb shape capturing (Casting) consistency has a great influence on the quality of socket fit. Magnetic Resonance Imaging was used to establish a reliable reference grid for intercast and intracast shape and volume consistency of two common casting methods, Hands-off and Hands-on. Residual limbs were cast for twelve people with a unilateral below-knee amputation and scanned twice for each casting concept. Subsequently, all four volume images of each amputee were semiautomatically segmented and registered to a common coordinate system using the tibia, and then the shape and volume differences were calculated. The results show that both casting methods have intracast volume consistency and that there is no significant volume difference between the two methods. Inter- and intracast mean volume differences were not clinically significant based on the one-sock volume criterion. Neither the Hands-off nor the Hands-on method resulted in a consistent residual limb shape, as the coefficient of variation of the shape differences was high. The resultant shape of the residual limb in Hands-off casting was variable, but the differences were not clinically significant. For Hands-on casting, the shape differences were equal to the maximum acceptable limit for a poor socket fit.

  17. Reliability of the Inverse Water Volumetry Method to Measure the Volume of the Upper Limb.

    PubMed

    Beek, Martinus A; te Slaa, Alexander; van der Laan, Lijckle; Mulder, Paul G H; Rutten, Harm J T; Voogd, Adri C; Luiten, Ernest J T; Gobardhan, Paul D

    2015-06-01

    Lymphedema of the upper extremity is a common side effect of lymph node dissection or irradiation of the axilla. Several techniques are being applied in order to examine the presence and severity of lymphedema. Measurement of circumference of the upper extremity is most frequently performed. An alternative is the water-displacement method. The aim of this study was to determine the reliability and the reproducibility of the "Inverse Water Volumetry apparatus" (IWV-apparatus) for the measurement of arm volumes. The IWV-apparatus is based on the water-displacement method. Measurements were performed by three breast cancer nurse practitioners on ten healthy volunteers in three weekly sessions. The intra-class correlation coefficient, defined as the ratio of the subject component to the total variance, equaled 0.99. The reliability index is calculated as 0.14 kg. This indicates that only changes in a patient's arm volume measurement of more than 0.14 kg would represent a true change in arm volume, which is about 6% of the mean arm volume of 2.3 kg. The IWV-apparatus proved to be a reliable and reproducible method to measure arm volume.
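    The reliability figure quoted above rests on an intra-class correlation defined as the subject variance component divided by the total variance. A minimal sketch of one common one-way random-effects formulation is given below; the study may have used a different ICC model, and the arm-volume matrix is hypothetical.

```python
# One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k-1)*MSW), one common
# way to express "subject variance / total variance". The measurements below
# are hypothetical arm volumes (kg) for 4 subjects measured by 3 raters.
import numpy as np

def icc_oneway(data):
    """data: (n_subjects, k_raters) array of measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    subject_means = data.mean(axis=1)
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((data - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

arm_volumes = np.array([[2.31, 2.29, 2.33],
                        [2.10, 2.12, 2.09],
                        [2.55, 2.53, 2.56],
                        [2.42, 2.44, 2.40]])
print(f"ICC = {icc_oneway(arm_volumes):.3f}")
```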

  18. Source fields reconstruction with 3D mapping by means of the virtual acoustic volume concept

    NASA Astrophysics Data System (ADS)

    Forget, S.; Totaro, N.; Guyader, J. L.; Schaeffer, M.

    2016-10-01

    This paper presents the theoretical framework of the virtual acoustic volume concept and two related inverse Patch Transfer Functions (iPTF) identification methods (called u-iPTF and m-iPTF, depending on the boundary conditions chosen for the virtual volume). They are based on the application of Green's identity over an arbitrary closed virtual volume defined around the source. The reconstruction of the sound source fields combines discrete acoustic measurements performed at accessible positions around the source with the modal behavior of the chosen virtual acoustic volume. The mode shapes of the virtual volume can be computed by a finite element solver to handle the geometrical complexity of the source. As a result, it is possible to identify all the acoustic source fields at the real surface of an irregularly shaped structure, irrespective of its acoustic environment. The m-iPTF method is introduced for the first time in this paper. In contrast to the previously published u-iPTF method, the m-iPTF method needs only acoustic pressure measurements and avoids particle velocity measurements. This paper focuses on its validation, both with numerical computations and with experiments on a baffled oil pan.

  19. Convergence studies in meshfree peridynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seleson, Pablo; Littlewood, David J.

    2016-04-15

    Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions when using the proposed methods.
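    The first correction relies on so-called partial volumes, i.e., the portion of a neighbor cell's volume that actually lies inside the spherical neighborhood (horizon) of a node. The sketch below estimates such a partial volume by Monte Carlo sampling purely for illustration; the paper uses accurate intersection computations, and the geometry values here are hypothetical.

```python
# Illustrative Monte Carlo estimate of a "partial volume": the part of a cubic
# cell that falls inside the spherical neighborhood (horizon) of a node.
# This only sketches the quantity involved; the geometry is hypothetical.
import numpy as np

def partial_volume(cell_center, cell_size, node, horizon, samples=100_000):
    rng = np.random.default_rng(0)
    pts = rng.uniform(-0.5, 0.5, size=(samples, 3)) * cell_size + cell_center
    inside = np.linalg.norm(pts - node, axis=1) <= horizon
    return inside.mean() * cell_size ** 3   # fraction inside times full cell volume

vol = partial_volume(np.array([1.0, 0.0, 0.0]), 0.5, np.array([0.0, 0.0, 0.0]), 1.2)
print(f"partial volume of the cell inside the horizon ~ {vol:.4f}")
```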

  20. Classification of SD-OCT volumes for DME detection: an anomaly detection approach

    NASA Astrophysics Data System (ADS)

    Sankar, S.; Sidibé, D.; Cheung, Y.; Wong, T. Y.; Lamoureux, E.; Milea, D.; Meriaudeau, F.

    2016-03-01

    Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by fluid accumulation in the macula, leading to swelling, and early detection of the disease helps prevent further loss of vision, so automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME in OCT volumes is proposed in this paper. The method is based on anomaly detection using a Gaussian Mixture Model (GMM). It starts by pre-processing the B-scans (resizing, flattening, and filtering) and extracting features from them; both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. In the last stage, a GMM is fitted to features from normal volumes. During testing, features extracted from the test volume are evaluated against the fitted model for anomalies, and classification is based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets, achieving a sensitivity and specificity of 80% and 93% on the first dataset, and 100% and 80% on the second. Moreover, experiments show that the proposed method achieves better classification performance than other recently published works.
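    The anomaly-detection pipeline described above (features, PCA, GMM fitted on normal volumes, outlier count) can be sketched with scikit-learn. The snippet below is a hedged illustration only: the feature arrays are random stand-ins for the intensity/LBP features, and the number of components and the outlier threshold are arbitrary choices, not the paper's settings.

```python
# Fit a GMM on features from normal OCT volumes, then flag B-scans of a test
# volume whose log-likelihood falls below a threshold. Feature extraction is
# omitted; arrays and the threshold percentile are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 32))   # stand-in B-scan features
test_feats   = rng.normal(0.5, 1.5, size=(128, 32))   # one test volume's B-scans

pca = PCA(n_components=8).fit(normal_feats)
gmm = GaussianMixture(n_components=3, random_state=0).fit(pca.transform(normal_feats))

threshold = np.percentile(gmm.score_samples(pca.transform(normal_feats)), 5)
scores = gmm.score_samples(pca.transform(test_feats))
n_outliers = int((scores < threshold).sum())
print(f"{n_outliers} of {len(scores)} B-scans flagged as outliers")
# A volume could then be labelled DME if n_outliers exceeds some count.
```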

  1. Diffuse intrinsic pontine glioma: is MRI surveillance improved by region of interest volumetry?

    PubMed

    Riley, Garan T; Armitage, Paul A; Batty, Ruth; Griffiths, Paul D; Lee, Vicki; McMullan, John; Connolly, Daniel J A

    2015-02-01

    Paediatric diffuse intrinsic pontine glioma (DIPG) is noteworthy for its fibrillary infiltration through neuroparenchyma and its resultant irregular shape. Conventional volumetry methods approximate such irregular tumours to a regular ellipsoid, which could be less accurate when assessing treatment response on surveillance MRI. Region-of-interest (ROI) volumetry methods, using manually traced tumour profiles on contiguous imaging slices and subsequent computer-aided calculations, may prove more reliable. To evaluate whether the reliability of MRI surveillance of DIPGs can be improved by the use of ROI-based volumetry, we investigated the use of ROI- and ellipsoid-based methods of volumetry for paediatric DIPGs in a retrospective review of 22 MRI examinations. We assessed the inter- and intraobserver variability of the two methods when performed by four observers. ROI- and ellipsoid-based methods strongly correlated for all four observers. The ROI-based volumes showed slightly better agreement both between and within observers than the ellipsoid-based volumes (inter-[intra-]observer agreement 89.8% [92.3%] and 83.1% [88.2%], respectively). Bland-Altman plots show tighter limits of agreement for the ROI-based method. Both methods are reproducible and transferable among observers. ROI-based volumetry appears to perform better, with greater intra- and interobserver agreement for complex-shaped DIPG.
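    For reference, the two volumetry strategies compared above reduce to simple arithmetic once the measurements are in hand, as sketched below; the slice areas, slice thickness, and ellipsoid axes are hypothetical.

```python
# Contrast of the two volumetry approaches: ellipsoid approximation versus
# summing manually traced ROI areas over contiguous slices. Values are
# hypothetical placeholders, not measurements from the study.
import math

def ellipsoid_volume(a_mm, b_mm, c_mm):
    """Approximate tumour volume as an ellipsoid: pi/6 * A * B * C (diameters)."""
    return math.pi / 6.0 * a_mm * b_mm * c_mm

def roi_volume(slice_areas_mm2, slice_thickness_mm):
    """Sum traced cross-sectional areas times slice thickness."""
    return sum(slice_areas_mm2) * slice_thickness_mm

areas = [120.0, 310.0, 450.0, 380.0, 150.0]   # mm^2 per contiguous slice
print(f"ROI volume       = {roi_volume(areas, 3.0):.0f} mm^3")
print(f"Ellipsoid volume = {ellipsoid_volume(28.0, 25.0, 22.0):.0f} mm^3")
```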

  2. A comparison of methods to quantify the in-season training load of professional soccer players.

    PubMed

    Scott, Brendan R; Lockie, Robert G; Knight, Timothy J; Clark, Andrew C; Janse de Jonge, Xanne A K

    2013-03-01

    To compare various measures of training load (TL) derived from physiological (heart rate [HR]), perceptual (rating of perceived exertion [RPE]), and physical (global positioning system [GPS] and accelerometer) data during in-season field-based training for professional soccer. Fifteen professional male soccer players (age 24.9 ± 5.4 y, body mass 77.6 ± 7.5 kg, height 181.1 ± 6.9 cm) were assessed in-season across 97 individual training sessions. Measures of external TL (total distance [TD], the volume of low-speed activity [LSA; <14.4 km/h], high-speed running [HSR; >14.4 km/h], very high-speed running [VHSR; >19.8 km/h], and player load), HR and session-RPE (sRPE) scores were recorded. Internal TL scores (HR-based and sRPE-based) were calculated, and their relationships with measures of external TL were quantified using Pearson product-moment correlations. Physical measures of TD, LSA volume, and player load provided large, significant (r = .71-.84; P < .01) correlations with the HR-based and sRPE-based methods. Volume of HSR and VHSR provided moderate to large, significant (r = .40-.67; P < .01) correlations with measures of internal TL. While the volume of HSR and VHSR provided significant relationships with internal TL, physical-performance measures of TD, LSA volume, and player load appear to be more acceptable indicators of external TL, due to the greater magnitude of their correlations with measures of internal TL.

  3. An integrated algorithm for hypersonic fluid-thermal-structural numerical simulation

    NASA Astrophysics Data System (ADS)

    Li, Jia-Wei; Wang, Jiang-Feng

    2018-05-01

    In this paper, a fluid-structural-thermal integrated method is presented based on the finite volume method. A unified system of integral equations is developed as the control equations for the physical processes of aero-heating and structural heat transfer. The whole physical field is discretized using an up-wind finite volume method. To demonstrate its capability, the numerical simulation of Mach 6.47 flow over a stainless steel cylinder shows good agreement with measured values, and the method dynamically simulates the physical processes of interest. Thus, the integrated algorithm proves to be efficient and reliable.

  4. MR diffusion-weighted imaging-based subcutaneous tumour volumetry in a xenografted nude mouse model using 3D Slicer: an accurate and repeatable method

    PubMed Central

    Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2015-01-01

    Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm²). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060–0.671) resulted in better accuracy than that of mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996–0.9998). Tumour size and shape had no effect on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359

  5. Production of large resonant plasma volumes in microwave electron cyclotron resonance ion sources

    DOEpatents

    Alton, G.D.

    1998-11-24

    Microwave injection methods are disclosed for enhancing the performance of existing electron cyclotron resonance (ECR) ion sources. The methods are based on the use of high-power diverse-frequency microwaves, including variable-frequency, multiple-discrete-frequency, and broadband microwaves. The methods effect large resonant "volume" ECR regions in the ion sources. The creation of these large ECR plasma volumes permits coupling of more microwave power into the plasma, resulting in the heating of a much larger electron population to higher energies, the effect of which is to produce higher charge state distributions and much higher intensities within a particular charge state than possible in present ECR ion sources. 5 figs.

  6. Evaluation of the 95% limits of agreement of the volumes of 5-year clinically stable solid nodules for the development of a follow-up system for indeterminate solid nodules in CT lung cancer screening

    PubMed Central

    Muramatsu, Yukio; Yamamichi, Junta; Gomi, Shiho; Oubel, Estanislao; Moriyama, Noriyuki

    2018-01-01

    Background This study sought to evaluate the 95% limits of agreement of the volumes of 5-year clinically stable solid nodules for the development of a follow-up system for indeterminate solid nodules. Methods The volumes of 226 solid nodules that had been clinically stable for 5 years were measured in 186 patients (53 female never-smokers, 36 male never-smokers, 51 males with <30 pack-years, and 46 males with ≥30 pack-years) using a three-dimensional semiautomated method. Volume changes were evaluated using three methods: percent change, proportional change and growth rate. The 95% limits of agreement were evaluated using the Bland-Altman method. Results The 95% limits of agreement were as follows: range of percent change, from ±34.5% to ±37.8%; range of proportional change, from ±34.1% to ±36.8%; and range of growth rate, from ±39.2% to ±47.4%. Percent change-based, proportional change-based, and growth rate-based diagnoses of an increase or decrease in ten solid nodules were made at a mean of 302±402, 367±455, and 329±496 days, respectively, compared with a clinical diagnosis made at 809±616 days (P<0.05). Conclusions The 95% limits of agreement for volume change in 5-year stable solid nodules may enable the detection of an increase or decrease in a solid nodule at an earlier stage than that enabled by a clinical diagnosis, possibly contributing to the development of a follow-up system for reducing the number of additional computed tomography (CT) scans performed during the follow-up period. PMID:29600047
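    The three volume-change metrics can be illustrated with a small calculation. The definitions below are plausible textbook forms used only as a sketch; the study's exact formulas may differ, and the volumes and time interval are hypothetical.

```python
# Three ways to express a nodule volume change between two scans. These are
# generic definitions for illustration only; the study's formulas may differ.
v1, v2 = 250.0, 290.0          # mm^3 at baseline and follow-up (hypothetical)
days = 365.0                   # interval between scans (hypothetical)

percent_change = (v2 - v1) / v1 * 100.0
proportional_change = (v2 - v1) / ((v1 + v2) / 2.0) * 100.0   # relative to the mean
growth_rate = percent_change / (days / 365.0)                 # percent per year

print(f"percent change      = {percent_change:.1f}%")
print(f"proportional change = {proportional_change:.1f}%")
print(f"growth rate         = {growth_rate:.1f}%/year")
```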

  7. The volume and mean depth of Earth's lakes

    NASA Astrophysics Data System (ADS)

    Cael, B. B.; Heathcote, A. J.; Seekell, D. A.

    2017-01-01

    Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.
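    Conceptually, once a volume-area scaling relation of the form V = c·A^b is in hand, the global estimate is a sum over a lake-area census. The sketch below only illustrates that bookkeeping; the coefficients and the census areas are placeholders, not the Hurst-coefficient-based values derived in the paper.

```python
# Illustrative application of a power-law volume-area relation, V = c * A**b,
# to a lake-area census. Coefficients c and b and the areas are hypothetical.
import numpy as np

def total_lake_volume(areas_km2, c=0.2, b=1.2):
    """Sum per-lake volumes (km^3) from a power-law volume-area relation."""
    areas = np.asarray(areas_km2, dtype=float)
    return float(np.sum(c * areas ** b))

census = [0.01, 0.05, 0.5, 3.2, 82.0, 31500.0]   # hypothetical lake areas, km^2
print(f"total volume ~ {total_lake_volume(census):.1f} km^3 (illustrative only)")
```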

  8. Estimation of Error in Maximal Intensity Projection-Based Internal Target Volume of Lung Tumors: A Simulation and Comparison Study Using Dynamic Magnetic Resonance Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Jing; Read, Paul W.; Baisden, Joseph M.

    Purpose: To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Methods and Materials: Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Results: Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Conclusions: Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
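    The reported regression can be used directly to gauge the expected MIP-based error for a given respiratory variability, as in the sketch below; the variability values are hypothetical, and their scale follows whatever definition the study used for ν.

```python
# Evaluate the reported linear relation between MIP-based ITA error and
# respiratory variability: epsilon = -5.13*nu - 6.71 (r^2 = 0.76).
# The nu values below are hypothetical.
def predicted_ita_error(nu):
    """Predicted percent error in MIP-based ITA for respiratory variability nu."""
    return -5.13 * nu - 6.71

for nu in (1.0, 2.0, 3.0):
    print(f"nu = {nu:.1f} -> predicted error = {predicted_ita_error(nu):.1f}%")
```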

  9. Validation of ultrasonography of the thyroid gland for epidemiological purposes.

    PubMed

    Knudsen, N; Bols, B; Bülow, I; Jørgensen, T; Perrild, H; Ovesen, L; Laurberg, P

    1999-11-01

    Ultrasonography of the thyroid is often used in epidemiological surveys, so thorough characterization of the interobserver variation of the different parameters obtained is important. Various methods have been used for measuring thyroid volume, and different formulas have been used for calculating thyroid volume from the measured dimensions. In this article, two principles of thyroid volume measurement are described in detail: the well-known method based on the three axes of each lobe and a new principle based on planimetry in two planes. The interobserver variation of the examination and of the measuring procedure itself were tested on 25 participants in a population study. A comparison of postmortem ultrasonography of the thyroid with autopsy results was performed. Good correlation and agreement between observers were found for thyroid volume (r = 0.98) and the prevalence of thyroid nodules (kappa = 0.72), whereas echogenicity and echo pattern showed little agreement. The correlation of thyroid volume by ultrasonography with autopsy results was satisfactory (r = 0.93), but the volume tended to be slightly underestimated even when using the formula π/6 (≈0.52) × length × width × depth. No major differences were found between the performance of the two principles of volume calculation. We conclude that when the measuring procedure is well defined, the results of ultrasonography are comparable between observers for thyroid volume and the prevalence of thyroid nodules, but not for echogenicity or echo pattern. The formula length × width × depth × π/6 is suitable for thyroid volume measurement.
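    A worked example of the ellipsoid formula cited above, applied per lobe and summed, is shown below; the lobe dimensions are hypothetical.

```python
# Ellipsoid formula for a single thyroid lobe: V = pi/6 * length * width * depth
# (pi/6 is approximately 0.52). Dimensions are hypothetical; 1 cm^3 = 1 mL.
import math

def lobe_volume(length_cm, width_cm, depth_cm):
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

right = lobe_volume(4.8, 1.8, 1.9)
left = lobe_volume(4.5, 1.7, 1.8)
print(f"right lobe = {right:.1f} mL, left lobe = {left:.1f} mL, total = {right + left:.1f} mL")
```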

  10. Elasticity-based three dimensional ultrasound real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.

    2009-02-01

    Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Their use, however, has been hindered by the lack of real-time visualization methods that are capable of producing high-quality 3D renderings of the target/surface of interest. Volume rendering is a well-known visualization method, which can display clear surfaces from the acquired volumetric data, and it has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on a GPU, which gives an update rate of 40 volumes/s.

  11. Estimation on the First Cycle of the Annual Forest Inventory System: Methods, Preliminary Results, and Observations

    Treesearch

    Mark H. Hansen; Gary J. Brand; Daniel G. Wendt; Ronald E. McRoberts

    2001-01-01

    The first year of annual FIA data collection in the North Central region was completed for 1999 in Indiana, Iowa, Minnesota, and Missouri. Estimates of timberland area, total growing-stock volume and growing-stock volume per acre are presented. These estimates are based on data from 1 year, collected at the base Federal inventory intensity, a lower intensity sample...

  12. TU-AB-BRA-11: Evaluation of Fully Automatic Volumetric GBM Segmentation in the TCGA-GBM Dataset: Prognosis and Correlation with VASARI Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios Velazquez, E; Meier, R; Dunn, W

    Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range of r: 0.65-0.91) and showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60), both of which could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.

  13. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    PubMed Central

    Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. Methods We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results. PMID:28533960
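    The single-slice OLS idea described above amounts to a one-predictor linear fit, as sketched below; the slice areas and whole-body volumes are hypothetical stand-ins for the CT-derived training data, not values from the study.

```python
# One-predictor OLS fit: whole-body adipose tissue (AT) volume predicted from
# the AT area of a single L4-L5 slice. All numbers are hypothetical.
import numpy as np

l4l5_at_area_cm2 = np.array([180.0, 250.0, 310.0, 140.0, 420.0, 275.0])
total_at_volume_l = np.array([18.5, 26.0, 33.1, 14.2, 44.8, 28.9])

slope, intercept = np.polyfit(l4l5_at_area_cm2, total_at_volume_l, 1)
pred = slope * l4l5_at_area_cm2 + intercept
mean_abs_pe = np.abs(pred - total_at_volume_l).mean()
print(f"volume ~ {slope:.3f} * area + {intercept:.2f}; mean |PE| = {mean_abs_pe:.2f} L")
```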

  14. Mapping debris-flow hazard in Honolulu using a DEM

    USGS Publications Warehouse

    Ellen, Stephen D.; Mark, Robert K.; ,

    1993-01-01

    A method for mapping hazard posed by debris flows has been developed and applied to an area near Honolulu, Hawaii. The method uses studies of past debris flows to characterize sites of initiation, volume at initiation, and volume-change behavior during flow. Digital simulations of debris flows based on these characteristics are then routed through a digital elevation model (DEM) to estimate degree of hazard over the area.

  15. 40 CFR 63.6620 - What performance tests and other procedures must I use?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... based on the ratio of oxygen volume to the ultimate CO2 volume produced by the fuel at zero percent... volume of CO2 produced to the gross calorific value of the fuel from Method 19, dsm3/J (dscf/106 Btu... equivalent percent carbon dioxide (CO2). If pollutant concentrations are to be corrected to 15 percent oxygen...

  16. 40 CFR 63.6620 - What performance tests and other procedures must I use?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... based on the ratio of oxygen volume to the ultimate CO2 volume produced by the fuel at zero percent... volume of CO2 produced to the gross calorific value of the fuel from Method 19, dsm3/J (dscf/106 Btu... equivalent percent carbon dioxide (CO2). If pollutant concentrations are to be corrected to 15 percent oxygen...

  17. Comparative assessment of GIS-based methods and metrics for estimating long-term exposures to air pollution

    NASA Astrophysics Data System (ADS)

    Gulliver, John; de Hoogh, Kees; Fecht, Daniela; Vienneau, Danielle; Briggs, David

    2011-12-01

    The development of geographical information system techniques has opened up a wide array of methods for air pollution exposure assessment. The extent to which these provide reliable estimates of air pollution concentrations is nevertheless not clearly established. Nor is it clear which methods or metrics should be preferred in epidemiological studies. This paper compares the performance of ten different methods and metrics in terms of their ability to predict mean annual PM10 concentrations across 52 monitoring sites in London, UK. Metrics analysed include indicators (distance to nearest road, traffic volume on nearest road, heavy duty vehicle (HDV) volume on nearest road, road density within 150 m, traffic volume within 150 m and HDV volume within 150 m) and four modelling approaches: based on the nearest monitoring site, kriging, dispersion modelling and land use regression (LUR). Measures were computed in a GIS, and resulting metrics calibrated and validated against monitoring data using a form of grouped jack-knife analysis. The results show that PM10 concentrations across London show little spatial variation. As a consequence, most methods can predict the average without serious bias. Few of the approaches, however, show good correlations with monitored PM10 concentrations, and most predict no better than a simple classification based on site type. Only land use regression reaches acceptable levels of correlation (R² = 0.47), though this can be improved by also including information on site type. This might therefore be taken as a recommended approach in many studies, though care is needed in developing meaningful land use regression models, and like any method they need to be validated against local data before their application as part of epidemiological studies.

  18. ANTONIA perfusion and stroke. A software tool for the multi-purpose analysis of MR perfusion-weighted datasets and quantitative ischemic stroke assessment.

    PubMed

    Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J

    2014-01-01

    The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.

  19. Clearance detector and method for motion and distance

    DOEpatents

    Xavier, Patrick G [Albuquerque, NM

    2011-08-09

    A method for correct and efficient detection of clearances between three-dimensional bodies in computer-based simulations, where one or both of the volumes is subject to translations and/or rotations. The method conservatively determines the size of such clearances and whether there is a collision between the bodies. Given two bodies, each of which is undergoing separate motions, the method utilizes bounding-volume hierarchy representations for the two bodies, and mappings and inverse mappings for the motions of the two bodies. The method uses the representations, mappings, and direction vectors to determine the directionally furthest locations of points on the convex hulls of the volumes virtually swept by the bodies, and hence the clearance between the bodies, without having to calculate the convex hulls of the bodies. The method includes clearance detection for bodies comprising convex geometrical primitives and more specific techniques for bodies comprising convex polyhedra.

  20. Numerical solutions of the macroscopic Maxwell equations for scattering by non-spherical particles: A tutorial review

    NASA Astrophysics Data System (ADS)

    Kahnert, Michael

    2016-07-01

    Numerical solution methods for electromagnetic scattering by non-spherical particles comprise a variety of different techniques, which can be traced back to different assumptions and solution strategies applied to the macroscopic Maxwell equations. One can distinguish between time- and frequency-domain methods; further, one can divide numerical techniques into finite-difference methods (which are based on approximating the differential operators), separation-of-variables methods (which are based on expanding the solution in a complete set of functions, thus approximating the fields), and volume integral-equation methods (which are usually solved by discretisation of the target volume and invoking the long-wave approximation in each volume cell). While existing reviews of the topic often tend to have a target audience of program developers and expert users, this tutorial review is intended to accommodate the needs of practitioners as well as novices to the field. The required conciseness is achieved by limiting the presentation to a selection of illustrative methods, and by omitting many technical details that are not essential at a first exposure to the subject. On the other hand, the theoretical basis of numerical methods is explained with little compromises in mathematical rigour; the rationale is that a good grasp of numerical light scattering methods is best achieved by understanding their foundation in Maxwell's theory.

  1. Converging evidence for abnormalities of the prefrontal cortex and evaluation of midsagittal structures in pediatric PTSD: an MRI study

    PubMed Central

    Carrion, Victor G.; Weems, Carl F.; Watson, Christa; Eliez, Stephan; Menon, Vinod; Reiss, Allan L.

    2009-01-01

    Objective: Volumetric imaging research has shown abnormal brain morphology in posttraumatic stress disorder (PTSD) when compared to controls. We present results of a study of brain morphology in the prefrontal cortex (PFC) and midline structures, via indices of gray matter volume and density, in pediatric PTSD. We hypothesized that both methods would demonstrate aberrant morphology in the PFC. Further, we hypothesized aberrant brainstem anatomy and reduced corpus callosum volume in children with PTSD. Methods: Twenty-four children (aged 7-14) with a history of interpersonal trauma and 24 age- and gender-matched controls underwent structural magnetic resonance imaging. Images of the PFC and midline brain structures were first analyzed using volumetric image analysis. The PFC data were then compared with whole-brain voxel-based techniques using statistical parametric mapping (SPM). Results: The PTSD group showed significantly increased gray matter volume in the right and left inferior and superior quadrants of the prefrontal cortex and smaller gray matter volume in the pons and posterior vermis areas by volumetric image analysis. The voxel-by-voxel group comparisons demonstrated increased gray matter density, mostly localized to the ventral PFC, compared with the control group. Conclusions: Abnormal frontal lobe morphology, as revealed by separate, complementary image analysis methods, and reduced pons and posterior vermis areas are associated with pediatric PTSD. Voxel-based morphometry may help to corroborate and further localize data obtained by volume-of-interest methods in PTSD. PMID:19349151

  2. Factors Affecting Prostate Volume Estimation in Computed Tomography Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Cheng-Hsiu; Wang, Shyh-Jen; Institute of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan

    2011-04-01

    The aim of this study was to investigate how apex-localizing methods and the computed tomography (CT) slice thickness affect CT-based prostate volume estimation. Twenty-eight volunteers underwent evaluations of prostate volume by CT, where the contour segmentations were performed by three observers. The bottom of the ischial tuberosities (ITs) and the bulb of the penis were used as reference positions to locate the apex, and the distances to the apex were recorded as 1.3 and 2.0 cm, respectively. Interobserver variations in locating the ITs and the bulb of the penis were, on average, 0.10 cm (range 0.03-0.38 cm) and 0.30 cm (range 0.00-0.98 cm), respectively. The CT slice thickness was varied from 0.08-0.48 cm to examine the influence of this variation on volume estimation. The volume deviation from the reference case (0.08 cm), which increases in tandem with the slice thickness, was within ±3 cm³, regardless of the adopted apex-locating reference positions. In addition, the maximum error of apex identification was 1.5 times the slice thickness. Finally, based on the precise CT films and the methods of apex identification, there were strong positive correlation coefficients between the prostate volume estimated by CT and by transabdominal ultrasonography (r > 0.87; p < 0.0001), and this was confirmed by Bland-Altman analysis. These results will help to identify factors that affect prostate volume calculation and contribute to improved estimation of the prostate volume based on CT images.

  3. Fusing literature and full network data improves disease similarity computation.

    PubMed

    Li, Ping; Nie, Yaling; Yu, Jingkai

    2016-08-30

    Identifying relatedness among diseases could help deepen understanding of the underlying pathogenic mechanisms of diseases and facilitate drug repositioning projects. A number of methods for computing disease similarity have been developed; however, none of them were designed to utilize information from the entire protein interaction network, using instead only those interactions involving disease-causing genes. Most previously published methods required gene-disease association data; unfortunately, many diseases still have very few or no associated genes, which has impeded broad adoption of those methods. In this study, we propose a new method (MedNetSim) for computing disease similarity by integrating medical literature and the protein interaction network. MedNetSim consists of a network-based method (NetSim), which employs the entire protein interaction network, and a MEDLINE-based method (MedSim), which computes disease similarity by mining the biomedical literature. Among function-based methods, NetSim achieved the best performance; its average AUC (area under the receiver operating characteristic curve) reached 95.2%. MedSim, whose performance was even comparable to some function-based methods, acquired the highest average AUC among all semantic-based methods. Integration of MedSim and NetSim (MedNetSim) further improved the average AUC to 96.4%. We further studied the effectiveness of different data sources. It was found that the quality of protein interaction data was more important than its volume. On the contrary, a higher volume of gene-disease association data was more beneficial, even with lower reliability. Utilizing a higher volume of disease-related gene data further improved the average AUC of MedNetSim and NetSim to 97.5% and 96.7%, respectively. Integrating biomedical literature and the protein interaction network can be an effective way to compute disease similarity. Lacking sufficient disease-related gene data, literature-based methods such as MedSim can be a great addition to function-based algorithms. It may be beneficial to steer more resources toward studying gene-disease associations and improving the quality of protein interaction data. Disease similarities can be computed using the proposed methods at http://www.digintelli.com:8000/.

  4. Automated segmentation of serous pigment epithelium detachment in SD-OCT images

    NASA Astrophysics Data System (ADS)

    Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2015-03-01

    Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), Dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) between the segmented PED volumes and the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
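    The overlap metrics reported above can be computed from binary masks as sketched below. Note that FPVF conventions vary; here it is normalized by the background volume, which may differ from the paper's definition, and the masks are toy arrays rather than OCT data.

```python
# Overlap metrics (TPVF, FPVF, DSC) from a binary segmentation and a binary
# ground-truth mask. The masks are toy arrays for illustration only.
import numpy as np

def overlap_metrics(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()
    fp = np.logical_and(seg, ~gt).sum()
    tpvf = tp / gt.sum()                       # true positive volume fraction
    fpvf = fp / (~gt).sum()                    # false positives / background volume
    dsc = 2.0 * tp / (seg.sum() + gt.sum())    # Dice similarity coefficient
    return tpvf, fpvf, dsc

rng = np.random.default_rng(1)
gt = rng.random((32, 32, 32)) > 0.7
seg = gt.copy()
seg[0:2] = ~seg[0:2]                           # perturb a few slices
print("TPVF = %.3f, FPVF = %.5f, DSC = %.3f" % overlap_metrics(seg, gt))
```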

  5. Cost and price estimate of Brayton and Stirling engines in selected production volumes

    NASA Technical Reports Server (NTRS)

    Fortgang, H. R.; Mayers, H. F.

    1980-01-01

    The methods used to determine the production costs and required selling price of Brayton and Stirling engines modified for use in solar power conversion units are presented. Each engine part, component and assembly was examined and evaluated to determine the costs of its material and the method of manufacture based on specific annual production volumes. Cost estimates are presented for both the Stirling and Brayton engines in annual production volumes of 1,000, 25,000, 100,000 and 400,000. At annual production volumes above 50,000 units, the costs of both engines are similar, although the Stirling engine costs are somewhat lower. It is concluded that modifications to both the Brayton and Stirling engine designs could reduce the estimated costs.

  6. Airway extraction from 3D chest CT volumes based on iterative extension of VOI enhanced by cavity enhancement filter

    NASA Astrophysics Data System (ADS)

    Meng, Qier; Kitasaka, Takayuki; Oda, Masahiro; Mori, Kensaku

    2017-03-01

    Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining an integrated 3-D airway tree structure from a CT volume is a quite challenging task. This paper presents a novel airway segmentation method based on intensity structure analysis and bronchus shape structure analysis in a volume of interest (VOI). The method segments the bronchial regions by applying a cavity enhancement filter (CEF) to trace the bronchial tree structure from the trachea. It uses the CEF in each VOI to segment each branch and to predict the positions of the VOIs that envelope the bronchial regions at the next level. At the same time, leakage detection is performed to avoid leakage by analysing the pixel information and the shape information of the airway candidate regions extracted in the VOI. The bronchial regions are finally obtained by unifying the extracted airway regions. The experimental results showed that the proposed method can extract most of the bronchial region in each VOI and led to good airway segmentation results.

  7. Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.

    PubMed

    Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen

    2008-02-01

    A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
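    A hedged sketch of the KNN idea follows: voxels are classified from intensity plus spatial features, class probabilities serve as probabilistic segmentations, and thresholding yields binary maps. The features, labels, neighbor count, and threshold below are synthetic placeholders, not the study's settings.

```python
# KNN classification of voxels into four tissue classes from intensity and
# spatial features, with predict_proba as the probabilistic segmentation.
# All arrays and parameters are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n = 3000
X_train = rng.normal(size=(n, 5))      # e.g., [T2 intensity, IR intensity, x, y, z]
y_train = rng.integers(0, 4, size=n)   # 0=WM, 1=CEGM, 2=COGM, 3=CSF (hypothetical coding)

knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)

X_new = rng.normal(size=(10, 5))
prob = knn.predict_proba(X_new)        # probabilistic segmentation per voxel
binary_wm = prob[:, 0] > 0.5           # threshold the WM probability map
print(prob.round(2))
print("WM voxels above threshold:", int(binary_wm.sum()))
```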

  8. Registration of 3D fetal neurosonography and MRI

    PubMed Central

    Kuklisova-Murgasova, Maria; Cifor, Amalia; Napolitano, Raffaele; Papageorghiou, Aris; Quaghebeur, Gerardine; Rutherford, Mary A.; Hajnal, Joseph V.; Noble, J. Alison; Schnabel, Julia A.

    2013-01-01

    We propose a method for registration of 3D fetal brain ultrasound with a reconstructed magnetic resonance fetal brain volume. This method, for the first time, allows the alignment of models of the fetal brain built from magnetic resonance images with 3D fetal brain ultrasound, opening possibilities to develop new, prior information based image analysis methods for 3D fetal neurosonography. The reconstructed magnetic resonance volume is first segmented using a probabilistic atlas and a pseudo ultrasound image volume is simulated from the segmentation. This pseudo ultrasound image is then affinely aligned with clinical ultrasound fetal brain volumes using a robust block-matching approach that can deal with intensity artefacts and missing features in the ultrasound images. A qualitative and quantitative evaluation demonstrates good performance of the method for our application, in comparison with other tested approaches. The intensity average of 27 ultrasound images co-aligned with the pseudo ultrasound template shows good correlation with anatomy of the fetal brain as seen in the reconstructed magnetic resonance image. PMID:23969169

  9. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is important for ophthalmologists making pathological diagnoses in patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume based on JPEG-formatted image files that have been converted from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients is analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation, although further work is needed to simplify the manual preprocessing of the images.

  10. A laboratory method for precisely determining the micro-volume-magnitudes of liquid efflux

    NASA Technical Reports Server (NTRS)

    Cloutier, R. L.

    1969-01-01

    Micro-volumetric quantities of ejected liquid are made to produce equal volumetric displacements of a more dense material. Weight measurements are obtained on the displaced heavier liquid and used to calculate volumes based upon the known density of the heavy medium.
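    The calculation behind this displacement principle is a one-liner, shown below with hypothetical numbers (a mercury-like heavy medium is assumed purely for illustration).

```python
# Ejected micro-volume from the weight of displaced heavy liquid and its known
# density. The mass and density values are hypothetical.
def ejected_volume_ul(displaced_mass_g, heavy_density_g_per_ml):
    return displaced_mass_g / heavy_density_g_per_ml * 1000.0   # mL -> microliters

# e.g., 0.0136 g of a mercury-like liquid (density 13.6 g/mL) displaced
print(f"ejected volume = {ejected_volume_ul(0.0136, 13.6):.2f} uL")
```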

  11. Nonlocal Intracranial Cavity Extraction

    PubMed Central

    Manjón, José V.; Eskildsen, Simon F.; Coupé, Pierrick; Romero, José E.; Collins, D. Louis; Robles, Montserrat

    2014-01-01

    Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and follow-up of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal intersubject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes the ICV estimation problem challenging. In this paper, we present a new approach to perform ICV extraction based on the use of a library of prelabeled brain images to capture the large variability of brain shapes. To this end, an improved nonlocal label fusion scheme based on the BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate improved performance in terms of both accuracy and reproducibility while maintaining a reduced computational burden. PMID:25328511

  12. Micro CT based truth estimation of nodule volume

    NASA Astrophysics Data System (ADS)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).

  13. [Compatible biomass models of natural spruce (Picea asperata)].

    PubMed

    Wang, Jin Chi; Deng, Hua Feng; Huang, Guo Sheng; Wang, Xue Jun; Zhang, Lu

    2017-10-01

    By using the nonlinear measurement error method, compatible tree volume and aboveground biomass equations were established based on the volume and biomass data of 150 sample trees of natural spruce (Picea asperata). Two approaches, controlling directly under the total aboveground biomass and controlling jointly from level to level, were used to design the compatible system for the total aboveground biomass and the biomass of four components (stem, bark, branch, and foliage), and the total aboveground biomass could be estimated independently or simultaneously within the system. The results showed that the R² values of the one-variable and bivariate compatible tree volume and aboveground biomass equations were all above 0.85, with a maximum of 0.99. The predictive performance of the volume equations improved significantly when tree height was included as a predictor, whereas the improvement was not significant for biomass estimation. For the compatible biomass systems, the one-variable model based on controlling jointly from level to level was better than the model controlling directly under the total aboveground biomass, but the bivariate models of the two methods were similar. Comparing the fitting performance of the one-variable and bivariate compatible biomass models showed that adding explanatory variables significantly improved the fit for branch and foliage biomass but had little effect on the other components. In addition, there was almost no difference between the two estimation methods in this comparison.

  14. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
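
    The approach the study recommends, multiple linear regression on calendar variables, is straightforward to sketch. The data layout, variable names, and choice of dummies below are assumptions for illustration, not the authors' code:

        # Calendar-variable regression for daily ED arrivals (illustrative sketch).
        import pandas as pd
        from sklearn.linear_model import LinearRegression

        def fit_calendar_model(daily_counts: pd.Series):
            """daily_counts: daily ED arrivals indexed by a pandas DatetimeIndex."""
            cal = pd.DataFrame({
                "dow": daily_counts.index.dayofweek,   # weekly pattern
                "month": daily_counts.index.month,     # seasonal pattern
            })
            X = pd.get_dummies(cal, columns=["dow", "month"], drop_first=True)
            model = LinearRegression().fit(X, daily_counts.values)
            return model, list(X.columns)

        # Forecasts are produced by building the same dummy matrix for future dates and
        # calling model.predict(...); site-specific special-day indicators and a residual
        # autocorrelation term would be added along the lines the authors describe.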

  15. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
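
    As a side note on the representation being compressed, a signed-distance volume can be built from a binary occupancy mask with an off-the-shelf Euclidean distance transform; the sketch below (using SciPy, not the paper's O(n) transform or wavelet coder) is only meant to make the data structure concrete:

        # Signed-distance volume from a boolean "inside" mask.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def signed_distance_volume(inside: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
            """inside: boolean volume, True where the surface's interior lies."""
            outside_dist = distance_transform_edt(~inside, sampling=spacing)
            inside_dist = distance_transform_edt(inside, sampling=spacing)
            return outside_dist - inside_dist   # the zero level set approximates the surface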

  16. IMP: Interactive mass properties program. Volume 1: Program description

    NASA Technical Reports Server (NTRS)

    Stewart, W. A.

    1976-01-01

    A method of computing a weights and center of gravity analysis of a flight vehicle using interactive graphical capabilities of the Adage 340 computer is described. The equations used to calculate area, volume, and mass properties are based on elemental surface characteristics. The input/output methods employ the graphic support of the Adage computer. Several interactive program options are available for analyzing the mass properties of a vehicle. These options are explained.

  17. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
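
    A minimal sketch of the underlying idea, redistributing points along a curve so they follow a prescribed arc-length spacing, is shown below (a generic NumPy illustration, not the NASA code; a non-uniform target distribution would replace the linspace call):

        # Resample a polyline to points distributed by arc length.
        import numpy as np

        def redistribute_by_arclength(points: np.ndarray, n_new: int) -> np.ndarray:
            """points: (N, dim) polyline vertices; returns (n_new, dim) resampled points."""
            seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
            s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
            s_new = np.linspace(0.0, s[-1], n_new)             # target arc-length distribution
            return np.column_stack([np.interp(s_new, s, points[:, d])
                                    for d in range(points.shape[1])])

        # example: 11 points equally spaced in arc length along a quarter circle
        theta = np.linspace(0.0, np.pi / 2, 200)
        curve = np.column_stack([np.cos(theta), np.sin(theta)])
        grid_points = redistribute_by_arclength(curve, 11)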

  18. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    PubMed

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.

  19. Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.

    PubMed

    Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni

    2012-01-01

    Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.
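
    The change-ratio metric defined above is simple to reproduce; the volumes below are hypothetical and serve only to make the formula concrete:

        # change ratio (%) = 100 * (measured volume - mean volume over protocols) / mean volume
        volumes_ml = [3.61, 3.55, 3.72, 3.58, 3.64]           # one structure, five MR protocols
        mean_v = sum(volumes_ml) / len(volumes_ml)
        change_ratios = [100.0 * (v - mean_v) / mean_v for v in volumes_ml]
        # smaller absolute change ratios indicate lower system (protocol) dependency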

  20. Automatic liver segmentation from abdominal CT volumes using graph cuts and border marching.

    PubMed

    Liao, Miao; Zhao, Yu-Qian; Liu, Xi-Yao; Zeng, Ye-Zhan; Zou, Bei-Ji; Wang, Xiao-Fang; Shih, Frank Y

    2017-05-01

    Identifying liver regions from abdominal computed tomography (CT) volumes is an important task for computer-aided liver disease diagnosis and surgical planning. This paper presents a fully automatic method for liver segmentation from CT volumes based on graph cuts and border marching. An initial slice is segmented by density peak clustering. Based on pixel- and patch-wise features, an intensity model and a PCA-based regional appearance model are developed to enhance the contrast between liver and background. Then, these models as well as the location constraint estimated iteratively are integrated into graph cuts in order to segment the liver in each slice automatically. Finally, a vessel compensation method based on the border marching is used to increase the segmentation accuracy. Experiments are conducted on a clinical data set we created and also on the MICCAI2007 Grand Challenge liver data. The results show that the proposed intensity, appearance models, and the location constraint are significantly effective for liver recognition, and the undersegmented vessels can be compensated by the border marching based method. The segmentation performances in terms of VOE, RVD, ASD, RMSD, and MSD as well as the average running time achieved by our method on the SLIVER07 public database are 5.8 ± 3.2%, -0.1 ± 4.1%, 1.0 ± 0.5mm, 2.0 ± 1.2mm, 21.2 ± 9.3mm, and 4.7 minutes, respectively, which are superior to those of existing methods. The proposed method does not require time-consuming training process and statistical model construction, and is capable of dealing with complicated shapes and intensity variations successfully. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A characteristic based volume penalization method for general evolution problems applied to compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.

    2014-04-01

    In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
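
    To make the penalization idea concrete, the sketch below applies classical Brinkman-type (Dirichlet) volume penalization to a 1D diffusion equation; the characteristic-based hyperbolic terms that the paper adds for Neumann and Robin conditions are not reproduced, and all parameter values are illustrative:

        # 1-D diffusion with Dirichlet volume penalization inside a masked obstacle.
        import numpy as np

        nx, L = 200, 1.0
        x = np.linspace(0.0, L, nx)
        dx = x[1] - x[0]
        nu, eta = 1e-3, 1e-4                               # diffusivity, penalization parameter
        chi = ((x > 0.4) & (x < 0.6)).astype(float)        # mask = 1 inside the obstacle
        u_obstacle = 0.0                                   # value enforced inside the obstacle
        u = np.sin(np.pi * x)                              # initial condition

        dt = 0.2 * min(dx**2 / nu, eta)                    # stable step for diffusion and forcing
        for _ in range(2000):
            lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
            u += dt * (nu * lap - chi / eta * (u - u_obstacle))   # penalized update
            u[0] = u[-1] = 0.0                             # outer Dirichlet boundaries
        # as eta -> 0, u approaches u_obstacle inside the mask, mimicking the boundary condition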

  2. The cost of cancer registry operations: Impact of volume on cost per case for core and enhanced registry activities

    PubMed Central

    Subramanian, Sujha; Tangka, Florence K.L.; Beebe, Maggie Cole; Trebino, Diana; Weir, Hannah K.; Babcock, Frances

    2016-01-01

    Background: Cancer registration data is vital for creating evidence-based policies and interventions. Quantifying the resources needed for cancer registration activities and identifying potential efficiencies are critically important to ensure sustainability of cancer registry operations. Methods: Using a previously validated web-based cost assessment tool, we collected activity-based cost data and report findings using 3 years of data from 40 National Program of Cancer Registries grantees. We stratified registries by volume: low-volume included fewer than 10,000 cases, medium-volume included 10,000–50,000 cases, and high-volume included >50,000 cases. Results: Low-volume cancer registries incurred an average of $93.11 to report a case (without in-kind contributions) compared with $27.70 incurred by high-volume registries. Across all registries, the highest cost per case was incurred for data collection and abstraction ($8.33), management ($6.86), and administration ($4.99). Low- and medium-volume registries have higher costs than high-volume registries for all key activities. Conclusions: Some cost differences by volume can be explained by the large fixed costs required for administering and performing registration activities, but other reasons may include the quality of the data initially submitted to the registries from reporting sources such as hospitals and pathology laboratories. Automation or efficiency improvements in data collection can potentially reduce overall costs. PMID:26702880

  3. Scan-based volume animation driven by locally adaptive articulated registrations.

    PubMed

    Rhee, Taehyun; Lewis, J P; Neumann, Ulrich; Nayak, Krishna S

    2011-03-01

    This paper describes a complete system to create anatomically accurate example-based volume deformation and animation of articulated body regions, starting from multiple in vivo volume scans of a specific individual. In order to solve the correspondence problem across volume scans, a template volume is registered to each sample. The wide range of pose variations is first approximated by volume blend deformation (VBD), providing proper initialization of the articulated subject in different poses. A novel registration method is presented to efficiently reduce the computation cost while avoiding strong local minima inherent in complex articulated body volume registration. The algorithm highly constrains the degrees of freedom and search space involved in the nonlinear optimization, using hierarchical volume structures and locally constrained deformation based on the biharmonic clamped spline. Our registration step establishes a correspondence across scans, allowing a data-driven deformation approach in the volume domain. The results provide an occlusion-free person-specific 3D human body model, asymptotically accurate inner tissue deformations, and realistic volume animation of articulated movements driven by standard joint control estimated from the actual skeleton. Our approach also addresses the practical issues arising in using scans from living subjects. The robustness of our algorithms is tested by their applications on the hand, probably the most complex articulated region in the body, and the knee, a frequent subject area for medical imaging due to injuries. © 2011 IEEE

  4. Analysis and forecast of railway coal transportation volume based on BP neural network combined forecasting model

    NASA Astrophysics Data System (ADS)

    Xu, Yongbin; Xie, Haihong; Wu, Liuyi

    2018-05-01

    The share of coal transportation in total railway freight volume is about 50%. As is widely acknowledged, the coal industry is sensitive to the economic situation and national policies, and coal transportation volume fluctuates significantly under the new economic normal. Grasping the overall development trend of the railway coal transportation market therefore has important reference and guidance value for decision-making in the railway and coal industries. By analyzing economic indicators and policy implications, this paper describes the trend of coal transportation volume, and further combines the economic indicators most highly correlated with coal transportation volume with a traditional traffic prediction model to establish a combined forecasting model based on a back propagation neural network. Testing the error of the prediction results shows that the method achieves higher accuracy and has practical applicability.
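
    A rough sketch of the kind of combined model described, a back-propagation (multi-layer perceptron) regressor fed with the economic indicators most correlated with coal transport volume, is given below; the data layout, indicator choice, and network size are assumptions, not the paper's configuration:

        # BP (back-propagation) neural-network forecaster on selected economic indicators.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def fit_bp_forecaster(X: np.ndarray, y: np.ndarray):
            """X: (n_periods, n_indicators) economic indicators; y: coal transport volume."""
            model = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(8,),
                                               max_iter=5000, random_state=0))
            return model.fit(X, y)

        # forecasts for future periods: model.predict(X_future)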

  5. Electrical method for the measurements of volume averaged electron density and effective coupled power to the plasma bulk

    NASA Astrophysics Data System (ADS)

    Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.

    2016-02-01

    Nanoparticles growing or injected in a low pressure cold plasma generated by a radiofrequency capacitively coupled discharge induce strong modifications in the electrical parameters of both the plasma and the discharge. In this paper, a non-intrusive method, based on the measurement of the plasma impedance, is used to determine the volume averaged electron density and the effective power coupled to the plasma bulk. Good agreement is found when the results are compared with those given by other well-known and established methods.

  6. Ischemic lesion volume determination on diffusion weighted images vs. apparent diffusion coefficient maps.

    PubMed

    Bråtane, Bernt Tore; Bastan, Birgul; Fisher, Marc; Bouley, James; Henninger, Nils

    2009-07-07

    Though diffusion weighted imaging (DWI) is frequently used for identifying the ischemic lesion in focal cerebral ischemia, the understanding of the spatiotemporal evolution patterns observed with different analysis methods remains imprecise. DWI and calculated apparent diffusion coefficient (ADC) maps were serially obtained in rat middle cerebral artery occlusion (MCAO) stroke models: permanent, 90 min temporary, and 180 min temporary MCAO. Lesion volumes were analyzed in a blinded and randomized manner by 2 investigators using (i) a previously validated ADC threshold, (ii) visual determination of hypointense regions on ADC maps, and (iii) visual determination of hyperintense regions on DWI. Lesion volumes were correlated with 24 hour 2,3,5-triphenyltetrazolium chloride (TTC)-derived infarct volumes. TTC-derived infarct volumes were not significantly different from the ADC- and DWI-derived lesion volumes at the last imaging time points, except for significantly smaller DWI lesions in the pMCAO model (p=0.02). Volumetric calculations based on the TTC-derived infarcts also correlated significantly more strongly with lesion volumes derived from the last imaging time point on ADC maps than on DWI (p<0.05). Following reperfusion, lesion volumes on the ADC maps decreased significantly, but no change was observed on DWI. Visually determined lesion volumes on ADC maps and DWI by both investigators correlated significantly with threshold-derived lesion volumes on ADC maps, with the former method demonstrating a stronger correlation. There was also better interrater agreement for ADC map analysis than for DWI analysis. Ischemic lesion determination by ADC was more accurate in final infarct prediction, rater independent, and provided exclusive information on ischemic lesion reversibility.
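
    The threshold-based analysis referred to above amounts to counting voxels whose ADC falls below a validated cut-off; a generic sketch follows (the threshold value here is illustrative, not the study's validated one):

        # Lesion volume from an ADC map by simple thresholding.
        import numpy as np

        def adc_lesion_volume(adc_map: np.ndarray, voxel_vol_mm3: float,
                              threshold: float = 0.53e-3) -> float:
            """Return lesion volume (mm^3): count of voxels below the ADC threshold."""
            lesion_mask = adc_map < threshold      # ischemic tissue shows reduced ADC
            return float(lesion_mask.sum()) * voxel_vol_mm3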

  7. Turbulent forced convection of nanofluids downstream an abrupt expansion

    NASA Astrophysics Data System (ADS)

    Kimouche, Abdelali; Mataoui, Amina

    2018-03-01

    Turbulent forced convection of nanofluids through an axisymmetric abrupt expansion is investigated numerically in the present study. The governing equations are solved with the ANSYS 14.0 CFD code based on the finite volume method, implementing the thermo-physical properties of each nanofluid. All results are analyzed through the evolutions of the skin friction coefficient and the Nusselt number. For each nanofluid, the effect of both volume fraction and Reynolds number on this type of flow configuration is examined. An increase of the average Nusselt number with volume fraction and Reynolds number is highlighted and correlated, and two relationships are proposed. The first gives the average Nusselt number as a function of the Reynolds number, the volume fraction, and the ratio of the density of the solid particles to that of the base fluid, Nu = f(Re, φ, ρ_s/ρ_f). The second varies with the Reynolds number, the volume fraction, and the ratio of the conductivity of the solid particles to that of the base fluid, Nu = f(Re, φ, k_s/k_f).

  8. Exploring Dutch surgeons' views on volume-based policies: a qualitative interview study.

    PubMed

    Mesman, Roos; Faber, Marjan J; Westert, Gert P; Berden, Bart

    2018-01-01

    Objective: In many countries, the evidence for volume-outcome associations in surgery has been transferred into policy. Despite the large body of research that exists on the topic, qualitative studies aimed at surgeons' views on, and experiences with, these volume-based policies are lacking. We interviewed Dutch surgeons to gain more insight into the implications of volume-outcome policies for daily clinical practice, as input for effective surgical quality improvement. Methods: Semi-structured interviews were conducted with 20 purposively selected surgeons from a stratified sample for hospital type and speciality. The interviews were recorded, transcribed verbatim and underwent inductive content analysis. Results: Two overarching themes were inductively derived from the data: (1) minimum volume standards and (2) implications of volume-based policies. Although surgeons acknowledged the premise 'more is better', they were critical about the validity and underlying evidence for minimum volume standards. Patients often inquire about caseload, which is met with both understanding and discomfort. Surgeons offered many examples of controversies surrounding the process of determining thresholds as well as the ways in which health insurers use volume as a purchasing criterion. Furthermore, being held accountable for caseload may trigger undesired strategic behaviour, such as unwarranted operations. Volume-based policies also have implications for the survival of low-volume providers and affect patient travel times, although the latter is not necessarily problematic in the Dutch context. Conclusions: Surgeons in this study acknowledged that more volume leads to better quality. However, validity issues, undesired strategic behaviour and the ways in which minimum volume standards are established and applied have made surgeons critical of current policy practice. These findings suggest that volume remains a controversial quality measure and causes polarization that is not conducive to a collective effort for quality improvement. We recommend enforcing thresholds that are based on the best achievable level of consensus and assessing additional criteria when passing judgement on quality of care.

  9. A quantitative index of intracranial cerebrospinal fluid distribution in normal pressure hydrocephalus using an MRI-based processing technique.

    PubMed

    Tsunoda, A; Mitsuoka, H; Sato, K; Kanayama, S

    2000-06-01

    Our purpose was to quantify the intracranial cerebrospinal fluid (CSF) volume components using an original MRI-based segmentation technique and to investigate whether a CSF volume index is useful for diagnosis of normal pressure hydrocephalus (NPH). We studied 59 subjects: 16 patients with NPH, 14 young and 13 elderly normal volunteers, and 16 patients with cerebrovascular disease. Images were acquired on a 1.5-T system, using a 3D-fast asymmetrical spin-echo (FASE) method. A region-growing method (RGM) was used to extract the CSF spaces from the FASE images. Ventricular volume (VV) and intracranial CSF volume (ICV) were measured, and a VV/ICV ratio was calculated. Mean VV and VV/ICV ratio were higher in the NPH group than in the other groups, and the differences were statistically significant, whereas the mean ICV value in the NPH group was not significantly increased. Of the 16 patients in the NPH group, 13 had VV/ICV ratios above 30%. In contrast, no subject in the other groups had a VV/ICV ratio higher than 30%. We conclude that these CSF volume parameters, especially the VV/ICV ratio, are useful for the diagnosis of NPH.
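
    The index itself is a simple ratio; the values below are hypothetical and only illustrate the calculation and the 30% cut-off reported above:

        # VV/ICV index from segmented CSF volumes
        ventricular_volume_ml = 95.0
        intracranial_csf_volume_ml = 270.0
        vv_icv_percent = 100.0 * ventricular_volume_ml / intracranial_csf_volume_ml
        # values above about 30% were seen only in the NPH group in this study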

  10. Knowledge-based automated technique for measuring total lung volume from CT

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; McNitt-Gray, Michael F.; Mankovich, Nicholas J.; Goldin, Jonathan G.; Aberle, Denise R.

    1996-04-01

    A robust, automated technique has been developed for estimating total lung volumes from chest computed tomography (CT) images. The technique includes a method for segmenting major chest anatomy. A knowledge-based approach automates the calculation of separate volumes of the whole thorax, lungs, and central tracheo-bronchial tree from volumetric CT data sets. A simple, explicit 3D model describes properties such as shape, topology and X-ray attenuation, of the relevant anatomy, which constrain the segmentation of these anatomic structures. Total lung volume is estimated as the sum of the right and left lungs and excludes the central airways. The method requires no operator intervention. In preliminary testing, the system was applied to image data from two healthy subjects and four patients with emphysema who underwent both helical CT and pulmonary function tests. To obtain single breath-hold scans, the healthy subjects were scanned with a collimation of 5 mm and a pitch of 1.5, while the emphysema patients were scanned with collimation of 10 mm at a pitch of 2.0. CT data were reconstructed as contiguous image sets. Automatically calculated volumes were consistent with body plethysmography results (< 10% difference).

  11. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  12. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    PubMed

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L; Beauchemin, Steven S; Rodrigues, George; Gaede, Stewart

    2015-02-21

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  13. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70+/-2.30 (B-spline), 1.25+/-1.78 (demons), 0.93+/-1.14 (optical flow), and 4.39+/-3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
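
    The intensity-correction step can be illustrated with a single global histogram-matching pass (the paper applies the matching locally and iteratively inside the registration loop; the sketch below is only a simplified stand-in):

        # Map CBCT grey levels onto the planning-CT intensity distribution.
        import numpy as np
        from skimage.exposure import match_histograms

        def correct_cbct_intensities(cbct: np.ndarray, ct: np.ndarray) -> np.ndarray:
            """Return the CBCT volume with intensities matched to the CT histogram."""
            return match_histograms(cbct, ct)

        # the corrected CBCT would then be passed to a DIR algorithm
        # (e.g. B-spline, demons, or optical flow) to propagate the GTV contours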

  14. Fast and robust segmentation of the striatum using deep convolutional neural networks.

    PubMed

    Choi, Hongyoon; Jin, Kyong Hwan

    2016-12-01

    Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNNs). T1-weighted magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global and a Local CNN: the Global CNN determined the approximate location of the striatum by performing a regression of the input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted. The cropped MR volumes and the output volumes of the Global CNN were used as inputs to the Local CNN, which predicted the label of every voxel. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested on another independent dataset, which showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. Comparison with existing method: segmentation performance of our proposed method was comparable with that of FreeSurfer, while the running time of our approach was approximately three seconds. We suggest this fast and accurate deep CNN-based segmentation for small brain structures, which can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Simulated pressure denaturation thermodynamics of ubiquitin.

    PubMed

    Ploetz, Elizabeth A; Smith, Paul E

    2017-12-01

    Simulations of protein thermodynamics are generally difficult to perform and provide limited information. It is desirable to increase the degree of detail provided by simulation and thereby the potential insight into the thermodynamic properties of proteins. In this study, we outline how to analyze simulation trajectories to decompose conformation-specific, parameter free, thermodynamically defined protein volumes into residue-based contributions. The total volumes are obtained using established methods from Fluctuation Solution Theory, while the volume decomposition is new and is performed using a simple proximity method. Native and fully extended ubiquitin are used as the test conformations. Changes in the protein volumes are then followed as a function of pressure, allowing for conformation-specific protein compressibility values to also be obtained. Residue volume and compressibility values indicate significant contributions to protein denaturation thermodynamics from nonpolar and coil residues, together with a general negative compressibility exhibited by acidic residues. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasquier, David; Lacornerie, Thomas; Vermandel, Maximilien

    Purpose: Target-volume and organ-at-risk delineation is a time-consuming task in radiotherapy planning. The development of automated segmentation tools remains problematic, because of pelvic organ shape variability. We evaluate a three-dimensional (3D), deformable-model approach and a seeded region-growing algorithm for automatic delineation of the prostate and organs-at-risk on magnetic resonance images. Methods and Materials: Manual and automatic delineation were compared in 24 patients using a sagittal T2-weighted (T2-w) turbo spin echo (TSE) sequence and an axial T1-weighted (T1-w) 3D fast-field echo (FFE) or TSE sequence. For automatic prostate delineation, an organ model-based method was used. Prostates without seminal vesicles were delineated as the clinical target volume (CTV). For automatic bladder and rectum delineation, a seeded region-growing method was used. Manual contouring was considered the reference method. The following parameters were measured: volume ratio (Vr) (automatic/manual), volume overlap (Vo) (ratio of the volume of intersection to the volume of union; optimal value = 1), and correctly delineated volume (Vc) (percent ratio of the volume of intersection to the manually defined volume; optimal value = 100). Results: For the CTV, the Vr, Vo, and Vc were 1.13 (±0.1 SD), 0.78 (±0.05 SD), and 94.75 (±3.3 SD), respectively. For the rectum, the Vr, Vo, and Vc were 0.97 (±0.1 SD), 0.78 (±0.06 SD), and 86.52 (±5 SD), respectively. For the bladder, the Vr, Vo, and Vc were 0.95 (±0.03 SD), 0.88 (±0.03 SD), and 91.29 (±3.1 SD), respectively. Conclusions: Our results show that the organ-model method is robust, and results in reproducible prostate segmentation with minor interactive corrections. For automatic bladder and rectum delineation, magnetic resonance imaging soft-tissue contrast enables the use of region-growing methods.
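
    The three evaluation measures defined above translate directly into code for a pair of binary masks; this generic sketch assumes NumPy boolean arrays of equal shape:

        # Volume ratio, volume overlap, and correctly delineated volume for two masks.
        import numpy as np

        def delineation_metrics(auto: np.ndarray, manual: np.ndarray):
            inter = np.logical_and(auto, manual).sum()
            union = np.logical_or(auto, manual).sum()
            Vr = auto.sum() / manual.sum()        # volume ratio (automatic/manual)
            Vo = inter / union                    # volume overlap (optimal value = 1)
            Vc = 100.0 * inter / manual.sum()     # correctly delineated volume (%, optimal = 100)
            return Vr, Vo, Vc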

  17. Volume estimation using food specific shape templates in mobile image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.

    2011-03-01

    As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging, and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is to photograph foods and input these images into a system that can determine the nutrient content of the foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.

  18. "Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; vanGelder, Allen

    1999-01-01

    During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.

  19. Comparative Analysis of 2-D Versus 3-D Ultrasound Estimation of the Fetal Adrenal Gland Volume and Prediction of Preterm Birth

    PubMed Central

    Turan, Ozhan M.; Turan, Sifa; Buhimschi, Irina A.; Funai, Edmund F.; Campbell, Katherine H.; Bahtiyar, Ozan M.; Harman, Chris R.; Copel, Joshua A.; Baschat, Ahmet A; Buhimschi, Catalin S.

    2013-01-01

    Objective: We aim to test the hypothesis that 2D fetal AGV measurements offer volume estimates similar to volume calculations based on the 3D technique. Methods: Fetal AGV was estimated by 3D ultrasound (VOCAL) in 93 women with signs/symptoms of preterm labor and 73 controls. Fetal AGV was also calculated from 2D measurements of the same volume blocks using an ellipsoid formula (0.523 × length × width × depth). Comparisons were performed by intra-class correlation coefficient (ICC), coefficient of repeatability, and the Bland-Altman method. The cAGV (AGV/fetal weight) was calculated for both methods and compared for prediction of PTB within 7 days. Results: Among 168 volumes, there was a significant correlation between the 3D and 2D methods (ICC=0.979 [95%CI: 0.971-0.984]). The coefficient of repeatability for the 3D method was superior to that of the 2D method (intra-observer 3D: 30.8, 2D: 57.6; inter-observer 3D: 12.2, 2D: 15.6). Based on 2D calculations, a cAGV ≥433 mm3/kg was best for prediction of PTB (sensitivity: 75% [95%CI=59-87]; specificity: 89% [95%CI=82-94]). Sensitivity and specificity for the 3D cAGV (cut-off ≥420 mm3/kg) were 85% (95%CI=70-94) and 95% (95%CI=90-98), respectively. In receiver operating characteristic curve analysis, 3D cAGV was superior to 2D cAGV for prediction of PTB (z=1.99, p=0.047). Conclusion: 2D volume estimation of the fetal adrenal gland using the ellipsoid formula cannot replace 3D AGV calculation for prediction of PTB. PMID:22644825
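
    The 2D estimate above is a direct application of the ellipsoid formula; the numbers below are hypothetical and only show how AGV and the weight-corrected cAGV are obtained:

        # Ellipsoid estimate of adrenal gland volume and weight-corrected cAGV.
        length_mm, width_mm, depth_mm = 22.0, 14.0, 10.0     # 2D measurements (illustrative)
        fetal_weight_kg = 1.8

        agv_mm3 = 0.523 * length_mm * width_mm * depth_mm    # about 1611 mm^3
        cagv_mm3_per_kg = agv_mm3 / fetal_weight_kg          # about 895 mm^3/kg
        # the study compares cAGV against method-specific cut-offs (2D >=433, 3D >=420 mm^3/kg)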

  20. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    NASA Astrophysics Data System (ADS)

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-02-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.

  1. Geometric convex cone volume analysis

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Chang, Chein-I.

    2016-05-01

    Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance-unconstrained techniques, the Pixel Purity Index (PPI) and the Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis is generally preferred, which makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC). For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods; they use simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from calculating volume and to further improve the performance of convex cone-based EFAs.
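
    For context, the simplex-volume criterion mentioned for N-FINDR/SGA is the standard determinant formula; the generic sketch below illustrates that calculation only (it is not the GCCVA method itself):

        # Volume of a p-simplex from p+1 vertices in p dimensions:
        # V = |det(v_1 - v_0, ..., v_p - v_0)| / p!
        import numpy as np
        from math import factorial

        def simplex_volume(vertices: np.ndarray) -> float:
            """vertices: (p+1, p) array, e.g. candidate endmembers after dimensionality reduction."""
            edges = vertices[1:] - vertices[0]
            return abs(np.linalg.det(edges)) / factorial(edges.shape[0])

        # a unit right triangle in 2-D has area 0.5
        print(simplex_volume(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])))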

  2. Revised tephra volumes for Cascade Range volcanoes

    NASA Astrophysics Data System (ADS)

    Nathenson, Manuel

    2017-07-01

    Isopach maps from tephra eruptions from Mount St. Helens were reported in Carey et al. (1995) and for tephra eruptions from Glacier Peak in Gardner et al. (1998). For exponential thinning, the isopach data only define a single slope on a log thickness versus square root of area plot. Carey et al. (1995) proposed a model that was used to estimate a second slope, and volumes were presented in both studies using this model. A study by Sulpizio (2005) for estimating the second slope and square root of area where the lines intersect involves a systematic analysis of many eruptions to provide correlation equations. The purpose of this paper is to recalculate the volumes of Cascades eruptions and compare results from the two methods. In order to gain some perspective on the methods for estimating the second slope, we use data for thickness versus distance beyond the last isopach that are available for some of the larger eruptions in the Cascades. The thickness versus square root of area method is extended to thickness versus distance by developing an approximate relation between the two assuming elliptical isopachs with the source at one of the foci. Based on the comparisons made between the Carey et al. (1995) and Sulpizio (2005) methods, it is felt that the later method provides a better estimate of the second slope. For Mount St. Helens, the estimates of total volume using the Sulpizio (2005) method are generally smaller than those using the Carey et al. (1995) method. For the volume estimates of Carey et al. (1995), the volume of the May 18, 1980, eruption of Mount St. Helens is smaller than six of the eight previous eruptions. With the new volumes using the Sulpizio (2005) method, the 1980 eruption is smaller in volume than the upper end of the range for only three of the layers (Wn, Ye, and Yn) and is the same size as layer We. Thus the 1980 eruption becomes representative of the mid-range of volumes rather than being in the lower range.
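
    For the single-segment case, the exponential-thinning model described above has a closed-form volume that is standard in the tephra literature: with thickness T(x) = T0·exp(-k·x) and x the square root of isopach area, V = 2·T0/k². The two-segment estimates discussed in the paper sum piecewise contributions of this kind; only the single-segment expression is sketched below, with purely illustrative numbers:

        # Single-segment exponential-thinning tephra volume.
        def exponential_thinning_volume_km3(T0_m: float, k_per_km: float) -> float:
            """T0_m: extrapolated thickness at the vent (m); k_per_km: thinning slope (1/km)."""
            T0_km = T0_m / 1000.0
            return 2.0 * T0_km / k_per_km ** 2

        # e.g. T0 = 1 m and k = 0.12 /km give roughly 0.14 km^3
        print(exponential_thinning_volume_km3(1.0, 0.12))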

  3. Revised tephra volumes for Cascade Range volcanoes

    USGS Publications Warehouse

    Nathenson, Manuel

    2017-01-01

    Isopach maps from tephra eruptions from Mount St. Helens were reported in Carey et al. (1995) and for tephra eruptions from Glacier Peak in Gardner et al. (1998). For exponential thinning, the isopach data only define a single slope on a log thickness versus square root of area plot. Carey et al. (1995) proposed a model that was used to estimate a second slope, and volumes were presented in both studies using this model. A study by Sulpizio (2005) for estimating the second slope and square root of area where the lines intersect involves a systematic analysis of many eruptions to provide correlation equations. The purpose of this paper is to recalculate the volumes of Cascades eruptions and compare results from the two methods. In order to gain some perspective on the methods for estimating the second slope, we use data for thickness versus distance beyond the last isopach that are available for some of the larger eruptions in the Cascades. The thickness versus square root of area method is extended to thickness versus distance by developing an approximate relation between the two assuming elliptical isopachs with the source at one of the foci. Based on the comparisons made between the Carey et al. (1995) and Sulpizio (2005) methods, it is felt that the later method provides a better estimate of the second slope. For Mount St. Helens, the estimates of total volume using the Sulpizio (2005) method are generally smaller than those using the Carey et al. (1995) method. For the volume estimates of Carey et al. (1995), the volume of the May 18, 1980, eruption of Mount St. Helens is smaller than six of the eight previous eruptions. With the new volumes using the Sulpizio (2005) method, the 1980 eruption is smaller in volume than the upper end of the range for only three of the layers (Wn, Ye, and Yn) and is the same size as layer We. Thus the 1980 eruption becomes representative of the mid-range of volumes rather than being in the lower range.

  4. A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.

    2013-04-15

    Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
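
    The quantity being optimized is the generalized equivalent uniform dose, gEUD = ((1/N)·Σ D_i^a)^(1/a) over the N voxel doses D_i of a structure; a minimal sketch (not the HIPO or planning-system implementation) is:

        # Generalized equivalent uniform dose for a structure's voxel doses.
        import numpy as np

        def geud(doses: np.ndarray, a: float) -> float:
            """doses: per-voxel dose array; a: tissue-specific gEUD parameter."""
            return float(np.mean(doses ** a) ** (1.0 / a))

        # large positive a emphasizes hot spots (serial organs at risk such as the urethra);
        # negative a emphasizes cold spots and is typically used for the target volume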

  5. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process that begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added to the quadratic optimization model, step by step, until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve the new linearly constrained quadratic program. To choose suitable candidate voxels for the next constraint addition, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The geometric distance sorting technique largely reduces the unexpected increase in the objective function value that constraint addition inevitably causes, and it can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable convergence of the iteration. The new algorithm is tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and it is to some extent a more efficient technique for choosing constraints. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving fluence map optimization with dose-volume constraints.
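
    A toy Python sketch of this iterative constraint-adding framework is given below; the random dose-influence matrices, the scipy trust-constr solver (standing in for an interior point method), and the "closest-to-the-limit-first" ordering (standing in for the geometric-distance sorting) are all illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize, LinearConstraint

        rng = np.random.default_rng(0)
        n_t, n_o, n_b = 120, 80, 30                    # target voxels, OAR voxels, beamlets
        A_t = rng.random((n_t, n_b)) + 0.5             # hypothetical dose-influence matrices
        A_o = 0.8 * rng.random((n_o, n_b))
        d_presc = 60.0                                 # prescribed target dose (hypothetical)
        dv_limit, dv_frac = 20.0, 0.30                 # DV constraint: <=30% of OAR voxels above 20

        def objective(x):                              # quadratic mismatch to the prescription
            r = A_t @ x - d_presc
            return float(r @ r)

        capped = []                                    # OAR voxels given hard dose caps so far
        x = np.ones(n_b)
        for _ in range(25):
            cons = [LinearConstraint(A_o[capped], -np.inf, dv_limit)] if capped else []
            res = minimize(objective, x, method="trust-constr",
                           bounds=[(0.0, None)] * n_b, constraints=cons)
            x = res.x
            dose_o = A_o @ x
            over = np.flatnonzero(dose_o > dv_limit)
            allowed = int(dv_frac * n_o)
            if len(over) <= allowed:
                break                                  # dose-volume constraint now satisfied
            # Cap the violating voxels whose doses are closest to the limit first --
            # a simple stand-in for the paper's geometric-distance sorting.
            ranked = [i for i in over[np.argsort(dose_o[over])] if i not in capped]
            capped.extend(ranked[: len(over) - allowed])

        print(f"OAR voxels above the limit: {np.mean(A_o @ x > dv_limit):.0%}")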

  6. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increase the computational cost greatly.

  7. Alterations in white matter volume and its correlation with neuropsychological scales in patients with Alzheimer's disease: a DARTEL-based voxel-based morphometry study.

    PubMed

    Moon, Chung-Man; Shin, Il-Seon; Jeong, Gwang-Woo

    2017-02-01

    Background Non-invasive imaging markers can be used to diagnose Alzheimer's disease (AD) in its early stages, but optimized quantitative analyses to measure brain integrity have been less studied. Purpose To evaluate white matter volume change and its correlation with neuropsychological scales in patients with AD using a diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL)-based voxel-based morphometry (VBM). Material and Methods The 21 participants comprised 11 patients with AD and 10 age-matched healthy controls. High-resolution magnetic resonance imaging (MRI) data were processed by VBM analysis based on the DARTEL algorithm. Results The patients showed significant white matter volume reductions in the posterior limb of the internal capsule, cerebral peduncle of the midbrain, and parahippocampal gyrus compared to healthy controls. In correlation analysis, the parahippocampal volume was positively correlated with the Korean Mini-Mental State Examination score in AD. Conclusion This study provides evidence for localized white matter volume deficits in conjunction with cognitive dysfunction in AD. These findings may help in understanding the neuroanatomical mechanisms of AD and in improving the diagnostic accuracy for AD.

  8. Application of two direct runoff prediction methods in Puerto Rico

    USGS Publications Warehouse

    Sepulveda, N.

    1997-01-01

    Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs were compared with measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method solves the hydraulic routing flow equation resulting from a kinematic wave approximation using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak, although the much faster unit hydrograph method also yields reasonable results.
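
    The unit-hydrograph step of the first method amounts to a discrete convolution of the excess-rainfall hyetograph with the unit-hydrograph ordinates; a minimal sketch with made-up ordinates and rainfall depths:

        import numpy as np

        # Hypothetical 1-hour unit hydrograph ordinates (m^3/s per cm of excess rainfall)
        unit_hydrograph = np.array([0.0, 2.0, 5.0, 8.0, 6.0, 3.0, 1.0, 0.0])
        excess_rainfall = np.array([0.5, 1.2, 0.3])          # cm of excess rainfall per hour

        # Direct-runoff hydrograph as the discrete convolution of the two series
        direct_runoff = np.convolve(excess_rainfall, unit_hydrograph)
        print(direct_runoff)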

  9. SU-C-17A-01: MRI-Based Radiotherapy Treatment Planning In Pelvis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S; Cao, Y; Jolly, S

    2014-06-15

    Purpose: To support radiotherapy dose calculation, synthetic CT (MRCT) image volumes need to represent the electron density of tissues with sufficient accuracy. This study compares CT and MRCT for pelvic radiotherapy. Methods: CT and multi-contrast MRI (T1-based Dixon, T2 TSE, and PETRA sequences) were acquired in a patient enrolled on an IRB-approved protocol. A previously published method was used to create an MRCT image volume by applying fuzzy classification to T1-weighted and calculated water image volumes (air and fluid voxels were excluded using thresholds applied to PETRA and T2-weighted images). The correlation of pelvic bone intensity between CT and MRCT was investigated. Two treatment plans, based on CT and MRCT, were performed to mimic treatment for: (a) pelvic bone metastasis with a 16 MV parallel beam arrangement, and (b) gynecological cancer with 6 MV volumetric modulated arc therapy (VMAT) using two full arcs. The CT-calculated fluence maps were used to recalculate doses using the MRCT-derived density grid. The dose-volume histograms and dose distributions were compared. Results: Bone intensities in the MRCT volume correlated linearly with CT intensities up to 800 HU (containing 96% of the bone volume), and then decreased with increasing CT intensity (4% of the volume). There was no significant difference in dose distributions between CT- and MRCT-based plans, except for the rectum and bladder, for which the V45 differed by 15% and 9%, respectively. These differences may be attributed to movement and volume variations of the visualized normal organs between the CT and MR scans. Conclusion: While MRCT had lower bone intensity in highly dense bone, this did not cause significant dose deviations from CT due to its small percentage of the volume. These results indicate that treatment planning using MRCT could generate dose distributions comparable to those using CT, and further demonstrate the feasibility of using MRI alone to support the Radiation Oncology workflow. NIH R01EB016079.

  10. Comparing minimum spanning trees of the Italian stock market using returns and volumes

    NASA Astrophysics Data System (ADS)

    Coletti, Paolo

    2016-12-01

    We have built the network of the top 100 Italian quoted companies over the decade 2001-2011 using four different methods, comparing the resulting minimum spanning trees across methods and industry sectors. Our starting method is based on Pearson's correlation of log-returns, used by several other authors in the last decade. The second is based on the correlation of symbolized log-returns, the third on the correlation of log-returns and traded money, and the fourth uses a combination of log-returns with traded money. We show that some sectors correspond to clusters of the network while others are scattered, in particular the trading and apparel sectors. We analyze the different graph measures for the four methods, showing that the introduction of volumes induces larger distances and more homogeneous trees without big clusters.
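
    A hedged sketch of the baseline construction (minimum spanning tree from Pearson correlations of log-returns), using the common correlation distance d = sqrt(2(1 − ρ)) and the networkx library; the random input array stands in for the actual price data:

        import numpy as np
        import networkx as nx

        def mst_from_log_returns(log_returns):
            """Minimum spanning tree of stocks from Pearson correlations of log-returns.
            log_returns: array of shape (n_days, n_stocks)."""
            corr = np.corrcoef(log_returns, rowvar=False)
            dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))   # correlation distance
            g = nx.Graph()
            n = dist.shape[0]
            for i in range(n):
                for j in range(i + 1, n):
                    g.add_edge(i, j, weight=float(dist[i, j]))
            return nx.minimum_spanning_tree(g)

        # Example with random data standing in for 100 stocks over ~2500 trading days
        tree = mst_from_log_returns(np.random.default_rng(0).normal(size=(2500, 100)))
        print(tree.number_of_edges())          # a spanning tree has n_stocks - 1 edges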

  11. [A comparison between prostatic volume measured during suprapubic ultrasonography (TAUS) and volume of the enucleated gland after open prostatectomy].

    PubMed

    Szewczyk, Wojciech; Prajsner, Andrzej; Kozina, Janusz; Login, Tomasz; Kaczorowski, Marek

    2004-01-01

    General practitioners very often use transabdominal ultrasonography (TAUS) to measure prostatic volume. With this method it is practically impossible to distinguish between the tissue of benign prostatic hyperplasia (BPH) and the prostatic tissue that forms the so-called surgical capsule of BPH. The aim of this study was to compare the prostatic volume measured during suprapubic (transabdominal) ultrasonography with the volume of the enucleated gland after open prostatectomy. Based on the results, the authors created a nomogram, built on the TAUS measurement of the prostate, that helps predict the volume of BPH. They also found that the surgical capsule of the BPH accounts for about one-third of the whole prostatic volume measured by TAUS.

  12. [Left ventricular volume determination by first-pass radionuclide angiocardiography using a semi-geometric count-based method].

    PubMed

    Kinoshita, S; Suzuki, T; Yamashita, S; Muramatsu, T; Ide, M; Dohi, Y; Nishimura, K; Miyamae, T; Yamamoto, I

    1992-01-01

    A new radionuclide technique for calculating left ventricular (LV) volume by the first-pass (FP) method was developed and examined. Using a semi-geometric count-based method, the LV volume can be obtained from the following equations: CV = CM/(L/d) and V = (CT/CV) × d³ = (CT/CM) × L × d², where V is the LV volume, CV the voxel count, CM the maximum LV count, CT the total LV count, L the LV depth at which the maximum count was obtained, and d the pixel size. This theorem was applied to FP LV images obtained in the 30-degree right anterior oblique position. Frame-mode acquisition was performed and the LV end-diastolic maximum count and total count were obtained. The maximum LV depth was taken as the maximum width of the LV on the FP end-diastolic image, using the assumption that the LV cross-section is circular. These values were substituted into the above equation and the LV end-diastolic volume (FP-EDV) was calculated. A routine equilibrium (EQ) study was also done, and the end-diastolic maximum count and total count were obtained. The LV maximum depth was measured on the FP end-diastolic frame as the maximum length of the LV image. Using these values, the EQ-EDV was calculated and the FP-EDV was compared with the EQ-EDV. The correlation coefficient between the two values was r = 0.96 (n = 23, p < 0.001), and the standard error of the estimated volume was 10 ml. (ABSTRACT TRUNCATED AT 250 WORDS)
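
    A one-line transcription of the stated volume equation (counts are dimensionless; with L and d in cm the volume comes out in cm³ = mL; the example numbers below are made up):

        def lv_volume_ml(total_count, max_count, depth_cm, pixel_cm):
            """First-pass LV volume from V = (CT/CM) * L * d**2."""
            return (total_count / max_count) * depth_cm * pixel_cm ** 2

        # Hypothetical end-diastolic counts, LV depth, and pixel size
        print(lv_volume_ml(total_count=250000.0, max_count=2600.0, depth_cm=8.0, pixel_cm=0.4))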

  13. Packing and deploying Soft Origami to and from cylindrical volumes with application to automotive airbags

    PubMed Central

    Nelson, Todd G.; Zimmerman, Trent K.; Fernelius, Janette D.; Magleby, Spencer P.; Howell, Larry L.

    2016-01-01

    Packing soft-sheet materials of approximately zero bending stiffness using Soft Origami (origami patterns applied to soft-sheet materials) into cylindrical volumes and their deployment via mechanisms or internal pressure (inflation) is of interest in fields including automobile airbags, deployable heart stents, inflatable space habitats, and dirigible and parachute packing. This paper explores two fold patterns, the ‘flasher’ and the ‘inverted-cone fold’, for packing soft-sheet materials into cylindrical volumes. Two initial packing methods and mechanisms are examined for each of the flasher and inverted-cone fold patterns. An application to driver’s side automobile airbags is performed, and deployment tests are completed to compare the influence of packing method and origami pattern on deployment performance. Following deployment tests, two additional packing methods for the inverted-cone fold pattern are explored and applied to automobile airbags. It is shown that modifying the packing method (using different methods to impose the same base pattern on the soft-sheet material) can lead to different deployment performance. In total, two origami patterns and six packing methods are examined, and the benefits of using Soft Origami patterns and packing methods are discussed. Soft Origami is presented as a viable method for efficiently packing soft-sheet materials into cylindrical volumes. PMID:27703707

  14. Packing and deploying Soft Origami to and from cylindrical volumes with application to automotive airbags

    NASA Astrophysics Data System (ADS)

    Bruton, Jared T.; Nelson, Todd G.; Zimmerman, Trent K.; Fernelius, Janette D.; Magleby, Spencer P.; Howell, Larry L.

    2016-09-01

    Packing soft-sheet materials of approximately zero bending stiffness using Soft Origami (origami patterns applied to soft-sheet materials) into cylindrical volumes and their deployment via mechanisms or internal pressure (inflation) is of interest in fields including automobile airbags, deployable heart stents, inflatable space habitats, and dirigible and parachute packing. This paper explores two fold patterns, the `flasher' and the `inverted-cone fold', for packing soft-sheet materials into cylindrical volumes. Two initial packing methods and mechanisms are examined for each of the flasher and inverted-cone fold patterns. An application to driver's side automobile airbags is performed, and deployment tests are completed to compare the influence of packing method and origami pattern on deployment performance. Following deployment tests, two additional packing methods for the inverted-cone fold pattern are explored and applied to automobile airbags. It is shown that modifying the packing method (using different methods to impose the same base pattern on the soft-sheet material) can lead to different deployment performance. In total, two origami patterns and six packing methods are examined, and the benefits of using Soft Origami patterns and packing methods are discussed. Soft Origami is presented as a viable method for efficiently packing soft-sheet materials into cylindrical volumes.

  15. ρ-VOF: An interface sharpening method for gas-liquid flow simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jiantao; Liu, Gang; Jiang, Xiong; Mou, Bin

    2018-05-01

    The simulation of compressible gas-liquid flow remains an open problem. Popular methods are either confined to the incompressible flow regime or inevitably smear the free interface. A new finite volume method for compressible two-phase flow simulation is contributed for this subject. First, the “heterogeneous equilibrium” assumption is introduced for the control volume; by employing free-interface reconstruction technology, the distribution of each component in the control volume is obtained. Next, the AUSM+-up (advection upstream splitting method) scheme is employed to calculate the convective and pressure fluxes, with the contact-discontinuity characteristic taken into account, followed by the update of the whole flow field. The new method features a density-based pattern and the interface reconstruction technology of VOF (volume of fluid), hence the name “ρ-VOF method”. Inheriting from the AUSM family and VOF, ρ-VOF behaves as an all-speed method, capable of simulating shocks in gas-liquid flow while preserving the sharpness of the free interface. A gas-liquid shock tube is simulated to evaluate the method; good agreement is obtained between the predicted results and those of the cited literature, and a sharper free interface is obtained. These results demonstrate the capability and validity of the ρ-VOF method for compressible gas-liquid flow simulation.

  16. Packing and deploying Soft Origami to and from cylindrical volumes with application to automotive airbags.

    PubMed

    Bruton, Jared T; Nelson, Todd G; Zimmerman, Trent K; Fernelius, Janette D; Magleby, Spencer P; Howell, Larry L

    2016-09-01

    Packing soft-sheet materials of approximately zero bending stiffness using Soft Origami (origami patterns applied to soft-sheet materials) into cylindrical volumes and their deployment via mechanisms or internal pressure (inflation) is of interest in fields including automobile airbags, deployable heart stents, inflatable space habitats, and dirigible and parachute packing. This paper explores two fold patterns, the 'flasher' and the 'inverted-cone fold', for packing soft-sheet materials into cylindrical volumes. Two initial packing methods and mechanisms are examined for each of the flasher and inverted-cone fold patterns. An application to driver's side automobile airbags is performed, and deployment tests are completed to compare the influence of packing method and origami pattern on deployment performance. Following deployment tests, two additional packing methods for the inverted-cone fold pattern are explored and applied to automobile airbags. It is shown that modifying the packing method (using different methods to impose the same base pattern on the soft-sheet material) can lead to different deployment performance. In total, two origami patterns and six packing methods are examined, and the benefits of using Soft Origami patterns and packing methods are discussed. Soft Origami is presented as a viable method for efficiently packing soft-sheet materials into cylindrical volumes.

  17. A coupling strategy for nonlocal and local diffusion models with mixed volume constraints and boundary conditions

    DOE PAGES

    D'Elia, Marta; Perego, Mauro; Bochev, Pavel B.; ...

    2015-12-21

    We develop and analyze an optimization-based method for the coupling of nonlocal and local diffusion problems with mixed volume constraints and boundary conditions. The approach formulates the coupling as a control problem in which the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the nonlocal and local domains, and the controls are virtual volume constraints and boundary conditions. When certain assumptions on the kernel functions hold, we prove that the resulting optimization problem is well-posed and discuss its implementation using Sandia’s agile software components toolkit. The latter provides the groundwork for the development of engineering analysis tools, while numerical results for nonlocal diffusion in three dimensions illustrate key properties of the optimization-based coupling method.

  18. An inexpensive and portable microvolumeter for rapid evaluation of biological samples.

    PubMed

    Douglass, John K; Wcislo, William T

    2010-08-01

    We describe an improved microvolumeter (MVM) for rapidly measuring volumes of small biological samples, including live zooplankton, embryos, and small animals and organs. Portability and low cost make this instrument suitable for widespread use, including at remote field sites. Beginning with Archimedes' principle, which states that immersing an arbitrarily shaped sample in a fluid-filled container displaces an equivalent volume, we identified procedures that maximize measurement accuracy and repeatability across a broad range of absolute volumes. Crucial steps include matching the overall configuration to the size of the sample, using reflected light to monitor fluid levels precisely, and accounting for evaporation during measurements. The resulting precision is at least 100 times higher than in previous displacement-based methods. Volumes are obtained much faster than by traditional histological or confocal methods and without shrinkage artifacts due to fixation or dehydration. Calibrations using volume standards confirmed accurate measurements of volumes as small as 0.06 microL. We validated the feasibility of evaluating soft-tissue samples by comparing volumes of freshly dissected ant brains measured with the MVM and by confocal reconstruction.

  19. Conservative and bounded volume-of-fluid advection on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ivey, Christopher B.; Moin, Parviz

    2017-12-01

    This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, which differs from contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in efforts to satisfy conservation. Instead of prescribing complicated flux-polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to calculate the extrusion during the next iteration. The choice of nominal flux polyhedron impacts the cost and accuracy of the scheme; however, it does not affect the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. The tests are conducted on the median duals of a quadrilateral and triangular primal mesh, in two dimensions, and on the median duals of a hexahedral, wedge and tetrahedral primal mesh, in three dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice of the nominal flux polyhedron, the NIFPA scheme exhibited accuracies ranging from zeroth to second order and calculation times that differed by orders of magnitude. For the nominal flux polyhedra that demonstrated second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to that of the traditional topologically complex second-order accurate VOF advection scheme.

  20. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, a logistic model based on standard dosimetric parameters (LM), and a logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n = 0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated with the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
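
    A hedged, synthetic-data sketch of the described pipeline (kernel density estimates of the differential dose-volume histograms, principal component analysis of the densities, then logistic regression on the component scores); every array, bandwidth, and label below is made up for illustration:

        import numpy as np
        from sklearn.neighbors import KernelDensity
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n_patients = 141
        dose_grid = np.linspace(0.0, 80.0, 200)[:, None]        # Gy

        densities, labels = [], []
        for _ in range(n_patients):
            # made-up rectum dose samples for one patient and a made-up toxicity label
            doses = rng.normal(loc=rng.uniform(30, 60), scale=10.0, size=500)[:, None]
            kde = KernelDensity(bandwidth=2.0).fit(doses)
            densities.append(np.exp(kde.score_samples(dose_grid)))   # estimated dose density
            labels.append(rng.binomial(1, 0.14))                     # ~14% event rate

        X = np.asarray(densities)
        scores = PCA(n_components=3).fit_transform(X)    # principal components of the densities
        model = LogisticRegression().fit(scores, labels) # component scores as covariates
        print(model.predict_proba(scores[:5, :]))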

  1. Individualized Nonadaptive and Online-Adaptive Intensity-Modulated Radiotherapy Treatment Strategies for Cervical Cancer Patients Based on Pretreatment Acquired Variable Bladder Filling Computed Tomography Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bondar, M.L., E-mail: m.bondar@erasmusmc.nl; Hoogeman, M.S.; Mens, J.W.

    2012-08-01

    Purpose: To design and evaluate individualized nonadaptive and online-adaptive strategies based on a motion model established at pretreatment for the highly deformable target volume in cervical cancer patients. Methods and Materials: For 14 patients, nine to ten variable bladder filling computed tomography (CT) scans were acquired at pretreatment and after 40 Gy. Individualized model-based internal target volumes (mbITVs), accounting for the motion of the cervix and uterus due to bladder volume changes, were generated by using a motion model constructed from two pretreatment CT scans (full and empty bladder). Two individualized strategies were designed: a nonadaptive strategy, using an mbITV accounting for the full range of bladder volume changes throughout the treatment; and an online-adaptive strategy, using mbITVs of bladder volume subranges to construct a library of plans. The latter adapts the treatment online by selecting the plan-of-the-day from the library based on the measured bladder volume. The individualized strategies were evaluated using the seven to eight CT scans not used for mbITV construction, and compared with a population-based approach. Geometric uniform margins around the planning cervix-uterus and the mbITVs were determined to ensure adequate coverage. For each strategy, the percentage of the cervix-uterus, bladder, and rectum volumes inside the planning target volume (PTV), and the clinical target volume (CTV)-to-PTV volume (volume difference between PTV and CTV) were calculated. Results: The margin for the population-based approach was 38 mm and for the individualized strategies was 7 to 10 mm. Compared with the population-based approach, the individualized nonadaptive strategy decreased the CTV-to-PTV volume by 48% ± 6% and the percentage of bladder and rectum inside the PTV by 5% to 45% and 26% to 74% (p < 0.001), respectively. Replacing the individualized nonadaptive strategy with an online-adaptive, two-plan library further decreased the percentage of bladder and rectum inside the PTV (0% to 10% and -1% to 9%; p < 0.004) and the CTV-to-PTV volume (4-96 ml). Conclusions: Compared with population-based margins, an individualized PTV results in better organ-at-risk sparing. Online-adaptive radiotherapy further improves organ-at-risk sparing.

  2. Dual-Energy Micro-CT Functional Imaging of Primary Lung Cancer in Mice Using Gold and Iodine Nanoparticle Contrast Agents: A Validation Study

    PubMed Central

    Ashton, Jeffrey R.; Clark, Darin P.; Moding, Everett J.; Ghaghada, Ketan; Kirsch, David G.; West, Jennifer L.; Badea, Cristian T.

    2014-01-01

    Purpose To provide additional functional information for tumor characterization, we investigated the use of dual-energy computed tomography for imaging murine lung tumors. Tumor blood volume and vascular permeability were quantified using gold and iodine nanoparticles. This approach was compared with a single contrast agent/single-energy CT method. Ex vivo validation studies were performed to demonstrate the accuracy of in vivo contrast agent quantification by CT. Methods Primary lung tumors were generated in LSL-KrasG12D; p53FL/FL mice. Gold nanoparticles were injected, followed by iodine nanoparticles two days later. The gold accumulated in tumors, while the iodine provided intravascular contrast. Three dual-energy CT scans were performed–two for the single contrast agent method and one for the dual contrast agent method. Gold and iodine concentrations in each scan were calculated using a dual-energy decomposition. For each method, the tumor fractional blood volume was calculated based on iodine concentration, and tumor vascular permeability was estimated based on accumulated gold concentration. For validation, the CT-derived measurements were compared with histology and inductively-coupled plasma optical emission spectroscopy measurements of gold concentrations in tissues. Results Dual-energy CT enabled in vivo separation of gold and iodine contrast agents and showed uptake of gold nanoparticles in the spleen, liver, and tumors. The tumor fractional blood volume measurements determined from the two imaging methods were in agreement, and a high correlation (R2 = 0.81) was found between measured fractional blood volume and histology-derived microvascular density. Vascular permeability measurements obtained from the two imaging methods agreed well with ex vivo measurements. Conclusions Dual-energy CT using two types of nanoparticles is equivalent to the single nanoparticle method, but allows for measurement of fractional blood volume and permeability with a single scan. As confirmed by ex vivo methods, CT-derived nanoparticle concentrations are accurate. This method could play an important role in lung tumor characterization by CT. PMID:24520351
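
    The dual-energy decomposition itself amounts to solving, voxel by voxel, a small linear system relating the two measured enhancements to the two material concentrations; a minimal sketch with an entirely hypothetical sensitivity matrix (real calibration values depend on the scanner, energies, and agents):

        import numpy as np

        # Hypothetical sensitivity matrix: rows = low/high energy bins,
        # columns = HU enhancement per mg/mL of gold and iodine (made-up numbers).
        SENSITIVITY = np.array([[5.0, 2.0],
                                [2.5, 4.5]])

        def decompose(low_energy_hu, high_energy_hu):
            """Per-voxel gold and iodine concentrations from the two energy images."""
            meas = np.stack([low_energy_hu.ravel(), high_energy_hu.ravel()])   # (2, n_voxels)
            conc = np.linalg.solve(SENSITIVITY, meas)                          # (2, n_voxels)
            return conc[0].reshape(low_energy_hu.shape), conc[1].reshape(low_energy_hu.shape)

        # Toy 2x2 "images": enhancement in HU at the two energies
        gold, iodine = decompose(np.array([[50.0, 20.0], [5.0, 0.0]]),
                                 np.array([[40.0, 25.0], [3.0, 0.0]]))
        print(gold, iodine)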

  3. Highly efficient volume hologram multiplexing in thick dye-doped jelly-like gelatin.

    PubMed

    Katarkevich, Vasili M; Rubinov, Anatoli N; Efendiev, Terlan Sh

    2014-08-01

    Dye-doped jelly-like gelatin is a thick-layer, self-developing photosensitive medium that allows single and multiplexed volume phase holograms to be successfully recorded using pulsed laser radiation. In this Letter, we present a method for the multiplexed recording of volume holograms in dye-doped jelly-like gelatin that provides a significant increase in their diffraction efficiency. The method is based on the recovery of the photobleached dye molecule concentration in the hologram recording zone of the gel, owing to molecular diffusion from unexposed gel areas. As an example, optical recording of a multiplexed hologram consisting of three superimposed Bragg gratings, with mean diffraction efficiency and angular selectivity of ∼75% and ∼21', respectively, is demonstrated using the proposed method.

  4. Generalized source Finite Volume Method for radiative transfer equation in participating media

    NASA Astrophysics Data System (ADS)

    Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min

    2017-03-01

    Temperature monitoring is very important in combustion systems. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) is proposed, based on the radiative transfer equation and the Finite Volume Method (FVM). This method can be used to calculate arbitrary directional radiative intensities and is shown to be accurate and efficient. To verify its performance, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of this method is close to that of the radial basis function interpolation method, but its accuracy and stability are higher. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter. Therefore, the GSFVM can be used in temperature reconstruction and in improving the accuracy of the FVM.

  5. Breast Volume Measurement by Recycling the Data Obtained From 2 Routine Modalities, Mammography and Magnetic Resonance Imaging.

    PubMed

    Itsukage, Shizu; Sowa, Yoshihiro; Goto, Mariko; Taguchi, Tetsuya; Numajiri, Toshiaki

    2017-01-01

    Objective: Preoperative prediction of breast volume is important in the planning of breast reconstructive surgery. In this study, we prospectively estimated the accuracy of measurement of breast volume using data from 2 routine modalities, mammography and magnetic resonance imaging, by comparison with volumes of mastectomy specimens. Methods: The subjects were 22 patients (24 breasts) who were scheduled to undergo total mastectomy for breast cancer. Preoperatively, magnetic resonance imaging volume measurement was performed using a medical imaging system and the mammographic volume was calculated using a previously proposed formula. Volumes of mastectomy specimens were measured intraoperatively using a method based on Archimedes' principle and Newton's third law. Results: The average breast volumes measured on magnetic resonance imaging and mammography were 318.47 ± 199.4 mL and 325.26 ± 217.36 mL, respectively. The correlation coefficients with mastectomy specimen volumes were 0.982 for magnetic resonance imaging and 0.911 for mammography. Conclusions: Breast volume measurement using magnetic resonance imaging was highly accurate but requires data analysis software. In contrast, breast volume measurement with mammography requires only a simple formula and is sufficiently accurate, although the accuracy was lower than that obtained with magnetic resonance imaging. These results indicate that mammography could be an alternative modality for breast volume measurement as a substitute for magnetic resonance imaging.

  6. Breast Volume Measurement by Recycling the Data Obtained From 2 Routine Modalities, Mammography and Magnetic Resonance Imaging

    PubMed Central

    Itsukage, Shizu; Goto, Mariko; Taguchi, Tetsuya; Numajiri, Toshiaki

    2017-01-01

    Objective: Preoperative prediction of breast volume is important in the planning of breast reconstructive surgery. In this study, we prospectively estimated the accuracy of measurement of breast volume using data from 2 routine modalities, mammography and magnetic resonance imaging, by comparison with volumes of mastectomy specimens. Methods: The subjects were 22 patients (24 breasts) who were scheduled to undergo total mastectomy for breast cancer. Preoperatively, magnetic resonance imaging volume measurement was performed using a medical imaging system and the mammographic volume was calculated using a previously proposed formula. Volumes of mastectomy specimens were measured intraoperatively using a method based on Archimedes’ principle and Newton's third law. Results: The average breast volumes measured on magnetic resonance imaging and mammography were 318.47 ± 199.4 mL and 325.26 ± 217.36 mL, respectively. The correlation coefficients with mastectomy specimen volumes were 0.982 for magnetic resonance imaging and 0.911 for mammography. Conclusions: Breast volume measurement using magnetic resonance imaging was highly accurate but requires data analysis software. In contrast, breast volume measurement with mammography requires only a simple formula and is sufficiently accurate, although the accuracy was lower than that obtained with magnetic resonance imaging. These results indicate that mammography could be an alternative modality for breast volume measurement as a substitute for magnetic resonance imaging. PMID:29308107

  7. An initial abstraction and constant loss model, and methods for estimating unit hydrographs, peak streamflows, and flood volumes for urban basins in Missouri

    USGS Publications Warehouse

    Huizinga, Richard J.

    2014-01-01

    The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.

  8. German Basic Course. Volume II, Lessons 16-25.

    ERIC Educational Resources Information Center

    Defense Language Inst., Washington, DC.

    This is the first volume of the Intermediate Phase (lessons 16-92) of the German Basic Course developed by the Defense Language Institute. The course, normally requiring 19 weeks of training, focuses on developing mastery of structural elements of German through the audiolingual method. Dialogues are based on life situations and progress towards…

  9. Determining blood and plasma volumes using bioelectrical response spectroscopy

    NASA Technical Reports Server (NTRS)

    Siconolfi, S. F.; Nusynowitz, M. L.; Suire, S. S.; Moore, A. D. Jr; Leig, J.

    1996-01-01

    We hypothesized that an electric field (inductance) produced by charged blood components passing through the many branches of arteries and veins could assess total blood volume (TBV) or plasma volume (PV). Individual (N = 29) electrical circuits (inductors, two resistors, and a capacitor) were determined from bioelectrical response spectroscopy (BERS) using a Hewlett Packard 4284A Precision LCR Meter. Inductance, capacitance, and resistance from the circuits of 19 subjects modeled TBV (sum of PV and computed red cell volume) and PV (based on 125I-albumin). Each model (N = 10, cross validation group) had good validity based on 1) mean differences (-2.3 to 1.5%) between the methods that were not significant and less than the propagated errors (+/- 5.2% for TBV and PV), 2) high correlations (r > 0.92) with low SEE (< 7.7%) between dilution and BERS assessments, and 3) Bland-Altman pairwise comparisons that indicated "clinical equivalency" between the methods. Given the limitation of this study (10 validity subjects), we concluded that BERS models accurately assessed TBV and PV. Further evaluations of the models' validities are needed before they are used in clinical or research settings.

  10. Impact of tumor size and tracer uptake heterogeneity in (18)F-FDG PET and CT non-small cell lung cancer tumor delineation.

    PubMed

    Hatt, Mathieu; Cheze-le Rest, Catherine; van Baardwijk, Angela; Lambin, Philippe; Pradier, Olivier; Visvikis, Dimitris

    2011-11-01

    The objectives of this study were to investigate the relationship between CT- and (18)F-FDG PET-based tumor volumes in non-small cell lung cancer (NSCLC) and the impact of tumor size and uptake heterogeneity on various approaches to delineating uptake on PET images. Twenty-five NSCLC patients with (18)F-FDG PET/CT were considered. Seventeen underwent surgical resection of their tumor, and the maximum diameter was measured. Two observers manually delineated the tumors on the CT images and the tumor uptake on the corresponding PET images, using a fixed threshold at 50% of the maximum (T(50)), an adaptive threshold methodology, and the fuzzy locally adaptive Bayesian (FLAB) algorithm. Maximum diameters of the delineated volumes were compared with the histopathology reference when available. The volumes of the tumors were compared, and correlations between the anatomic volume and PET uptake heterogeneity and the differences between delineations were investigated. All maximum diameters measured on PET and CT images significantly correlated with the histopathology reference (r > 0.89, P < 0.0001). Significant differences were observed among the approaches: CT delineation resulted in large overestimation (+32% ± 37%), whereas all delineations on PET images resulted in underestimation (from -15% ± 17% for T(50) to -4% ± 8% for FLAB) except manual delineation (+8% ± 17%). Overall, CT volumes were significantly larger than PET volumes (55 ± 74 cm(3) for CT vs. from 18 ± 25 to 47 ± 76 cm(3) for PET). A significant correlation was found between anatomic tumor size and heterogeneity (larger lesions were more heterogeneous). Finally, the more heterogeneous the tumor uptake, the larger was the underestimation of PET volumes by threshold-based techniques. Volumes based on CT images were larger than those based on PET images. Tumor size and tracer uptake heterogeneity have an impact on threshold-based methods, which should not be used for the delineation of cases of large heterogeneous NSCLC, as these methods tend to largely underestimate the spatial extent of the functional tumor in such cases. For an accurate delineation of PET volumes in NSCLC, advanced image segmentation algorithms able to deal with tracer uptake heterogeneity should be preferred.
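
    A minimal sketch of the fixed-threshold delineation compared here (segmentation at 50% of the maximum uptake); the uptake array and voxel volume are placeholders, and this does not represent the adaptive-threshold or FLAB methods:

        import numpy as np

        def t50_mask_and_volume(pet_suv, voxel_volume_ml):
            """Binary tumor mask and volume from a fixed 50%-of-maximum threshold."""
            mask = pet_suv >= 0.5 * pet_suv.max()
            return mask, float(mask.sum()) * voxel_volume_ml

        # Placeholder uptake image and a 4 x 4 x 4 mm voxel (0.064 mL)
        pet_suv = np.random.default_rng(0).gamma(2.0, 2.0, size=(32, 32, 16))
        mask, vol_ml = t50_mask_and_volume(pet_suv, voxel_volume_ml=0.064)
        print(vol_ml)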

  11. Evaluation of the 95% limits of agreement of the volumes of 5-year clinically stable solid nodules for the development of a follow-up system for indeterminate solid nodules in CT lung cancer screening.

    PubMed

    Kakinuma, Ryutaro; Muramatsu, Yukio; Yamamichi, Junta; Gomi, Shiho; Oubel, Estanislao; Moriyama, Noriyuki

    2018-01-01

    This study sought to evaluate the 95% limits of agreement of the volumes of 5-year clinically stable solid nodules for the development of a follow-up system for indeterminate solid nodules. The volumes of 226 solid nodules that had been clinically stable for 5 years were measured in 186 patients (53 female never-smokers, 36 male never-smokers, 51 males with <30 pack-years, and 46 males with ≥30 pack-years) using a three-dimensional semiautomated method. Volume changes were evaluated using three methods: percent change, proportional change and growth rate. The 95% limits of agreement were evaluated using the Bland-Altman method. The 95% limits of agreement were as follows: range of percent change, from ±34.5% to ±37.8%; range of proportional change, from ±34.1% to ±36.8%; and range of growth rate, from ±39.2% to ±47.4%. Percent change-based, proportional change-based, and growth rate-based diagnoses of an increase or decrease in ten solid nodules were made at a mean of 302±402, 367±455, and 329±496 days, respectively, compared with a clinical diagnosis made at 809±616 days (P<0.05). The 95% limits of agreement for volume change in 5-year stable solid nodules may enable the detection of an increase or decrease in a solid nodule at an earlier stage than that enabled by a clinical diagnosis, possibly contributing to the development of a follow-up system for reducing the number of additional computed tomography (CT) scans performed during the follow-up period.
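
    A hedged sketch of a Bland-Altman-style calculation of the 95% limits of agreement for percent volume change between paired measurements; defining percent change relative to the mean of the pair is one common convention and an assumption here:

        import numpy as np

        def percent_change_loa(vol_a, vol_b):
            """Bias and 95% limits of agreement for percent change between paired volumes."""
            vol_a, vol_b = np.asarray(vol_a, float), np.asarray(vol_b, float)
            pct = 200.0 * (vol_b - vol_a) / (vol_a + vol_b)   # percent change vs. the pair mean
            bias = pct.mean()
            half_width = 1.96 * pct.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)

        # Toy paired measurements (mm^3) of the same nodules at two time points
        v1 = np.array([120.0, 340.0, 80.0, 510.0, 95.0])
        v2 = np.array([131.0, 322.0, 86.0, 498.0, 90.0])
        print(percent_change_loa(v1, v2))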

  12. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy

    2014-08-15

    Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches in replicating the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds promise of substantial computational speed-up.

  13. Calibration of a semi-automated segmenting method for quantification of adipose tissue compartments from magnetic resonance images of mice.

    PubMed

    Garteiser, Philippe; Doblas, Sabrina; Towner, Rheal A; Griffin, Timothy M

    2013-11-01

    To use an automated water-suppressed magnetic resonance imaging (MRI) method to objectively assess adipose tissue (AT) volumes in whole body and specific regional body components (subcutaneous, thoracic and peritoneal) of obese and lean mice. Water-suppressed MR images were obtained on a 7T, horizontal-bore MRI system in whole bodies (excluding head) of 26 week old male C57BL6J mice fed a control (10% kcal fat) or high-fat diet (60% kcal fat) for 20 weeks. Manual (outlined regions) versus automated (Gaussian fitting applied to threshold-weighted images) segmentation procedures were compared for whole body AT and regional AT volumes (i.e., subcutaneous, thoracic, and peritoneal). The AT automated segmentation method was compared to dual-energy X-ray (DXA) analysis. The average AT volumes for whole body and individual compartments correlated well between the manual outlining and the automated methods (R2>0.77, p<0.05). Subcutaneous, peritoneal, and total body AT volumes were increased 2-3 fold and thoracic AT volume increased more than 5-fold in diet-induced obese mice versus controls (p<0.05). MRI and DXA-based method comparisons were highly correlative (R2=0.94, p<0.0001). Automated AT segmentation of water-suppressed MRI data using a global Gaussian filtering algorithm resulted in a fairly accurate assessment of total and regional AT volumes in a pre-clinical mouse model of obesity. © 2013 Elsevier Inc. All rights reserved.

  14. Optimized volume models of earthquake-triggered landslides

    PubMed Central

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-01-01

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the deposit volume of each of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide “volume-area” power-law relationship and the three optimized models we proposed. Two data-fitting methods, log-transformed linear least squares and nonlinear least squares on the original data, were applied to the four models. The results show that nonlinear least squares on the original data, combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect, performs best. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas. The total volume estimated from the published relationship between earthquake magnitude and the entire landslide volume of an individual earthquake is much smaller than that obtained in this study, which points to the need to update that power-law relationship. PMID:27404212

  15. Optimized volume models of earthquake-triggered landslides.

    PubMed

    Xu, Chong; Xu, Xiwei; Shen, Lingling; Yao, Qi; Tan, Xibin; Kang, Wenjun; Ma, Siyuan; Wu, Xiyan; Cai, Juntao; Gao, Mingxing; Li, Kang

    2016-07-12

    In this study, we proposed three optimized models for calculating the total volume of landslides triggered by the 2008 Wenchuan, China Mw 7.9 earthquake. First, we calculated the deposit volume of each of 1,415 landslides triggered by the quake based on pre- and post-quake DEMs at 20 m resolution. The samples were used to fit the conventional landslide "volume-area" power-law relationship and the three optimized models we proposed. Two data-fitting methods, log-transformed linear least squares and nonlinear least squares on the original data, were applied to the four models. The results show that nonlinear least squares on the original data, combined with an optimized model considering length, width, height, lithology, slope, peak ground acceleration, and slope aspect, performs best. This model was subsequently applied to the database of landslides triggered by the quake, except for the two largest ones with known volumes. It indicates that the total volume of the 196,007 landslides is about 1.2 × 10¹⁰ m³ in deposit materials and 1 × 10¹⁰ m³ in source areas. The total volume estimated from the published relationship between earthquake magnitude and the entire landslide volume of an individual earthquake is much smaller than that obtained in this study, which points to the need to update that power-law relationship.
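
    The two data-fitting strategies compared (log-transformed linear least squares versus nonlinear least squares on the original data) can be illustrated on the baseline volume-area power law V = c·A^γ; the synthetic data below are placeholders, not the Wenchuan inventory:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        area = 10 ** rng.uniform(2, 5, size=300)                     # landslide areas, m^2
        volume = 0.05 * area ** 1.3 * rng.lognormal(0.0, 0.4, 300)   # synthetic V = c*A^gamma

        # (1) log-transformed linear least squares: log V = log c + gamma * log A
        gamma_log, logc = np.polyfit(np.log(area), np.log(volume), 1)

        # (2) nonlinear least squares on the original data, started from the log fit
        (c_nl, gamma_nl), _ = curve_fit(lambda a, c, g: c * a ** g, area, volume,
                                        p0=[np.exp(logc), gamma_log])

        print(np.exp(logc), gamma_log, c_nl, gamma_nl)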

  16. A simple formula for predicting claw volume of cattle.

    PubMed

    Scott, T D; Naylor, J M; Greenough, P R

    1999-11-01

    The object of this study was to develop a simple method for accurately calculating the volume of bovine claws under field conditions. The digits of 30 slaughterhouse beef cattle were examined and the following four linear measurements were taken from each pair of claws: (1) the length of the dorsal surface of the claw (Toe); (2) the length of the coronary band (CorBand); (3) the length of the bearing surface (Base); and (4) the height of the claw at the abaxial groove (AbaxGr). Measurements of claw volume using a simple hydrometer were highly repeatable (r² = 0.999), and claw volume could be calculated from the linear measurements using the formula: Claw Volume (cm³) = (17.192 × Base) + (7.467 × AbaxGr) + (45.270 × CorBand) − 798.5. This formula was found to be accurate (r² = 0.88) when compared to volume data derived from a hydrometer displacement procedure. The front claws occupied 54% of the total volume compared to 46% for the hind claws. Copyright 1999 Harcourt Publishers Ltd.
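
    A direct transcription of the published regression formula (the example measurements, in centimetres, are made up):

        def claw_volume_cm3(base_cm, abax_gr_cm, cor_band_cm):
            """Claw volume from the reported linear regression (all lengths in cm)."""
            return 17.192 * base_cm + 7.467 * abax_gr_cm + 45.270 * cor_band_cm - 798.5

        print(claw_volume_cm3(base_cm=14.0, abax_gr_cm=4.5, cor_band_cm=16.0))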

  17. Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.

    PubMed

    Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna

    2016-06-27

    This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.

  18. Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants

    PubMed Central

    Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna

    2016-01-01

    This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated. PMID:27355949

  19. Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.

    PubMed

    Manjón, José V; Tohka, Jussi; Robles, Montserrat

    2010-11-01

    This paper addresses the problem of accurate voxel-level estimation of tissue proportions in the human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.

  20. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine-learning-based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold and connectivity based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally. We also take a look into these structures (liver vessels). For an experimental leave-some-out study consisting of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
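
    A hedged sketch of the general idea, training a random decision forest on per-voxel feature vectors and scoring the prediction with a DICE ratio, is given below; the synthetic features, labels and forest settings are placeholders rather than the authors' pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Hypothetical per-voxel features (e.g., intensity, smoothed intensity, gradient magnitude)
    X_train = rng.normal(size=(5000, 3))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # toy tissue labels
    X_test = rng.normal(size=(2000, 3))
    y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    pred = clf.predict(X_test)

    def dice(a, b):
        """DICE ratio between two binary label maps."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    print("Dice:", dice(pred, y_test))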

  1. Feasibility study on image guided patient positioning for stereotactic body radiation therapy of liver malignancies guided by liver motion.

    PubMed

    Heinz, Christian; Gerum, Sabine; Freislederer, Philipp; Ganswindt, Ute; Roeder, Falk; Corradini, Stefanie; Belka, Claus; Niyazi, Maximilian

    2016-06-27

    Fiducial markers are the superior method to compensate for interfractional motion in liver SBRT. However, this method is invasive, which limits its range of application. In this retrospective study, the compensation method for interfractional motion using fiducial markers (gold standard) was compared to a new non-invasive approach that relies on the organ motion of the liver and the relative tumor position within this volume. We analyzed six patients (3 male, 3 female) treated with SBRT in 2014. After fiducial marker implantation, all patients received a treatment CT (free breathing, without abdominal compression) and a 4D-CT (consisting of 10 respiratory phases). For all patients the gross tumor volumes (GTVs), internal target volume (ITV), planning target volume (PTV), internal marker target volumes (IMTVs) and the internal liver target volume (ILTV) were delineated based on the CT and 4D-CT images. CBCT imaging was used for the standard treatment setup based on the fiducial markers. According to the patient coordinates, the three translational compensation values (tx, ty, tz) for the interfractional motion were calculated by matching the blurred fiducial markers with the corresponding IMTV structures. Four observers were asked to recalculate the translational compensation values for each CBCT (n = 31) based on the ILTV structures. The differences in the translational compensation values between the IMTV and ILTV approaches were analyzed. The magnitude of the mean absolute 3D registration error with regard to the gold standard, over all patients and observers, was 0.50 cm ± 0.28 cm. Individual registration errors up to 1.3 cm were observed. There was no significant overall linear correlation between the respiratory motion and the registration error of the ILTV approach. Two different methods to calculate the translational compensation values for interfractional motion in stereotactic liver therapy were evaluated. The registration accuracy of the ILTV approach is mainly limited by the non-rigid behavior of the liver and the individual registration experience of the observer. The ILTV approach lacks the accuracy that would be desired for stereotactic radiotherapy of the liver.

  2. Woody debris volume depletion through decay: implications for biomass and carbon accounting

    USGS Publications Warehouse

    Fraver, Shawn; Milo, Amy M.; Bradford, John B.; D'Amato, Anthony W.; Kenefic, Laura; Palik, Brian J.; Woodall, Christopher W.; Brissette, John

    2013-01-01

    Woody debris decay rates have recently received much attention because of the need to quantify temporal changes in forest carbon stocks. Published decay rates, available for many species, are commonly used to characterize deadwood biomass and carbon depletion. However, decay rates are often derived from reductions in wood density through time, which when used to model biomass and carbon depletion are known to underestimate rate loss because they fail to account for volume reduction (changes in log shape) as decay progresses. We present a method for estimating changes in log volume through time and illustrate the method using a chronosequence approach. The method is based on the observation, confirmed herein, that decaying logs have a collapse ratio (cross-sectional height/width) that can serve as a surrogate for the volume remaining. Combining the resulting volume loss with concurrent changes in wood density from the same logs then allowed us to quantify biomass and carbon depletion for three study species. Results show that volume, density, and biomass follow distinct depletion curves during decomposition. Volume showed an initial lag period (log dimensions remained unchanged), even while wood density was being reduced. However, once volume depletion began, biomass loss (the product of density and volume depletion) occurred much more rapidly than density alone. At the temporal limit of our data, the proportion of the biomass remaining was roughly half that of the density remaining. Accounting for log volume depletion, as demonstrated in this study, provides a comprehensive characterization of deadwood decomposition, thereby improving biomass-loss and carbon-accounting models.

  3. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images

    PubMed Central

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He

    2018-01-01

    Purpose We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement was tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Results Overall tumor size was 4.80 ± 6.8 mL (mean ±standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey’s formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey’s formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
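
    The one-component formulas named above have standard closed forms (with a, b and c the three orthogonal diameters); a sketch follows, using hypothetical diameters. The paper's two-component ice cream cone formula is not reproduced here, since its exact form is not given in the abstract.

    import math

    def cuboidal(a, b, c):      # a * b * c
        return a * b * c

    def ellipsoidal(a, b, c):   # (pi/6) * a * b * c
        return math.pi / 6.0 * a * b * c

    def spherical(a, b, c):     # sphere on the mean diameter
        d = (a + b + c) / 3.0
        return math.pi / 6.0 * d ** 3

    # Hypothetical tumor diameters in cm
    a, b, c = 2.4, 1.9, 1.7
    print(cuboidal(a, b, c), ellipsoidal(a, b, c), spherical(a, b, c))  # volumes in cm^3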

  4. A method of evaluating crown fuels in forest stands.

    Treesearch

    Rodney W. Sando; Charles H. Wick

    1972-01-01

    A method of describing the crown fuels in a forest fuel complex based on crown weight and crown volume was developed. A computer program is an integral part of the method. Crown weight data are presented in graphical form and are separated into hardwood and coniferous fuels. The fuel complex is described using total crown weight per acre, mean height to the base of...

  5. A combined learning algorithm for prostate segmentation on 3D CT images.

    PubMed

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2017-11-01

    Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful to improve the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information as marked by the user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to the prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two, clinically experienced radiologists and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99%, compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.

  6. SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, R; Heron, D; Huq, M

    2015-06-15

    Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can be used to generate estimated DVHs and planning constraints for organs at risk (OARs), based upon a model generated from previous plans. These estimates are based upon the planning CT scan. However, for distensible OARs like the bladder and rectum, daily variations in volume may make the dose estimates invalid. The purpose of this study is to determine whether knowledge-based DVH dose estimates may be valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimates and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes were then compared at 75, 60, and 40 Gy for the bladder and rectum between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points. Volumes of the rectum and bladder varied widely between planning CT and the CBCTs, ranging from 0.46 to 2.42 for the bladder and 0.71 to 2.18 for the rectum, causing relative dose volumes to vary between planning CT and CBCT, but absolute dose volumes were more consistent. The overall ratio of CBCT/plan dose volumes was 1.02 ± 0.27 for the rectum and 0.98 ± 0.20 for the bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs are still valid, in absolute volume terms, between treatment planning scans and CBCTs taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.

  7. An index-flood model for deficit volumes assessment

    NASA Astrophysics Data System (ADS)

    Strnad, Filip; Moravec, Vojtěch; Hanel, Martin

    2017-04-01

    The estimation of return periods of hydrological extreme events and the evaluation of risks related to such events are objectives of many water resources studies. The aim of this study is to develop a statistical model for drought indices using extreme value theory and the index-flood method and to use this model for the estimation of return levels of maximum deficit volumes of total runoff and baseflow. Deficit volumes for one hundred and thirty-three catchments in the Czech Republic for the period 1901-2015, simulated by the hydrological model Bilan, are considered. The characteristics of simulated deficit periods (severity, intensity and length) correspond well to those based on observed data. It is assumed that annual maximum deficit volumes in each catchment follow the generalized extreme value (GEV) distribution. The catchments are divided into three homogeneous regions considering long term mean runoff, potential evapotranspiration and base flow. In line with the index-flood method, it is further assumed that the deficit volumes within each homogeneous region are identically distributed after scaling with a site-specific factor. The goodness-of-fit of the statistical model is assessed by Anderson-Darling statistics. For the estimation of critical values of the test, several resampling strategies allowing for appropriate handling of years without drought are presented. Finally, the significance of the trends in the deficit volumes is assessed by a likelihood ratio test.
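
    A hedged sketch of fitting a GEV distribution to annual maximum deficit volumes and reading off a return level with scipy is shown below; the data are synthetic and the regional index-flood scaling step is not reproduced.

    import numpy as np
    from scipy.stats import genextreme

    # Synthetic annual maximum deficit volumes (arbitrary units), one value per year
    annual_max_deficit = genextreme.rvs(c=-0.1, loc=10.0, scale=3.0, size=115, random_state=1)

    shape, loc, scale = genextreme.fit(annual_max_deficit)   # fit the GEV parameters
    T = 50                                                   # return period in years
    return_level = genextreme.isf(1.0 / T, shape, loc, scale)
    print(f"Estimated {T}-year maximum deficit volume: {return_level:.2f}")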

  8. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans.

    PubMed

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.

  9. Examining Brain Morphometry Associated with Self-Esteem in Young Adults Using Multilevel-ROI-Features-Based Classification Method

    PubMed Central

    Peng, Bo; Lu, Jieru; Saxena, Aditya; Zhou, Zhiyong; Zhang, Tao; Wang, Suhong; Dai, Yakang

    2017-01-01

    Purpose: This study examines self-esteem-related brain morphometry on brain magnetic resonance (MR) images using a multilevel-features-based classification method. Method: The multilevel region of interest (ROI) features consist of two types of features: (i) ROI features, which include gray matter volume, white matter volume, cerebrospinal fluid volume, cortical thickness, and cortical surface area, and (ii) similarity features, which are based on similarity calculation of cortical thickness between ROIs. For each feature type, a hybrid feature selection method, comprising filter-based and wrapper-based algorithms, is used to select the most discriminating features. ROI features and similarity features are integrated by using multi-kernel support vector machines (SVMs) with an appropriate weighting factor. Results: The classification performance is improved by using multilevel ROI features, with an accuracy of 96.66%, a specificity of 96.62%, and a sensitivity of 95.67%. The most discriminating ROI features that are related to self-esteem spread over the occipital lobe, frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region, mainly involving white matter and cortical thickness. The most discriminating similarity features are distributed in both the right and left hemispheres, including the frontal lobe, occipital lobe, limbic lobe, parietal lobe, and central region, which convey information about structural connections between different brain regions. Conclusion: By using ROI features and similarity features to examine self-esteem-related brain morphometry, this paper provides pilot evidence that self-esteem is linked to specific ROIs and structural connections between different brain regions. PMID:28588470
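
    The multi-kernel integration step can be sketched under the assumption that it amounts to a weighted sum of one kernel per feature type passed to an SVM with a precomputed kernel; the features, labels and weighting factor below are illustrative only.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(2)
    X_roi = rng.normal(size=(60, 10))   # hypothetical ROI features (volumes, thickness, ...)
    X_sim = rng.normal(size=(60, 15))   # hypothetical similarity features
    y = rng.integers(0, 2, size=60)     # hypothetical group labels

    w = 0.6  # weighting factor between the two feature types
    K = w * rbf_kernel(X_roi) + (1.0 - w) * rbf_kernel(X_sim)   # combined kernel matrix

    clf = SVC(kernel="precomputed").fit(K, y)
    print("Training accuracy:", clf.score(K, y))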

  10. Examining Brain Morphometry Associated with Self-Esteem in Young Adults Using Multilevel-ROI-Features-Based Classification Method.

    PubMed

    Peng, Bo; Lu, Jieru; Saxena, Aditya; Zhou, Zhiyong; Zhang, Tao; Wang, Suhong; Dai, Yakang

    2017-01-01

    Purpose: This study examines self-esteem-related brain morphometry on brain magnetic resonance (MR) images using a multilevel-features-based classification method. Method: The multilevel region of interest (ROI) features consist of two types of features: (i) ROI features, which include gray matter volume, white matter volume, cerebrospinal fluid volume, cortical thickness, and cortical surface area, and (ii) similarity features, which are based on similarity calculation of cortical thickness between ROIs. For each feature type, a hybrid feature selection method, comprising filter-based and wrapper-based algorithms, is used to select the most discriminating features. ROI features and similarity features are integrated by using multi-kernel support vector machines (SVMs) with an appropriate weighting factor. Results: The classification performance is improved by using multilevel ROI features, with an accuracy of 96.66%, a specificity of 96.62%, and a sensitivity of 95.67%. The most discriminating ROI features that are related to self-esteem spread over the occipital lobe, frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region, mainly involving white matter and cortical thickness. The most discriminating similarity features are distributed in both the right and left hemispheres, including the frontal lobe, occipital lobe, limbic lobe, parietal lobe, and central region, which convey information about structural connections between different brain regions. Conclusion: By using ROI features and similarity features to examine self-esteem-related brain morphometry, this paper provides pilot evidence that self-esteem is linked to specific ROIs and structural connections between different brain regions.

  11. Acoustic measurement of bubble size and position in a piezo driven inkjet printhead

    NASA Astrophysics Data System (ADS)

    van der Bos, Arjan; Jeurissen, Roger; de Jong, Jos; Stevens, Richard; Versluis, Michel; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; Lohse, Detlef

    2008-11-01

    A bubble can be entrained in the ink channel of a piezo-driven inkjet printhead, where it grows by rectified diffusion. If large enough, the bubble counteracts the pressure buildup at the nozzle, resulting in nozzle failure. Here, an acoustic sizing method for the volume and position of the bubble is presented. The bubble response is detected by the piezo actuator itself, operating in a sensor mode. The method used to determine the volume and position of the bubble is based on a linear model in which the interaction between the bubble and the channel is included. This model predicts the acoustic signal for a given position and volume of the bubble. The inverse problem is to infer the position and volume of the bubble from the measured acoustic signal. By solving it, we can thus acoustically measure the size and position of the bubble. The validity of the presented method is supported by time-resolved optical observations of the dynamics of the bubble within an optically accessible ink-jet channel.

  12. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimate of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9 ± 10.2% using the MHES method to 9.9 ± 7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
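
    The optimal path finding step is based on Dijkstra's algorithm; a generic, hedged sketch on a small weighted graph is given below (the graph and cost values are illustrative, not the authors' vessel-specific cost function).

    import heapq

    def dijkstra(graph, start, goal):
        """Shortest path in a dict-of-dicts weighted graph {node: {neighbour: cost}}."""
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1], dist[goal]

    # Toy graph: nodes could stand for candidate vessel centreline points
    graph = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.5, "d": 5.0}, "c": {"d": 1.0}, "d": {}}
    print(dijkstra(graph, "a", "d"))  # (['a', 'b', 'c', 'd'], 3.5)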

  13. The Analysis for Energy Consumption of Marine Air Conditioning System Based on VAV and VWV

    NASA Astrophysics Data System (ADS)

    Xu, Sai Feng; Yang, Xing Lin; Le, Zou Ying

    2018-06-01

    For ocean-going vessels sailing in different sea areas, changes in external environmental factors cause frequent changes in load, yet a traditional ship air-conditioning system is usually designed with a fixed cooling capacity, a design approach that wastes resources. This paper proposes a new marine air-conditioning system that uses a seawater-source heat pump combined with variable air volume (VAV) and variable water volume (VWV) technology. The dynamic loads of the multifunctional cabins of a ship navigating a typical Eurasian route were calculated in Simulink; the model can predict load changes over the full voyage. Based on the simulation model, the effects of variable air volume and variable water volume on the energy consumption of the air-conditioning system were analyzed. The results show that when VAV is coupled with VWV, the energy saving rate is 23.2%. Therefore, applying variable air volume and variable water volume technology to marine air-conditioning systems can achieve economic and energy-saving advantages.

  14. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.

  15. Method of making improved gas storage carbon with enhanced thermal conductivity

    DOEpatents

    Burchell, Timothy D [Oak Ridge, TN; Rogers, Michael R [Knoxville, TN

    2002-11-05

    A method of making an adsorbent carbon fiber based monolith having improved methane gas storage capabilities is disclosed. Additionally, the monolithic nature of the storage carbon allows it to exhibit greater thermal conductivity than conventional granular activated carbon or powdered activated carbon storage beds. The storage of methane gas is achieved through the process of physical adsorption in the micropores that are developed in the structure of the adsorbent monolith. The disclosed monolith is capable of storing greater than 150 V/V of methane [i.e., >150 STP (101.325 KPa, 298K) volumes of methane per unit volume of storage vessel internal volume] at a pressure of 3.5 MPa (500 psi).

  16. Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.

    PubMed

    Davidson, Shaun; Pretty, Chris; Pironet, Antoine; Desaive, Thomas; Janssen, Nathalie; Lambermont, Bernard; Morimont, Philippe; Chase, J Geoffrey

    2017-01-01

    This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd), an important, but difficult to measure physiological parameter. Vd represents a subject and condition dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume) and its end-systolic equivalent (stroke volume vs end-systolic volume), developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0) to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 average for pre- and post- endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd) for each subject (no more than 7.8% variation). Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to allow for estimation of Vd in a clinical environment.
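
    A hedged sketch of the extrapolation step, fitting a line to stroke volume versus end-systolic volume and solving for the volume at which stroke volume reaches zero, is shown below with invented paired measurements.

    import numpy as np

    # Hypothetical paired measurements collected across a recruitment manoeuvre (ml)
    end_systolic_volume = np.array([55.0, 60.0, 66.0, 71.0, 78.0])
    stroke_volume       = np.array([28.0, 33.0, 39.0, 44.0, 51.0])

    slope, intercept = np.polyfit(end_systolic_volume, stroke_volume, 1)  # SV = slope*V + intercept
    Vd = -intercept / slope   # ventricular volume at which SV extrapolates to zero
    print(f"Estimated dead space volume: {Vd:.1f} ml")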

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multifractionation and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed by a graphical method. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that the optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell-survival models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables the authors to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose–volume histogram) by the graphical method considering the effects on the tumor and OARs around the tumor. This method may provide a new guideline to optimize the fractionation regimen for physics-guided fractionation.
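
    The optimization rests on linear-quadratic (LQ) cell-survival modelling. As a simple, hedged illustration using the standard LQ model (not the authors' modified model), the biologically effective dose BED = n·d·(1 + d/(α/β)) can be compared across candidate (n, d) pairs; the regimens and α/β values below are assumed for illustration only.

    def bed(n, d, alpha_beta):
        """Biologically effective dose for n fractions of d Gy, standard LQ model."""
        return n * d * (1.0 + d / alpha_beta)

    # Hypothetical comparison of conventional and hypofractionated prostate regimens
    for n, d in [(39, 2.0), (20, 3.0), (8, 6.3)]:
        print(f"n={n:2d}, d={d:.1f} Gy: tumour BED (a/b=1.5) = {bed(n, d, 1.5):6.1f} Gy,"
              f" OAR BED (a/b=3.0) = {bed(n, d, 3.0):6.1f} Gy")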

  18. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). The speedup of the GPU solution with respect to the solution on central processing units (CPUs) is compared for different meshes and different methods of distributing input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  19. A computational method for sharp interface advection.

    PubMed

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.

  20. Determination of plasma volume in anaesthetized piglets using the carbon monoxide (CO) method.

    PubMed

    Heltne, J K; Farstad, M; Lund, T; Koller, M E; Matre, K; Rynning, S E; Husby, P

    2002-07-01

    Based on measurements of the circulating red blood cell volume (V(RBC)) in seven anaesthetized piglets using carbon monoxide (CO) as a label, plasma volume (PV) was calculated for each animal. The increase in carboxyhaemoglobin (COHb) concentration following administration of a known amount of CO into a closed circuit re-breathing system was determined by diode-array spectrophotometry. Simultaneously measured haematocrit (HCT) and haemoglobin (Hb) values were used for PV calculation. The PV values were compared with simultaneously measured PVs determined using the Evans blue technique. Mean values (SD) for PV were 1708.6 (287.3) ml and 1738.7 (412.4) ml with the CO method and the Evans blue technique, respectively. Comparison of PVs determined with the two techniques demonstrated good correlation (r = 0.995). The mean difference between PV measurements was -29.9 ml and the limits of agreement (mean difference ± 2SD) were -289.1 ml and 229.3 ml. In conclusion, the CO method can be applied easily under general anaesthesia and controlled ventilation with a simple administration system. The agreement between the compared methods was satisfactory. Plasma volume determination with the CO method is safe and accurate, with no signs of major side effects.
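
    One common formulation of the CO-dilution calculation is sketched below (hedged; the paper's exact equations are not given in the abstract): the administered CO volume and the rise in COHb give the circulating haemoglobin mass via an approximate Hüfner constant, from which blood and plasma volumes follow using Hb and HCT. All numbers are hypothetical.

    # Hypothetical measurements for a single animal
    v_co_ml     = 15.0     # CO administered (ml STPD)
    delta_cohb  = 0.045    # rise in COHb as a fraction of total Hb
    hb_g_per_ml = 0.10     # blood haemoglobin concentration (g/ml)
    hct         = 0.30     # haematocrit (fraction)
    hufner      = 1.39     # ml CO bound per g Hb (approximate Huefner constant)

    hb_mass_g     = v_co_ml / (hufner * delta_cohb)   # circulating Hb mass (g)
    blood_volume  = hb_mass_g / hb_g_per_ml           # total blood volume (ml)
    plasma_volume = blood_volume * (1.0 - hct)        # plasma volume (ml)
    print(f"Blood volume ~ {blood_volume:.0f} ml, plasma volume ~ {plasma_volume:.0f} ml")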

  1. Automatic Measurement of Fetal Brain Development from Magnetic Resonance Imaging: New Reference Data.

    PubMed

    Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna

    2018-01-01

    Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
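
    A minimal 2D seeded region growing sketch (intensity tolerance around the seed) is given below to illustrate the class of algorithm named above; it is not the authors' implementation.

    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol):
        """Grow a region from `seed`, accepting 4-connected pixels whose intensity
        differs from the seed intensity by at most `tol`."""
        img = np.asarray(image, dtype=float)
        mask = np.zeros(img.shape, dtype=bool)
        seed_val = img[seed]
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and not mask[rr, cc] and abs(img[rr, cc] - seed_val) <= tol):
                    mask[rr, cc] = True
                    queue.append((rr, cc))
        return mask

    # Toy image: a bright blob on a dark background
    img = np.zeros((8, 8)); img[2:6, 2:6] = 100.0
    print(region_grow(img, (3, 3), tol=10.0).sum())  # 16 pixels grown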

  2. An efficient solid modeling system based on a hand-held 3D laser scan device

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    Hand-held 3D laser scanners on the market are appealing because they are portable and convenient to use, but they are expensive. Developing such a system from cheap devices using the same principles as the commercial systems is impossible. In this paper, a simple hand-held 3D laser scanner based on a volume reconstruction method is developed using cheap devices. Unlike conventional laser scanners, which collect a point cloud of an object's surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.

  3. A new electric method for non-invasive continuous monitoring of stroke volume and ventricular volume-time curves

    PubMed Central

    2012-01-01

    Background In this paper a new non-invasive, operator-free, continuous ventricular stroke volume monitoring device (Hemodynamic Cardiac Profiler, HCP) is presented, that measures the average stroke volume (SV) for each period of 20 seconds, as well as ventricular volume-time curves for each cardiac cycle, using a new electric method (Ventricular Field Recognition) with six independent electrode pairs distributed over the frontal thoracic skin. In contrast to existing non-invasive electric methods, our method does not use the algorithms of impedance or bioreactance cardiography. Instead, our method is based on specific 2D spatial patterns on the thoracic skin, representing the distribution, over the thorax, of changes in the applied current field caused by cardiac volume changes during the cardiac cycle. Since total heart volume variation during the cardiac cycle is a poor indicator for ventricular stroke volume, our HCP separates atrial filling effects from ventricular filling effects, and retrieves the volume changes of only the ventricles. Methods ex-vivo experiments on a post-mortem human heart have been performed to measure the effects of increasing the blood volume inside the ventricles in isolation, leaving the atrial volume invariant (which can not be done in-vivo). These effects have been measured as a specific 2D pattern of voltage changes on the thoracic skin. Furthermore, a working prototype of the HCP has been developed that uses these ex-vivo results in an algorithm to decompose voltage changes, that were measured in-vivo by the HCP on the thoracic skin of a human volunteer, into an atrial component and a ventricular component, in almost real-time (with a delay of maximally 39 seconds). The HCP prototype has been tested in-vivo on 7 human volunteers, using G-suit inflation and deflation to provoke stroke volume changes, and LVot Doppler as a reference technique. Results The ex-vivo measurements showed that ventricular filling caused a pattern over the thorax quite distinct from that of atrial filling. The in-vivo tests of the HCP with LVot Doppler resulted in a Pearson’s correlation of R = 0.892, and Bland-Altman plotting of SV yielded a mean bias of -1.6 ml and 2SD =14.8 ml. Conclusions The results indicate that the HCP was able to track the changes in ventricular stroke volume reliably. Furthermore, the HCP produced ventricular volume-time curves that were consistent with the literature, and may be a diagnostic tool as well. PMID:22900831

  4. Control theory based airfoil design for potential flow and a finite volume discretization

    NASA Technical Reports Server (NTRS)

    Reuther, J.; Jameson, A.

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.

  5. Unstructured Finite Volume Computational Thermo-Fluid Dynamic Method for Multi-Disciplinary Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    1998-01-01

    This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and a thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case and the method has been found to be significantly faster than conventional computational fluid dynamics (CFD) methods and therefore has the potential for implementation in multi-disciplinary analysis and design optimization in fluid and thermal systems. The paper also describes an algorithm for design optimization based on the Newton-Raphson method which has been recently tested in a turbomachinery application.

  6. Tumor Volume Estimation and Quasi-Continuous Administration for Most Effective Bevacizumab Therapy

    PubMed Central

    Sápi, Johanna; Kovács, Levente; Drexler, Dániel András; Kocsis, Pál; Gajári, Dávid; Sápi, Zoltán

    2015-01-01

    Background Bevacizumab is an exogenous inhibitor which inhibits the biological activity of human VEGF. Several studies have investigated the effectiveness of bevacizumab therapy according to different cancer types but these days there is an intense debate on its utility. We have investigated different methods to find the best tumor volume estimation since it creates the possibility for precise and effective drug administration with a much lower dose than in the protocol. Materials and Methods We have examined C38 mouse colon adenocarcinoma and HT-29 human colorectal adenocarcinoma. In both cases, three groups were compared in the experiments. The first group did not receive therapy, the second group received one 200 μg bevacizumab dose for a treatment period (protocol-based therapy), and the third group received 1.1 μg bevacizumab every day (quasi-continuous therapy). Tumor volume measurement was performed by digital caliper and small animal MRI. The mathematical relationship between MRI-measured tumor volume and mass was investigated to estimate accurate tumor volume using caliper-measured data. A two-dimensional mathematical model was applied for tumor volume evaluation, and tumor- and therapy-specific constants were calculated for the three different groups. The effectiveness of bevacizumab administration was examined by statistical analysis. Results In the case of C38 adenocarcinoma, protocol-based treatment did not result in significantly smaller tumor volume compared to the no treatment group; however, there was a significant difference between untreated mice and mice who received quasi-continuous therapy (p = 0.002). In the case of HT-29 adenocarcinoma, the daily treatment with one-twelfth total dose resulted in significantly smaller tumors than the protocol-based treatment (p = 0.038). When the tumor has a symmetrical, solid closed shape (typically without treatment), volume can be evaluated accurately from caliper-measured data with the applied two-dimensional mathematical model. Conclusion Our results provide a theoretical background for a much more effective bevacizumab treatment using optimized administration. PMID:26540189

  7. Effect of water volume based on water absorption and mixing time on physical properties of tapioca starch – wheat composite bread

    NASA Astrophysics Data System (ADS)

    Prameswari, I. K.; Manuhara, G. J.; Amanto, B. S.; Atmaka, W.

    2018-05-01

    Using tapioca starch in bread processing changes the amount of water absorbed by the dough, while sufficient mixing time allows optimal water absorption. This research aims to determine the effect of variations in water volume and mixing time on the physical properties of tapioca starch – wheat composite bread and to identify the best method for processing the composite bread. The research used a Complete Randomized Factorial Design (CRFD) with two factors: water volume (111.8 ml, 117.4 ml, 123 ml) and mixing time (16 minutes, 17 minutes 36 seconds, 19 minutes 12 seconds). The results showed that water volume significantly affected dough volume, bread volume and specific volume, baking expansion, and crust thickness. Mixing time significantly affected dough volume and specific volume, bread volume and specific volume, baking expansion, bread height, and crust thickness. The combination of water volume and mixing time significantly affected all physical property parameters except crust thickness.

  8. Neuroimaging correlates of parent ratings of working memory in typically developing children

    PubMed Central

    Mahone, E. Mark; Martin, Rebecca; Kates, Wendy R.; Hay, Trisha; Horská, Alena

    2009-01-01

    The purpose of the present study was to investigate construct validity of parent ratings of working memory in children, using a multi-trait/multi-method design including neuroimaging, rating scales, and performance-based measures. Thirty-five typically developing children completed performance-based tests of working memory and nonexecutive function (EF) skills, received volumetric MRI, and were rated by parents on both EF-specific and broad behavior rating scales. After controlling for total cerebral volume and age, parent ratings of working memory were significantly correlated with frontal gray, but not temporal, parietal, or occipital gray, or any lobar white matter volumes. Performance-based measures of working memory were also moderately correlated with frontal lobe gray matter volume; however, non-EF parent ratings and non-EF performance-based measures were not correlated with frontal lobe volumes. Results provide preliminary support for the convergent and discriminant validity of parent ratings of working memory, and emphasize their utility in exploring brain–behavior relationships in children. Rating scales that directly examine EF skills may potentially have ecological validity, not only for “everyday” function, but also as correlates of brain volume. PMID:19128526

  9. Accuracy of volumetric measurement of simulated root resorption lacunas based on cone beam computed tomography.

    PubMed

    Wang, Y; He, S; Guo, Y; Wang, S; Chen, S

    2013-08-01

    To evaluate the accuracy of volumetric measurement of simulated root resorption cavities based on cone beam computed tomography (CBCT), in comparison with that of Micro-computed tomography (Micro-CT) which served as the reference. The State Key Laboratory of Oral Diseases at Sichuan University. Thirty-two bovine teeth were included for standardized CBCT scanning and Micro-CT scanning before and after the simulation of different degrees of root resorption. The teeth were divided into three groups according to the depths of the root resorption cavity (group 1: 0.15, 0.2, 0.3 mm; group 2: 0.6, 1.0 mm; group 3: 1.5, 2.0, 3.0 mm). Each depth included four specimens. Differences in tooth volume before and after simulated root resorption were then calculated from CBCT and Micro-CT scans, respectively. The overall between-method agreement of the measurements was evaluated using the concordance correlation coefficient (CCC). For the first group, the average volume of resorption cavity was 1.07 mm(3) , and the between-method agreement of measurement for the volume changes was low (CCC = 0.098). For the second and third groups, the average volumes of resorption cavities were 3.47 and 6.73 mm(3) respectively, and the between-method agreements were good (CCC = 0.828 and 0.895, respectively). The accuracy of 3-D quantitative volumetric measurement of simulated root resorption based on CBCT was fairly good in detecting simulated resorption cavities larger than 3.47 mm(3), while it was not sufficient for measuring resorption cavities smaller than 1.07 mm(3) . This method could be applied in future studies of root resorption although further studies are required to improve its accuracy. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
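
    The between-method agreement is summarized with the concordance correlation coefficient (CCC); a sketch of Lin's CCC is given below with invented paired volume-change values.

    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient between two measurement series."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()               # population variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical resorption-cavity volume changes (mm^3): CBCT vs Micro-CT
    cbct  = [3.1, 3.6, 6.5, 7.0, 2.9, 6.9]
    micro = [3.4, 3.5, 6.7, 6.8, 3.3, 7.1]
    print(f"CCC = {concordance_ccc(cbct, micro):.3f}")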

  10. Target volume definition for 18F-FDG PET-positive lymph nodes in radiotherapy of patients with non-small cell lung cancer.

    PubMed

    Nestle, Ursula; Schaefer-Schuler, Andrea; Kremp, Stephanie; Groeschel, Andreas; Hellwig, Dirk; Rübe, Christian; Kirsch, Carl-Martin

    2007-04-01

    FDG PET is increasingly used in radiotherapy planning. Recently, we demonstrated substantial differences in target volumes when applying different methods of FDG-based contouring in primary lung tumours (Nestle et al., J Nucl Med 2005;46:1342-8). This paper focusses on FDG-positive mediastinal lymph nodes (LN(PET)). In our institution, 51 NSCLC patients who were candidates for radiotherapy prospectively underwent staging FDG PET followed by a thoracic PET scan in the treatment position and a planning CT. Eleven of them had 32 distinguishable non-confluent mediastinal or hilar nodal FDG accumulations (LN(PET)). For these, sets of gross tumour volumes (GTVs) were generated at both acquisition times by four different PET-based contouring methods (visual: GTV(vis); 40% SUVmax: GTV40; SUV=2.5: GTV2.5; target/background (T/B) algorithm: GTV(bg)). All differences concerning GTV sizes were within the range of the resolution of the PET system. The detectability and technical delineability of the GTVs were significantly better in the late scans (e.g. p = 0.02 for diagnostic application of SUVmax = 2.5; p = 0.0001 for technical delineability by GTV2.5; p = 0.003 by GTV40), favouring the GTV(bg) method owing to satisfactory overall applicability and independence of GTVs from acquisition time. Compared with CT, the majority of PET-based GTVs were larger, probably owing to resolution effects, with a possible influence of lesion movements. For nodal GTVs, different methods of contouring did not lead to clinically relevant differences in volumes. However, there were significant differences in technical delineability, especially after early acquisition. Overall, our data favour a late acquisition of FDG PET scans for radiotherapy planning, and the use of a T/B algorithm for GTV contouring.
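
    The simpler threshold-based contouring rules named above (40% of SUVmax and a fixed SUV of 2.5) can be sketched directly on an SUV array; the array below is synthetic, and the visual and target/background methods are not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)
    suv = rng.gamma(shape=2.0, scale=0.6, size=(20, 20))   # synthetic background uptake
    suv[8:12, 8:12] += 6.0                                 # synthetic FDG-avid node

    gtv_40  = suv >= 0.4 * suv.max()   # GTV40: 40% of SUVmax
    gtv_2_5 = suv >= 2.5               # GTV2.5: fixed SUV threshold
    print("voxels in GTV40:", int(gtv_40.sum()), "| voxels in GTV2.5:", int(gtv_2_5.sum()))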

  11. Relationship Between LIBS Ablation and Pit Volume for Geologic Samples: Applications for the In Situ Absolute Geochronology

    NASA Technical Reports Server (NTRS)

    Devismes, Damien; Cohen, Barbara; Miller, J.-S.; Gillot, P.-Y.; Lefevre, J.-C.; Boukari, C.

    2014-01-01

    These first results demonstrate that LIBS spectra can be a useful tool for estimating the ablated volume. When the ablated volume is larger than 9 x 10(exp 6) cubic micrometers, the method has an uncertainty of less than 10%, which is sufficient for direct implementation in the KArLE experiment protocol. Nevertheless, depending on the samples and their mean grain size, obtaining homogeneous spectra becomes more difficult as the ablated volume increases. Several K-Ar dating studies based on this approach will be carried out, and the results will then be presented and discussed.

  12. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    PubMed Central

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-01-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications. PMID:26883390

  13. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.
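    The deformation metric used above is the Hausdorff distance, summarized per nodule by the mean, standard deviation and coefficient of variation over regions. The record does not give the regional partitioning scheme, so the sketch below only illustrates the basic computation: a symmetric Hausdorff distance between two surface point clouds (via scipy's directed_hausdorff), summarized over a set of nodule pairs; the toy point clouds and the per-nodule loop are illustrative stand-ins, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, 3) surface point clouds."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

rng = np.random.default_rng(0)
distances = []
for _ in range(24):  # toy stand-in for the 24 synthetic/virtual nodule pairs
    ct_surface = rng.normal(size=(400, 3))                               # CT-derived surface (mm)
    virtual_surface = ct_surface + rng.normal(scale=0.2, size=(400, 3))  # virtual counterpart
    distances.append(symmetric_hausdorff(ct_surface, virtual_surface))

distances = np.asarray(distances)
mean_hd, std_hd = distances.mean(), distances.std()
print(f"MeanHD={mean_hd:.2f} mm, STDHD={std_hd:.2f} mm, CVHD={std_hd / mean_hd:.3f}")
```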

  14. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    PubMed Central

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-01-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology. PMID:28786399

  15. Statistical characterization of carbon phenolic prepreg materials, volume 1

    NASA Technical Reports Server (NTRS)

    Beckley, Don A.; Stites, John, Jr.

    1988-01-01

    The objective was to characterize several lots of materials used for carbon/carbon and carbon/phenolic product manufacture. Volume one is organized into testing categories based on raw material or product form. Each category contains a discussion of the sampling plan, comments and observations on each test method utilized, and a summary of the results obtained in that category.

  16. Volumetric analysis of pelvic hematomas after blunt trauma using semi-automated seeded region growing segmentation: a method validation study.

    PubMed

    Dreizin, David; Bodanapally, Uttam K; Neerchal, Nagaraj; Tirada, Nikki; Patlas, Michael; Herskovits, Edward

    2016-11-01

    Manually segmented traumatic pelvic hematoma volumes are strongly predictive of active bleeding at conventional angiography, but the method is time intensive, limiting its clinical applicability. We compared volumetric analysis using semi-automated region growing segmentation to manual segmentation and diameter-based size estimates in patients with pelvic hematomas after blunt pelvic trauma. A 14-patient cohort was selected in an anonymous randomized fashion from a dataset of patients with pelvic binders at MDCT, collected retrospectively as part of a HIPAA-compliant IRB-approved study from January 2008 to December 2013. To evaluate intermethod differences, one reader (R1) performed three volume measurements using the manual technique and three volume measurements using the semi-automated technique. To evaluate interobserver differences for semi-automated segmentation, a second reader (R2) performed three semi-automated measurements. One-way analysis of variance was used to compare differences in mean volumes. Time effort was also compared. Correlation between the two methods as well as two shorthand appraisals (greatest diameter, and the ABC/2 method for estimating ellipsoid volumes) was assessed with Spearman's rho (r). Intraobserver variability was lower for semi-automated compared to manual segmentation, with standard deviations ranging between ±5-32 mL and ±17-84 mL, respectively (p = 0.0003). There was no significant difference in mean volumes between the two readers' semi-automated measurements (p = 0.83); however, means were lower for the semi-automated compared with the manual technique (manual: mean and SD 309.6 ± 139 mL; R1 semi-auto: 229.6 ± 88.2 mL, p = 0.004; R2 semi-auto: 243.79 ± 99.7 mL, p = 0.021). Despite differences in means, the correlation between the two methods was very strong and highly significant (r = 0.91, p < 0.001). Correlations with diameter-based methods were only moderate and nonsignificant. Mean semi-automated segmentation time effort was 2 min and 6 s and 2 min and 35 s for R1 and R2, respectively, vs. 22 min and 8 s for manual segmentation. Semi-automated pelvic hematoma volumes correlate strongly with manually segmented volumes. Since semi-automated segmentation can be performed reliably and efficiently, volumetric analysis of traumatic pelvic hematomas is potentially valuable at the point-of-care.

  17. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    PubMed

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

    Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach of involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple doses administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.

  18. Optimization of yttrium-90 PET for simultaneous PET/MR imaging: A phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldib, Mootaz

    2016-08-15

    Purpose: Positron emission tomography (PET) imaging of yttrium-90 in the liver post radioembolization has been shown useful for personalized dosimetry calculations and evaluation of extrahepatic deposition. The purpose of this study was to quantify the benefits of several MR-based data correction approaches offered by using a combined PET/MR system to improve Y-90 PET imaging. In particular, the feasibility of motion and partial volume corrections was investigated in a controlled phantom study. Methods: The ACR phantom was filled with an initial concentration of 8 GBq of Y-90 solution resulting in a contrast of 10:1 between the hot cylinders and the background. Y-90 PET motion correction through motion estimates from MR navigators was evaluated by using a custom-built motion stage that simulated realistic amplitudes of respiration-induced liver motion. Finally, the feasibility of an MR-based partial volume correction method was evaluated using a wavelet decomposition approach. Results: Motion resulted in a large (∼40%) loss of contrast recovery for the 8 mm cylinder in the phantom, but was corrected for after MR-based motion correction was applied. Partial volume correction improved contrast recovery by 13% for the 8 mm cylinder. Conclusions: MR-based data correction improves Y-90 PET imaging on simultaneous PET/MR systems. These methods must be assessed further in the clinical setting.

  19. Retrospective Methods Analysis of Semiautomated Intracerebral Hemorrhage Volume Quantification From a Selection of the STICH II Cohort (Early Surgery Versus Initial Conservative Treatment in Patients With Spontaneous Supratentorial Lobar Intracerebral Haematomas).

    PubMed

    Haley, Mark D; Gregson, Barbara A; Mould, W Andrew; Hanley, Daniel F; Mendelow, Alexander David

    2018-02-01

    The ABC/2 method for calculating intracerebral hemorrhage (ICH) volume has been well validated. However, the formula, derived from the volume of an ellipse, assumes the shape of ICH is elliptical. We sought to compare the agreement of the ABC/2 formula with other methods through retrospective analysis of a selection of the STICH II cohort (Early Surgery Versus Initial Conservative Treatment in Patients With Spontaneous Supratentorial Lobar Intracerebral Haematomas). From 390 patients, 739 scans were selected from the STICH II image archive based on the availability of a CT scan compatible with OsiriX DICOM viewer. ICH volumes were calculated by the reference standard semiautomatic segmentation in OsiriX software and compared with calculated arithmetic methods (ABC/2, ABC/2.4, ABC/3, and 2/3SC) volumes. Volumes were compared by difference plots for specific groups: randomization ICH (n=374), 3- to 7-day postsurgical ICH (n=206), antithrombotic-associated ICH (n=79), irregular-shape ICH (n=703) and irregular-density ICH (n=650). Density and shape were measured by the Barras ordinal shape and density groups (1-5). The ABC/2.4 method had the closest agreement to the semiautomatic segmentation volume in all groups, except for the 3- to 7-day postsurgical ICH group where the ABC/3 method was superior. Although the ABC/2 formula for calculating elliptical ICH is well validated, it must be used with caution in ICH scans where the elliptical shape of ICH is a false assumption. We validated the adjustment of the ABC/2.4 method in randomization, antithrombotic-associated, heterogeneous-density, and irregular-shape ICH. URL: http://www.isrctn.com/ISRCTN22153967. Unique identifier: ISRCTN22153967. © 2018 American Heart Association, Inc.
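    For readers unfamiliar with the arithmetic estimators compared above, the sketch below shows how the ABC-family formulas are typically computed (A = greatest haematoma diameter, B = diameter perpendicular to A on the same slice, C = craniocaudal extent; the divisor distinguishes ABC/2, ABC/2.4 and ABC/3, and 2/3SC uses the area S of the largest slice). Exact measurement conventions follow the cited literature rather than this record, and the input values are hypothetical.

```python
def abc_volume(a_cm, b_cm, c_cm, divisor=2.0):
    """Ellipsoid-style ICH volume estimate in mL: (A * B * C) / divisor."""
    return a_cm * b_cm * c_cm / divisor

def two_thirds_sc_volume(largest_slice_area_cm2, c_cm):
    """2/3SC estimate in mL: two thirds of the largest slice area times the height."""
    return 2.0 / 3.0 * largest_slice_area_cm2 * c_cm

a, b, c = 5.0, 4.0, 3.5            # hypothetical measurements (cm)
s = 3.14159 / 4.0 * a * b          # elliptical area of the largest slice (cm^2)
for name, divisor in (("ABC/2", 2.0), ("ABC/2.4", 2.4), ("ABC/3", 3.0)):
    print(f"{name:8s}: {abc_volume(a, b, c, divisor):5.1f} mL")
print(f"{'2/3SC':8s}: {two_thirds_sc_volume(s, c):5.1f} mL")
```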

  20. Method and apparatus for measuring the state of charge in a battery based on volume of battery components

    DOEpatents

    Rouhani, S. Zia

    1996-10-22

    The state of charge of electrochemical batteries of different kinds is determined by measuring the incremental change in the total volume of the reactive masses in the battery. The invention is based on the principle that all electrochemical batteries, either primary or secondary (rechargeable), produce electricity through a chemical reaction with at least one electrode, and the chemical reactions produce certain changes in the composition and density of the electrode. The reactive masses of the electrodes, the electrolyte, and any separator or spacers are usually contained inside a battery casing of a certain volume. As the battery is used, or recharged, the specific volume of at least one of the electrode masses will change and, since the masses of the materials do not change considerably, the total volume occupied by at least one of the electrodes will change. These volume changes may be measured in many different ways and related to the state of charge in the battery. In one embodiment, the volume change can be measured by monitoring the small changes in one of the principal dimensions of the battery casing as it expands or shrinks to accommodate the combined volumes of its components.

  1. Putaminal volume and diffusion in early familial Creutzfeldt-Jakob disease.

    PubMed

    Seror, Ilana; Lee, Hedok; Cohen, Oren S; Hoffmann, Chen; Prohovnik, Isak

    2010-01-15

    The putamen is centrally implicated in the pathophysiology of Creutzfeldt-Jakob Disease (CJD). To our knowledge, its volume has never been measured in this disease. We investigated whether gross putaminal atrophy can be detected by MRI in early stages, when the diffusion is already reduced. Twelve familial CJD patients with the E200K mutation and 22 healthy controls underwent structural and diffusion MRI scans. The putamen was identified in anatomical scans by two methods: manual tracing by a blinded investigator, and automatic parcellation by a computerized segmentation procedure (FSL FIRST). For each method, volume and mean Apparent Diffusion Coefficient (ADC) were calculated. ADC was significantly lower in CJD patients (697 ± 64 μm²/s vs. 750 ± 31 μm²/s, p < 0.005), as expected, but the volume was not reduced. The computerized FIRST delineation yielded comparable ADC values to the manual method, but computerized volumes were smaller than manual tracing values. We conclude that significant diffusion reduction in the putamen can be detected by delineating the structure manually or with a computerized algorithm. Our findings confirm and extend previous voxel-based and observational studies. Putaminal volume was not reduced in our early-stage patients, thus confirming that diffusion abnormalities precede detectable atrophy in this structure.

  2. Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

    PubMed Central

    Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.

    2017-01-01

    Background Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969 respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773 respectively), likely arising from differences in manual and automated segmentation methods of these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296

  3. Acoustic measurement of bubble size in an inkjet printhead.

    PubMed

    Jeurissen, Roger; van der Bos, Arjan; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; de Jong, Jos; Versluis, Michel; Lohse, Detlef

    2009-11-01

    The volume of a bubble in a piezoinkjet printhead is measured acoustically. The method is based on a numerical model of the investigated system. The piezo not only drives the system but it is also used as a sensor by measuring the current it generates. The numerical model is used to predict this current for a given bubble volume. The inverse problem is to infer the bubble volume from an experimentally obtained piezocurrent. By solving this inverse problem, the size and position of the bubble can thus be measured acoustically. The method is experimentally validated with an inkjet printhead that is augmented with a glass connection channel, through which the bubble was observed optically, while at the same time the piezocurrent was measured. The results from the acoustical measurement method correspond closely to the results from the optical measurement.
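    The core of the method is an inverse problem: find the bubble volume whose model-predicted piezo current best matches the measured current. The paper's numerical acoustic model is not reproduced here; the sketch below substitutes a purely hypothetical toy forward model (a decaying sinusoid whose frequency shifts with bubble volume) and recovers the volume by a least-squares grid search, only to show the structure of such an inversion.

```python
import numpy as np

t = np.linspace(0.0, 50e-6, 2000)                  # time axis (s)

def piezo_current(bubble_volume_nl):
    """Hypothetical stand-in for the numerical printhead model: the bubble
    lowers the channel resonance frequency and the ringing decays in time."""
    f0, shift, tau = 150e3, 20e3, 15e-6            # assumed model parameters
    return np.exp(-t / tau) * np.sin(2 * np.pi * (f0 - shift * bubble_volume_nl) * t)

# "Measured" current: model output for a true volume of 1.2 nL plus noise.
measured = piezo_current(1.2) + np.random.default_rng(2).normal(scale=0.02, size=t.size)

# Inverse problem: least-squares search for the volume that reproduces the trace.
candidates = np.linspace(0.0, 5.0, 501)
residuals = [np.sum((piezo_current(v) - measured) ** 2) for v in candidates]
print(f"estimated bubble volume: {candidates[int(np.argmin(residuals))]:.2f} nL (true 1.2 nL)")
```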

  4. Real time control of a combined sewer system using radar-measured precipitation--results of the pilot study.

    PubMed

    Petruck, A; Holtmeier, E; Redder, A; Teichgräber, B

    2003-01-01

    Emschergenossenschaft and Lippeverband have developed a method to use radar-measured precipitation as an input for real-time control of a combined sewer system containing several overflow structures. Two real-time control strategies have been developed and tested: one is solely volume-based, the other is volume- and pollution-based. The system has been implemented in a pilot study in Gelsenkirchen, Germany. During the project the system was optimised and is now in constant operation. It was found that the volume of combined sewage overflow could be reduced by 5 per cent per year. This was also found in simulations carried out in similar catchment areas. Most of the potential for improvement can already be achieved by local pollution-based control strategies.

  5. High Performance Computing Technologies for Modeling the Dynamics and Dispersion of Ice Chunks in the Arctic Ocean

    DTIC Science & Technology

    2016-08-23

    Hybrid finite element / finite volume based CaMEL shallow water flow solvers have been successfully extended to study wave ... effects on ice floes in a simplified 10 sq-km ocean domain. Our solver combines the merits of both the finite element and finite volume methods and ... [Report-form fragment; keywords: sea ice dynamics, shallow water, finite element, finite volume.]

  6. Quest for a Realistic In Vivo Test Method for Antimicrobial Hand-Rub Agents: Introduction of a Low-Volume Hand Contamination Procedure▿

    PubMed Central

    Macinga, David R.; Beausoleil, Christopher M.; Campbell, Esther; Mulberry, Gayle; Brady, Ann; Edmonds, Sarah L.; Arbogast, James W.

    2011-01-01

    A novel method has been developed for the evaluation of alcohol-based hand rubs (ABHR) that employs a hand contamination procedure that more closely simulates the in-use conditions of ABHR. Hands of human subjects were contaminated with 0.2 ml of a concentrated suspension of Serratia marcescens (ATCC 14756) to achieve baseline contamination between 8 and 9 log10 CFU/hand while allowing product to be applied to dry hands with minimal soil load. Evaluation of 1.5 ml of an ABHR gel containing 62% ethanol produced log10 reductions of 2.66 ± 0.96, 2.40 ± 0.50, 2.41 ± 0.61, and 2.33 ± 0.49 (means ± standard deviations) after 1, 3, 7, and 10 successive contamination/product application cycles. In a study comparing this low-volume contamination (LVC) method to ASTM E1174, product dry times were more realistic and log10 reductions achieved by the ABHR were significantly greater when LVC was employed (P < 0.05). These results indicate that a novel low-volume hand contamination procedure, which more closely represents ABHR use conditions, provides more realistic estimates of in-use ABHR efficacies. Based on the LVC method, log10 reductions produced by ABHR were strongly dependent on the test product application volume (P < 0.0001) but were not influenced by the alcohol concentration when it was within the range of 62 to 85% (P = 0.378). PMID:22003004

  7. Non-invasive 3D time-of-flight imaging technique for tumour volume assessment in subcutaneous models.

    PubMed

    Delgado San Martin, J A; Worthington, P; Yates, J W T

    2015-04-01

    Subcutaneous tumour xenograft volumes are generally measured using callipers. This method is susceptible to inter- and intra-observer variability and systematic inaccuracies. Non-invasive 3D measurement using ultrasound and magnetic resonance imaging (MRI) has been considered, but requires immobilization of the animal. An infrared-based 3D time-of-flight (3DToF) camera was used to acquire a depth map of tumour-bearing mice. A semi-automatic algorithm based on parametric surfaces was applied to estimate tumour volume. Four clay mouse models and 18 tumour-bearing mice were assessed using callipers (applying both prolate spheroid and ellipsoid models) and 3DToF methods, and validated using tumour weight. Inter-experimentalist variability could be up to 25% in the calliper method. Experimental results demonstrated good consistency and relatively low error rates for the 3DToF method, in contrast to biased overestimation using callipers. Accuracy is currently limited by camera performance; however, we anticipate that the next generation of 3DToF cameras will be able to support the development of a practical system. Here, we describe an initial proof of concept for a non-invasive, non-immobilized, morphology-independent, economical and potentially more precise tumour volume assessment technique. This affordable technique should maximize the number of data points per animal, reduce the number of animals required in experiments and reduce their distress. © The Author(s) 2014.
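    The calliper-based estimates mentioned above are commonly computed with simple ellipsoid formulas; a minimal sketch is given below, assuming the frequently used π/6 convention (some groups use a factor of 1/2 instead) and hypothetical calliper readings.

```python
import math

def prolate_spheroid_volume(length_mm, width_mm):
    """Calliper-based prolate spheroid estimate: V = (pi/6) * L * W^2."""
    return math.pi / 6.0 * length_mm * width_mm ** 2

def ellipsoid_volume(length_mm, width_mm, height_mm):
    """Calliper-based ellipsoid estimate: V = (pi/6) * L * W * H."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

L, W, H = 12.0, 9.0, 7.0          # hypothetical calliper measurements (mm)
print(f"prolate spheroid: {prolate_spheroid_volume(L, W):.0f} mm^3")
print(f"ellipsoid:        {ellipsoid_volume(L, W, H):.0f} mm^3")
```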

  8. Automatic Intra-Operative Stitching of Non-Overlapping Cone-Beam CT Acquisitions

    PubMed Central

    Fotouhi, Javad; Fuerst, Bernhard; Unberath, Mathias; Reichenstein, Stefan; Lee, Sing Chun; Johnson, Alex A.; Osgood, Greg M.; Armand, Mehran; Navab, Nassir

    2018-01-01

    Purpose Cone-Beam Computed Tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and non-overlapping CBCT volumes to enable 3D measurements on large anatomical structures. Methods A CBCT-capable mobile C-arm is augmented with a Red-Green-Blue-Depth (RGBD) camera. An off-line co-calibration of the two imaging modalities results in co-registered video, infrared, and X-ray views of the surgical scene. Then, automatic stitching of multiple small, non-overlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. Results On an animal cadaver, we show stitching errors as low as 0.33 mm, 0.91 mm, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. Conclusions The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures. PMID:29569728

  9. Evaluation of nonrigid registration models for interfraction dose accumulation in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janssens, Guillaume; Orban de Xivry, Jonathan; Fekkes, Stein

    2009-09-15

    Purpose: Interfraction dose accumulation is necessary to evaluate the dose distribution of an entire course of treatment by adding up multiple dose distributions of different treatment fractions. This accumulation of dose distributions is not straightforward as changes in the patient anatomy may occur during treatment. For this purpose, the accuracy of nonrigid registration methods is assessed for dose accumulation based on the calculated deformation fields. Methods: A phantom study using a deformable cubic silicon phantom with implanted markers and a cylindrical silicon phantom with MOSFET detectors has been performed. The phantoms were deformed and images were acquired using a cone-beam CT imager. Dose calculations were performed on these CT scans using the treatment planning system. Nonrigid CT-based registration was performed using two different methods, the Morphons and Demons. The resulting deformation field was applied to the dose distribution. For both phantoms, accuracy of the registered dose distribution was assessed. For the cylindrical phantom, measured dose values in the deformed conditions were also compared with the dose values of the registered dose distributions. Finally, interfraction dose accumulation for two treatment fractions of a patient with primary rectal cancer has been performed and evaluated using isodose lines and the dose volume histograms of the target volume and normal tissue. Results: A significant decrease in the difference in marker or MOSFET position was observed after nonrigid registration methods (p<0.001) for both phantoms and with both methods, as well as a significant decrease in the dose estimation error (p<0.01 for the cubic phantom and p<0.001 for the cylindrical) with both methods. Considering the whole data set at once, the difference between estimated and measured doses was also significantly decreased using registration (p<0.001 for both methods). The patient case showed a slightly underdosed planning target volume and an overdosed bladder volume due to anatomical deformations. Conclusions: Dose accumulation using nonrigid registration methods is possible using repeated CT imaging. This opens possibilities for interfraction dose accumulation and adaptive radiotherapy to incorporate possible differences in dose delivered to the target volume and organs at risk due to anatomical deformations.
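    The dose-accumulation step itself, applying the deformation field obtained from nonrigid registration to a fraction's dose grid and summing, can be illustrated independently of the Morphons or Demons registration. The minimal Python sketch below warps a toy dose grid with a displacement vector field using trilinear interpolation (scipy's map_coordinates); the grid sizes, the pull-back convention and the example field are assumptions, not the study's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(dose, dvf):
    """Warp a 3D dose grid with a displacement vector field (pull-back:
    the warped dose at voxel x is sampled at x + dvf(x)).
    dose: (Z, Y, X) array in Gy; dvf: (3, Z, Y, X) displacements in voxels."""
    coords = np.indices(dose.shape).astype(float) + dvf
    return map_coordinates(dose, coords, order=1, mode="nearest")

dose_fraction_1 = np.random.rand(20, 20, 20) * 2.0       # toy fraction dose (Gy)
dose_fraction_2 = np.random.rand(20, 20, 20) * 2.0
dvf = np.zeros((3,) + dose_fraction_2.shape)
dvf[2] += 1.5                                            # assumed 1.5-voxel shift along x

accumulated = dose_fraction_1 + warp_dose(dose_fraction_2, dvf)
print(accumulated.shape, float(accumulated.max()))
```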

  10. Annual Conference on HAN-Based Liquid Propellants. Volume 1

    DTIC Science & Technology

    1989-05-01

    Fischer. This situation is obviously not ideal and effort is being made to find a suitable method. However we have been assured that there has been ... [Fragments of the report contents: CLASSIFICATION OF HAN-BASED LIQUID PROPELLANT LP101, S. Westlake, p. 64; POSSIBLE TEST METHODS TO STUDY THE THERMAL STABILITY OF ...] ... specifications for LP. The phase of the program which is now in progress has dealt with (1) reviewing, recommending and developing applicable analytical methods

  11. Multi-temporal MRI carpal bone volumes analysis by principal axes registration

    NASA Astrophysics Data System (ADS)

    Ferretti, Roberta; Dellepiane, Silvana

    2016-03-01

    In this paper, a principal axes registration technique is presented and applied to segmented volumes. The purpose of the proposed registration is to compare multi-temporal volumes of carpal bones from Magnetic Resonance Imaging (MRI) acquisitions. Starting from the second-order moment matrix, the eigenvectors are calculated to allow rotation of the volumes with respect to reference axes; the volumes are then spatially translated so that they overlap. A quantitative evaluation of the results is carried out by computing classical indices from the confusion matrix, which express similarity between volumes of the same organ extracted from MRI acquisitions performed at different times. Within the medical field, the use of registration to compare multi-temporal images is of great interest, since it provides the physician with a tool for visual monitoring of disease evolution. The segmentation method used herein is based on graph theory and is robust, unsupervised and parameter-independent. Patients affected by rheumatic diseases have been considered.
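    A minimal sketch of the principal-axes idea (not the authors' implementation) is given below: the centroid and the eigenvectors of the second-order moment (covariance) matrix of each segmented volume define its principal frame, and the moving volume's voxel coordinates are rotated and translated into the frame of the fixed volume. Eigenvector sign and ordering ambiguities, which a practical implementation must resolve, are ignored here, and the ellipsoidal "bone" is a toy example.

```python
import numpy as np

def principal_frame(mask):
    """Centroid and principal axes (eigenvectors of the second-order moment
    matrix) of a binary segmentation volume."""
    pts = np.argwhere(mask).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)          # 3x3 second-order moment matrix
    _, vecs = np.linalg.eigh(cov)             # columns = principal axes
    return centroid, vecs

def register(mask_moving, mask_fixed):
    """Rotate/translate the moving volume's voxel coordinates so that its
    principal axes and centroid coincide with those of the fixed volume."""
    c_m, v_m = principal_frame(mask_moving)
    c_f, v_f = principal_frame(mask_fixed)
    rotation = v_f @ v_m.T                    # maps moving axes onto fixed axes
    pts = np.argwhere(mask_moving).astype(float)
    return (pts - c_m) @ rotation.T + c_f     # registered voxel coordinates

# Toy example: an ellipsoidal "carpal bone" and a shifted copy of it.
z, y, x = np.mgrid[:40, :40, :40]
fixed = ((z - 20) ** 2 / 49 + (y - 20) ** 2 / 25 + (x - 20) ** 2 / 9) <= 1
moving = np.roll(fixed, shift=(3, -2, 4), axis=(0, 1, 2))
print(register(moving, fixed).mean(axis=0))                 # ~ fixed centroid
print(np.argwhere(fixed).astype(float).mean(axis=0))
```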

  12. Rapid jetting status inspection and accurate droplet volume measurement for a piezo drop-on-demand inkjet print head using a scanning mirror for display applications

    NASA Astrophysics Data System (ADS)

    Shin, Dong-Youn; Kim, Minsung

    2017-02-01

    Despite the inherent fabrication simplicity of piezo drop-on-demand inkjet printing, the non-uniform deposition of colourants or electroluminescent organic materials leads to faulty display products, and hence, the importance of rapid jetting status inspection and accurate droplet volume measurement increases from a process perspective. In this work, various jetting status inspections and droplet volume measurement methods are reviewed by discussing their advantages and disadvantages, and then, the opportunities for the developed prototype with a scanning mirror are explored. This work demonstrates that jetting status inspection of 384 fictitious droplets can be performed within 17 s with maximum and minimum measurement accuracies of 0.2 ± 0.5 μm for the fictitious droplets of 50 μm in diameter and -1.2 ± 0.3 μm for the fictitious droplets of 30 μm in diameter, respectively. In addition to the new design of an inkjet monitoring instrument with a scanning mirror, two novel methods to accurately measure the droplet volume by amplifying a minute droplet volume difference and then converting to other physical properties are suggested and the droplet volume difference of ±0.3% is demonstrated to be discernible using numerical simulations, even with the low measurement accuracy of 1 μm. When the fact is considered that the conventional vision-based method with a CCD camera requires the optical measurement accuracy less than 25 nm to measure the volume of an in-flight droplet in the nominal diameter of 50 μm at the same volume measurement accuracy, the suggested method with the developed prototype offers a whole new opportunity to inkjet printing for display applications.

  13. Rapid jetting status inspection and accurate droplet volume measurement for a piezo drop-on-demand inkjet print head using a scanning mirror for display applications.

    PubMed

    Shin, Dong-Youn; Kim, Minsung

    2017-02-01

    Despite the inherent fabrication simplicity of piezo drop-on-demand inkjet printing, the non-uniform deposition of colourants or electroluminescent organic materials leads to faulty display products, and hence, the importance of rapid jetting status inspection and accurate droplet volume measurement increases from a process perspective. In this work, various jetting status inspections and droplet volume measurement methods are reviewed by discussing their advantages and disadvantages, and then, the opportunities for the developed prototype with a scanning mirror are explored. This work demonstrates that jetting status inspection of 384 fictitious droplets can be performed within 17 s with maximum and minimum measurement accuracies of 0.2 ± 0.5 μm for the fictitious droplets of 50 μm in diameter and -1.2 ± 0.3 μm for the fictitious droplets of 30 μm in diameter, respectively. In addition to the new design of an inkjet monitoring instrument with a scanning mirror, two novel methods to accurately measure the droplet volume by amplifying a minute droplet volume difference and then converting to other physical properties are suggested and the droplet volume difference of ±0.3% is demonstrated to be discernible using numerical simulations, even with the low measurement accuracy of 1 μm. When the fact is considered that the conventional vision-based method with a CCD camera requires the optical measurement accuracy less than 25 nm to measure the volume of an in-flight droplet in the nominal diameter of 50 μm at the same volume measurement accuracy, the suggested method with the developed prototype offers a whole new opportunity to inkjet printing for display applications.

  14. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  15. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  16. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded for a given pixel by a low-resolution hyperspectral remote sensor is, even leaving aside the effects of complex terrain, a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is an active research front in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By taking the abundance fractions into account, the algorithm obtains the pure endmember set and the corresponding abundance fractions together, so the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm on simulated data and real hyperspectral data, and the results indicate that the proposed method can recover the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
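    The full MVSAC endmember estimation is not reproduced here, but the linear mixing model and the abundance constraints it works with can be illustrated. The sketch below, under the assumption that the endmember spectra are already known, estimates per-pixel abundances by non-negative least squares and then rescales them toward the sum-to-one constraint; the synthetic endmembers and pixels are illustrative only.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_abundances(pixels, endmembers):
    """Non-negative least-squares abundances under the linear mixing model.
    pixels: (N, bands); endmembers: (bands, p). Returns (N, p) abundances
    rescaled to approximately satisfy the sum-to-one constraint."""
    abundances = np.array([nnls(endmembers, px)[0] for px in pixels])
    sums = abundances.sum(axis=1, keepdims=True)
    return np.where(sums > 0, abundances / sums, abundances)

rng = np.random.default_rng(1)
endmembers = rng.random((50, 3))                    # 50 bands, 3 materials (assumed known)
true_ab = rng.dirichlet(np.ones(3), size=100)       # simulated mixed pixels
pixels = true_ab @ endmembers.T + rng.normal(scale=0.005, size=(100, 50))
est = estimate_abundances(pixels, endmembers)
print("mean abundance error:", float(np.abs(est - true_ab).mean()))
```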

  17. Developing and Evaluating Prototype of Waste Volume Monitoring Using Internet of Things

    NASA Astrophysics Data System (ADS)

    Fathhan Arief, Mohamad; Lumban Gaol, Ford

    2017-06-01

    In Indonesia, and especially in Jakarta, large amounts of scattered garbage are an eyesore and a source of pollution that can spread disease. One cause is that trash bins overflow when full and can no longer accommodate waste from other people. The authors therefore propose a more systematic method for waste disposal, supported by a prototype for waste volume monitoring. Using the Internet of Things, the waste volume monitoring prototype can notify the sanitation agency when the waste in a trash bin needs to be collected. In this study, the prototype was designed and built using the Arduino-based LinkIt ONE board and an ultrasonic sensor for level sensing. Once the prototype was completed, it was evaluated to determine whether it functions properly. The results showed that the expected functions of the waste volume monitoring prototype work well.
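    The record does not give the firmware, so the sketch below only illustrates, in Python, the kind of calculation such a prototype would perform on an ultrasonic reading: converting the measured sensor-to-surface distance into a fill percentage and deciding when to notify the sanitation agency. The bin depth, sensor offset and 80% threshold are assumptions.

```python
def fill_percent(distance_cm, bin_depth_cm, sensor_offset_cm=0.0):
    """Convert an ultrasonic distance reading (sensor to waste surface)
    into a fill percentage of the bin."""
    usable = bin_depth_cm - sensor_offset_cm
    level = max(0.0, min(usable, usable - (distance_cm - sensor_offset_cm)))
    return 100.0 * level / usable

for reading in (95.0, 50.0, 12.0):                  # hypothetical readings (cm)
    pct = fill_percent(reading, bin_depth_cm=100.0)
    status = "notify sanitation agency" if pct >= 80.0 else "ok"
    print(f"distance={reading:5.1f} cm -> fill={pct:5.1f}% ({status})")
```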

  18. Volume Measurements of Laser-generated Pits for In Situ Geochronology using KArLE (Potassium-Argon Laser Experiment)

    NASA Technical Reports Server (NTRS)

    French, R. A.; Cohen, B. A.; Miller, J. S.

    2014-01-01

    The Potassium-Argon Laser Experiment (KArLE) is composed of two main instruments: a spectrometer, as part of the Laser-Induced Breakdown Spectroscopy (LIBS) method, and a Mass Spectrometer (MS). The LIBS laser ablates a sample and creates a plasma cloud, generating a pit in the sample. The LIBS plasma is measured for K abundance in weight percent, and the released gas is measured using the MS, which determines Ar abundance in moles. To relate the K and Ar measurements, the total mass of the ablated sample is needed, but it can be difficult to measure directly. Instead, density and volume are used to calculate mass, where density is calculated from the elemental composition of the rock (from the emission spectrum) and volume is determined from pit morphology. This study aims to reduce the uncertainty for KArLE by analyzing pit volume relationships in several analog materials and comparing methods of pit volume measurement and their associated uncertainties.
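    As a numerical illustration of the mass bookkeeping described above (not the KArLE flight algorithm), the sketch below converts a pit volume and an assumed bulk density into ablated mass, and the LIBS-measured K abundance (wt%) into moles of potassium, which can then be related to the MS-measured moles of Ar; the density and abundance values are hypothetical.

```python
K_MOLAR_MASS = 39.0983            # g/mol, natural potassium

def potassium_moles(pit_volume_um3, density_g_per_cm3, k_wt_percent):
    """Moles of K released by a LIBS shot, from pit volume, sample density,
    and the K abundance (wt%) measured from the LIBS spectrum."""
    volume_cm3 = pit_volume_um3 * 1e-12          # 1 um^3 = 1e-12 cm^3
    ablated_mass_g = density_g_per_cm3 * volume_cm3
    return ablated_mass_g * (k_wt_percent / 100.0) / K_MOLAR_MASS

# Example: a 9e6 um^3 pit in basalt-like material (~3.0 g/cm^3) with 1.5 wt% K.
print(f"{potassium_moles(9e6, 3.0, 1.5):.3e} mol K")
```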

  19. Automation of CT-based haemorrhagic stroke assessment for improved clinical outcomes: study protocol and design.

    PubMed

    Chinda, Betty; Medvedev, George; Siu, William; Ester, Martin; Arab, Ali; Gu, Tao; Moreno, Sylvain; D'Arcy, Ryan C N; Song, Xiaowei

    2018-04-19

    Haemorrhagic stroke is of significant healthcare concern due to its association with high mortality and lasting impact on the survivors' quality of life. Treatment decisions and clinical outcomes depend strongly on the size, spread and location of the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while research-based approaches are yet to be tested for clinical utility; there is a demonstrated need for developing effective solutions. The project under review investigates the development of an automatic NCCT-based haematoma computation tool in support of accurate quantification of haematoma volumes. Several existing research methods for haematoma volume estimation are studied. Selected methods are tested using NCCT images of patients diagnosed with acute haemorrhagic stroke. For inter-rater and intrarater reliability evaluation, different raters will analyse haemorrhage volumes independently. The efficiency with respect to time of haematoma volume assessments will be examined to compare with the results from routine clinical evaluations and planimetry assessment that are known to be more accurate. The project will target the development of an enhanced solution by adapting existing methods and integrating machine learning algorithms. NCCT-based information of brain haemorrhage (eg, size, volume, location) and other relevant information (eg, age, sex, risk factor, comorbidities) will be used in relation to clinical outcomes with future project development. Validity and reliability of the solution will be examined for potential clinical utility. The project including procedures for deidentification of NCCT data has been ethically approved. The study involves secondary use of existing data and does not require new consent of participation. The team consists of clinical neuroimaging scientists, computing scientists and clinical professionals in neurology and neuroradiology and includes patient representatives. Research outputs will be disseminated following knowledge translation plans towards improving stroke patient care. Significant findings will be published in scientific journals. Anticipated deliverables include computer solutions for improved clinical assessment of haematoma using NCCT. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
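    A minimal sketch of the convolution-based reoptimization idea is given below, with many simplifications: the beam model is reduced to a single error-function penumbra parameter, the detector response is an assumed 6 mm rectangular kernel, and the "measured" profile is synthesized. The point is only to show the structure: the model profile is convolved with the same detector response before being compared with the measurement, so the fitted parameter describes the underlying (unaveraged) penumbra rather than a broadened one.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erf

x = np.linspace(-20.0, 20.0, 801)                     # off-axis position (mm)

def ideal_profile(sigma, half_width=10.0):
    """Error-function field edge with penumbra parameter sigma (mm)."""
    return 0.5 * (erf((x + half_width) / (np.sqrt(2) * sigma))
                  - erf((x - half_width) / (np.sqrt(2) * sigma)))

def detector_response(width_mm):
    """Assumed rectangular response of an ionization chamber of this width."""
    kernel = (np.abs(x) <= width_mm / 2.0).astype(float)
    return kernel / kernel.sum()

def convolved_profile(sigma, width_mm):
    return np.convolve(ideal_profile(sigma), detector_response(width_mm), mode="same")

chamber_width = 6.0                                   # assumed chamber diameter (mm)
sigma_true = 3.0                                      # underlying penumbra parameter (mm)
measured = convolved_profile(sigma_true, chamber_width)   # stand-in chamber measurement

# Reoptimize the penumbra parameter so that the convolved model matches the
# measurement; both sides carry the same volume averaging.
fit = minimize_scalar(lambda s: np.sum((convolved_profile(s, chamber_width) - measured) ** 2),
                      bounds=(0.5, 10.0), method="bounded")
print(f"recovered penumbra parameter: {fit.x:.2f} mm (true {sigma_true} mm)")
```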

  1. Development of computerized stocktaking system in mine surveying for ore mineral volume calculation in covered storehouses

    NASA Astrophysics Data System (ADS)

    Valdman, V. V.; Gridnev, S. O.

    2017-10-01

    The article examines the issues involved in measuring and calculating raw stock volumes in covered storehouses at mining and processing plants. The authors present two state-of-the-art solutions: (1) the use of a ground-based laser scanning system (a method that is reasonably accurate and dependable, but costly and time-consuming, and it requires work in the storehouse to be stopped); (2) the use of a fundamentally new computerized mine-surveying stocktaking system for ore mineral volume calculation, based on digital profile images obtained by projecting the laser plane vertically onto the surface of the stored raw materials.
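    The record describes volume calculation from digital profile images of the laser plane; a minimal sketch of the underlying arithmetic is given below, assuming each profile has already been converted into heights above the storehouse floor: cross-sectional areas are integrated across each profile and then along the pile with the trapezoidal rule. The spacings and the toy prism-shaped pile are assumptions.

```python
import numpy as np

def pile_volume(profiles, profile_spacing_m, point_spacing_m):
    """Estimate stockpile volume from laser-plane height profiles.
    profiles: (n_profiles, n_points) heights above the storehouse floor (m),
    sampled every point_spacing_m across the pile; adjacent profiles are
    profile_spacing_m apart along the pile."""
    # Cross-sectional area of each profile (trapezoidal rule).
    areas = 0.5 * (profiles[:, 1:] + profiles[:, :-1]).sum(axis=1) * point_spacing_m
    # Integrate the areas along the pile, again with the trapezoidal rule.
    return 0.5 * (areas[1:] + areas[:-1]).sum() * profile_spacing_m

# Toy example: a prism-shaped pile, 2 m high, 10 m wide, 20 m long.
w = np.linspace(0.0, 10.0, 101)
section = 2.0 * (1.0 - np.abs(w - 5.0) / 5.0)       # triangular cross-section
profiles = np.tile(section, (41, 1))                # 41 profiles, 0.5 m apart
print(pile_volume(profiles, 0.5, w[1] - w[0]))      # expect ~ 10 m^2 * 20 m = 200 m^3
```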

  2. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. However, research on temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. In such cases, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular, the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy, it is possible to improve its temporal resolution and thus to increase its robustness.

  3. Anthropometric approaches and their uncertainties to assigning computational phantoms to individual patients in pediatric dosimetry studies

    NASA Astrophysics Data System (ADS)

    Whalen, Scott; Lee, Choonsik; Williams, Jonathan L.; Bolch, Wesley E.

    2008-01-01

    Current efforts to reconstruct organ doses in children undergoing diagnostic imaging or therapeutic interventions using ionizing radiation typically rely upon the use of reference anthropomorphic computational phantoms coupled to Monte Carlo radiation transport codes. These phantoms are generally matched to individual patients based upon nearest age or sometimes total body mass. In this study, we explore alternative methods of phantom-to-patient matching with the goal of identifying those methods which yield the lowest residual errors in internal organ volumes. Various thoracic and abdominal organs were segmented and organ volumes obtained from chest-abdominal-pelvic (CAP) computed tomography (CT) image sets from 38 pediatric patients ranging in age from 2 months to 15 years. The organs segmented included the skeleton, heart, kidneys, liver, lungs and spleen. For each organ, least-squares regression lines, 95th percentile confidence intervals and 95th percentile prediction intervals were established as a function of patient age, trunk volume, estimated trunk mass, trunk height, and three estimates of the ventral body cavity volume based on trunk height alone, or in combination with circumferential, width and/or breadth measurements in the mid-chest of the patient. When matching phantom to patient based upon age, residual uncertainties in organ volumes ranged from 53% (lungs) to 33% (kidneys), and when trunk mass was used (surrogate for total body mass as we did not have images of patient head, arms or legs), these uncertainties ranged from 56% (spleen) to 32% (liver). When trunk height was used as the matching parameter, residual uncertainties in organ volumes were reduced to between 21 and 29% for all organs except the spleen (40%). In the case of the lungs and skeleton, the two-fold reduction in organ volume uncertainties was seen in moving from patient age to trunk height, a parameter easily measured in the clinic. When ventral body cavity volumes were used, residual uncertainties were lowered even further to a range of between 14 and 20% for all organs except the spleen, which continued to remain at around 40%. The results of this study suggest that a more anthropometric pairing of computational phantom to individual patient based on simple measurements of trunk height and possibly mid-chest circumference or thickness (where influences of subcutaneous fat are minimized) can lead to significant reductions in organ volume uncertainties: from 40-50% (based on patient age) to between 15 and 20% (based on body cavity volumes tied to trunk height). An expanded series of non-uniform rational B-spline (NURBS) pediatric phantoms is being created at the University of Florida to allow the full application of this new approach in pediatric medical imaging studies.

  4. CRENAME, A Molecular Microbiology Method Enabling Multiparametric Assessment of Potable/Drinking Water.

    PubMed

    Bissonnette, Luc; Maheux, Andrée F; Bergeron, Michel G

    2017-01-01

    The microbial assessment of potable/drinking water is done to ensure that the resource is free of fecal contamination indicators or waterborne pathogens. Culture-based methods for verifying microbial safety are limited in the sense that a standard volume of water is generally tested for only one indicator (family) or pathogen. In this work, we describe a membrane filtration-based molecular microbiology method, CRENAME (Concentration Recovery Extraction of Nucleic Acids and Molecular Enrichment), exploiting molecular enrichment by whole genome amplification (WGA) to yield, in less than 4 h, a nucleic acid preparation that can be tested repeatedly, by real-time PCR for example, to provide multiparametric presence/absence tests (1 colony forming unit or microbial particle per standard volume of 100-1000 mL) for bacterial or protozoan parasite cells or particles that may contaminate potable/drinking water.

  5. Method and Achievement of Survey and Evaluation of Groundwater Resources of Guangzhou City

    NASA Astrophysics Data System (ADS)

    Lin, J.

    2017-12-01

    Based on the documents and achievements relevant to 1:100000 hydrogeological surveying and mapping, hydrogeological drilling, pumping tests and dynamic monitoring of the groundwater level in Guangzhou, considering the hydrogeological conditions of Guangzhou and combining advanced technologies such as remote sensing, the survey and evaluation of the volume of the groundwater resources of Guangzhou was carried out separately for plain and mountain areas. The recharge method was used to evaluate the volume of groundwater resources in plain areas; meanwhile, the output volume and the storage change volume of groundwater were calculated and the volume of groundwater resources was corrected by water balance analysis, while the discharge method was used to evaluate the volume of groundwater resources in mountain areas. The result of the survey and evaluation indicates that the volume of the natural groundwater resources in Guangzhou City is 1.83 billion m3, of which the groundwater replenishment quantity in plain areas is 510,045,000 m3, with a total output of 509,729,000 m3, an absolute balance difference of 316,000 m3 and a relative balance difference of 0.062%; the volume of groundwater resources in mountain areas is 1,358,208,000 m3, of which the river base flow is 965,054,000 m3; the volume of groundwater resources counted repeatedly in both plain and mountain areas is 38,839,000 m3. This work used refined methods for the first time to fully determine the volume and distribution of the groundwater resources of Guangzhou City, laying an important foundation for their protection and rational development and exploitation.
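    The balance figures quoted above can be checked with a few lines of arithmetic; the formulas below (absolute and relative balance difference, total resources as plain plus mountain minus the doubly counted volume) are the standard water-balance relations implied by the text.

    ```python
    # Recomputing the balance figures quoted in the abstract (values in m^3).
    plain_recharge  = 510_045_000
    plain_output    = 509_729_000
    mountain_volume = 1_358_208_000
    repeated_volume = 38_839_000   # counted in both plain and mountain areas

    absolute_balance = plain_recharge - plain_output                        # 316,000 m^3
    relative_balance = absolute_balance / plain_recharge                    # ~0.062%
    total_resources  = plain_recharge + mountain_volume - repeated_volume   # ~1.83 billion m^3

    print(f"absolute balance difference: {absolute_balance:,} m^3")
    print(f"relative balance difference: {relative_balance:.3%}")
    print(f"total natural groundwater resources: {total_resources / 1e9:.2f} billion m^3")
    ```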

  6. Estimation of error in maximal intensity projection-based internal target volume of lung tumors: a simulation and comparison study using dynamic magnetic resonance imaging.

    PubMed

    Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke

    2007-11-01

    To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (epsilon), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (nu). Maximal intensity projection-based ITAs from RedCAM were smaller than those from dMRI in both the healthy volunteer studies (epsilon = -21.64% +/- 8.23%) and the lung tumor patient studies (epsilon = -20.31% +/- 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (epsilon = -5.13 nu - 6.71, r^2 = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.

  7. Effectiveness and efficacy of minimally invasive lung volume reduction surgery for emphysema

    PubMed Central

    Pertl, Daniela; Eisenmann, Alexander; Holzer, Ulrike; Renner, Anna-Theresa; Valipour, A.

    2014-01-01

    Lung emphysema is a chronic, progressive and irreversible destruction of the lung tissue. Besides non-medical therapies and the well established medical treatment there are surgical and minimally invasive methods for lung volume reduction (LVR) to treat severe emphysema. This report deals with the effectiveness and cost-effectiveness of minimally invasive methods compared to other treatments for LVR in patients with lung emphysema. Furthermore, legal and ethical aspects are discussed. No clear benefit of minimally invasive methods compared to surgical methods can be demonstrated based on the identified and included evidence. In order to assess the different methods for LVR regarding their relative effectiveness and safety in patients with lung emphysema direct comparative studies are necessary. PMID:25295123

  8. Effectiveness and efficacy of minimally invasive lung volume reduction surgery for emphysema.

    PubMed

    Pertl, Daniela; Eisenmann, Alexander; Holzer, Ulrike; Renner, Anna-Theresa; Valipour, A

    2014-01-01

    Lung emphysema is a chronic, progressive and irreversible destruction of the lung tissue. Besides non-medical therapies and the well established medical treatment there are surgical and minimally invasive methods for lung volume reduction (LVR) to treat severe emphysema. This report deals with the effectiveness and cost-effectiveness of minimally invasive methods compared to other treatments for LVR in patients with lung emphysema. Furthermore, legal and ethical aspects are discussed. No clear benefit of minimally invasive methods compared to surgical methods can be demonstrated based on the identified and included evidence. In order to assess the different methods for LVR regarding their relative effectiveness and safety in patients with lung emphysema direct comparative studies are necessary.

  9. Vestibular schwannomas: Accuracy of tumor volume estimated by ice cream cone formula using thin-sliced MR images.

    PubMed

    Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung

    2018-01-01

    We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, indicating high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, ellipsoidal formula, and Linskey's formula significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, allow vestibular schwannoma volume to be estimated more accurately than all one-component formulas.
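    For orientation, the sketch below shows the one-component estimators named above (cuboidal, ellipsoidal, spherical) and the error metrics APE and PE against a planimetry reference. The tumor dimensions and reference volume are hypothetical, and the two-component ice cream cone formula itself is not reproduced because the abstract does not spell it out.

    ```python
    # One-component volume estimators and the error metrics (APE, PE) used to
    # compare them against planimetry. a, b, c are three orthogonal tumor
    # diameters; all numbers are hypothetical.
    import math

    a, b, c = 2.1, 1.8, 1.6          # cm, hypothetical orthogonal diameters
    v_planimetry = 3.4               # mL, hypothetical reference volume

    v_cuboidal    = a * b * c                         # treats the tumor as a box
    v_ellipsoidal = math.pi / 6 * a * b * c           # standard ellipsoid volume
    v_spherical   = math.pi / 6 * max(a, b, c) ** 3   # sphere on the largest diameter

    def pe(v_est, v_ref):
        """Percentage error (signed)."""
        return 100.0 * (v_est - v_ref) / v_ref

    def ape(v_est, v_ref):
        """Absolute percentage error."""
        return abs(pe(v_est, v_ref))

    for name, v in [("cuboidal", v_cuboidal),
                    ("ellipsoidal", v_ellipsoidal),
                    ("spherical", v_spherical)]:
        print(f"{name:12s} {v:5.2f} mL  PE {pe(v, v_planimetry):+6.1f}%  APE {ape(v, v_planimetry):5.1f}%")
    ```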

  10. SU-E-T-287: Dose Verification On the Variation of Target Volume and Organ at Risk in Preradiation Chemotherapy IMRT for Nasopharyngeal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X; Kong, L; Wang, J

    2015-06-15

    Purpose: To quantify the variation of the target volume and organs at risk in nasopharyngeal carcinoma (NPC) patients with preradiation chemotherapy, based on CT scans acquired during intensity-modulated radiotherapy (IMRT), and to recalculate the dose distribution. Methods: Seven patients with NPC and preradiation chemotherapy, treated with IMRT (35 to 37 fractions), were reviewed. Repeat CT scanning was required for all of the patients during the radiotherapy, and the number of repeat CTs varied from 2 to 6. The plan CT and the repeat CTs were generated by different CT scanners. To ensure consistency, doses were recalculated on the same IMRT plan for each CT. The real dose distribution was calculated by deformable registration and a weighted method in RayStation (v 4.5.1), with the weighting of each dose fraction based on the radiotherapy record. The volumetric and dose differences among these images were calculated for the nasopharyngeal tumor and retro-pharyngeal lymph nodes (GTV-NX), the neck lymph nodes (GTV-ND), and the parotid glands. Results: The volume variation in GTV-NX from CT1 to CT2 was 1.15±3.79%, and in GTV-ND −0.23±4.93%. The volume variation in the left parotid from CT1 to CT2 was −6.79±11.91%, and in the right parotid −3.92±8.80%. In patient 2, the left parotid volume decreased markedly; as a result, its V30 and V40 increased as well. Conclusion: The target volume of patients with NPC varied only slightly during IMRT. This shows that preradiation chemotherapy can limit target volume variation and yields good dose repeatability. Also, the decreasing parotid volume in some patients might increase the parotid dose, which might cause potential complications.

  11. The potential advantages of (18)FDG PET/CT-based target volume delineation in radiotherapy planning of head and neck cancer.

    PubMed

    Moule, Russell N; Kayani, Irfan; Moinuddin, Syed A; Meer, Khalda; Lemon, Catherine; Goodchild, Kathleen; Saunders, Michele I

    2010-11-01

    This study investigated two fixed threshold methods to delineate the target volume using (18)FDG PET/CT before and during a course of radical radiotherapy in locally advanced squamous cell carcinoma of the head and neck. Patients were enrolled into the study between March 2006 and May 2008. (18)FDG PET/CT scans were carried out 72 h prior to the start of radiotherapy and then at 10, 44 and 66 Gy. Functional volumes were delineated according to the SUV Cut Off (SUVCO) (2.5, 3.0, 3.5, and 4.0 bw g/ml) and percentage of the SUVmax (30%, 35%, 40%, 45%, and 50%) thresholds. The background (18)FDG uptake and the SUVmax within the volumes were also assessed. Primary and lymph node volumes for the eight patients reduced significantly with each increase in the delineation threshold (for example, from 2.5 to 3.0 bw g/ml SUVCO) compared to the baseline threshold at each imaging point. There was a significant reduction in the volume (p⩽0.0001-0.01) after 36 Gy compared to 0 Gy by the SUVCO method. There was a negative correlation between the SUVmax within the primary and lymph node volumes and the delivered radiation dose (p⩽0.0001-0.011) but no difference in the SUV within the background reference region. The volumes delineated by the PTSUVmax method increased with increasing delivered radiation dose after 36 Gy because the SUVmax within the region of interest used to define the edge of the volume was equal to or less than the background (18)FDG uptake and the software was unable to effectively differentiate between tumour and background uptake. The changes in the target volumes delineated by the SUVCO method were less susceptible to background (18)FDG uptake than those delineated by the PTSUVmax method and may be more helpful in radiotherapy planning. The best method and threshold have still to be determined within institutions, both nationally and internationally. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
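    A minimal sketch of the two fixed-threshold rules discussed above is given below: an absolute SUV cut-off (SUVCO) and a percentage-of-SUVmax (PTSUVmax) threshold applied to a voxel array. The SUV array and voxel size are synthetic assumptions, not study data.

    ```python
    # Sketch of the two fixed-threshold delineation rules: an absolute SUV
    # cut-off (SUVCO) and a percentage of SUVmax (PTSUVmax). The SUV array and
    # voxel size are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    suv = rng.gamma(shape=2.0, scale=0.8, size=(64, 64, 32))   # background ~1.6
    suv[24:40, 24:40, 10:22] += 6.0                            # hypothetical "tumour" region
    voxel_ml = 0.4 * 0.4 * 0.3   # cm^3 (= mL) per voxel, assuming 4 x 4 x 3 mm voxels

    def volume_suvco(suv, cutoff, voxel_ml):
        """Volume (mL) of voxels at or above an absolute SUV cut-off."""
        return np.count_nonzero(suv >= cutoff) * voxel_ml

    def volume_ptsuvmax(suv, fraction, voxel_ml):
        """Volume (mL) above a fraction of SUVmax; degrades when SUVmax nears background."""
        return np.count_nonzero(suv >= fraction * suv.max()) * voxel_ml

    for cutoff in (2.5, 3.0, 3.5, 4.0):
        print(f"SUVCO {cutoff:.1f}: {volume_suvco(suv, cutoff, voxel_ml):8.1f} mL")
    for frac in (0.30, 0.35, 0.40, 0.45, 0.50):
        print(f"{int(frac * 100)}% SUVmax: {volume_ptsuvmax(suv, frac, voxel_ml):8.1f} mL")
    ```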

  12. Greater intake of vitamins B6 and B12 spares gray matter in healthy elderly: a voxel-based morphometry study

    PubMed Central

    Erickson, Kirk I.; Suever, Barbara L.; Shaurya Prakash, Ruchika; Colcombe, Stanley J.; McAuley, Edward; Kramer, Arthur F.

    2008-01-01

    Previous studies have reported that high concentrations of homocysteine and lower concentrations of vitamins B6, B12, and folate increase the risk for cognitive decline and pathology in aging populations. In this cross-sectional study, high-resolution magnetic resonance imaging (MRI) scans and a 3-day food diary were collected on 32 community-dwelling adults between the ages of 59 and 79. We examined the relation between vitamin B6, B12, and folate intake and cortical volume using an optimized voxel-based morphometry (VBM) method and global gray and white matter volume after correcting for age, sex, body mass index, calorie intake, and education. All participants met or surpassed the recommended daily intake for these vitamins. In the VBM analysis, we found that adults with greater vitamin B6 intake had greater gray matter volume along the medial wall, anterior cingulate cortex, medial parietal cortex, middle temporal gyrus, and superior frontal gyrus, whereas people with greater B12 intake had greater volume in the left and right superior parietal sulcus. These effects were driven by vitamin supplementation and were negated when only examining vitamin intake from diet. Folate had no effect on brain volume. Furthermore, there was no relationship between vitamin B6, B12, or folate intake and global brain volume measures, indicating that VBM methods are more sensitive for detecting localized differences in gray matter volume than global measures. These results are discussed in relation to a growing literature on vitamin intake and age-related neurocognitive deterioration. PMID:18281020

  13. A quality control method for intensity-modulated radiation therapy planning based on generalized equivalent uniform dose.

    PubMed

    Pang, Haowen; Sun, Xiaoyang; Yang, Bo; Wu, Jingbo

    2018-05-01

    To ensure good quality intensity-modulated radiation therapy (IMRT) planning, we proposed the use of a quality control method based on generalized equivalent uniform dose (gEUD) that predicts absorbed radiation doses in organs at risk (OAR). We conducted a retrospective analysis of patients who underwent IMRT for the treatment of cervical carcinoma, nasopharyngeal carcinoma (NPC), or non-small cell lung cancer (NSCLC). IMRT plans were randomly divided into data acquisition and data verification groups. OAR in the data acquisition group for cervical carcinoma and NPC were further classified as sub-organs at risk (sOAR). The normalized volume of sOAR and the normalized gEUD (a = 1) were analyzed using multiple linear regression to establish a fitting formula. For NSCLC, the normalized intersection volume of the planning target volume (PTV) and lung, the maximum diameter of the PTV (left-right, anterior-posterior, and superior-inferior), and the normalized gEUD (a = 1) were analyzed using multiple linear regression to establish a fitting formula for the lung gEUD (a = 1). The r-squared and P values indicated that the fitting formula was a good fit. In the data verification group, IMRT plans were used to verify the accuracy of the fitting formula and to compare the gEUD (a = 1) for each OAR between the subjective method and the gEUD-based method. In conclusion, the gEUD-based method can be used effectively for quality control and can reduce the influence of subjective factors on IMRT planning optimization. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
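    For reference, a minimal gEUD implementation is sketched below; with a = 1 it reduces to the mean dose, which is the quantity the fitting formulas above predict. The dose values are hypothetical.

    ```python
    # Generalized equivalent uniform dose for an OAR from per-voxel doses
    # (equal voxel volumes assumed). With a = 1 the gEUD is simply the mean dose.
    import numpy as np

    def geud(dose_voxels, a):
        """gEUD = (mean(D_i^a))^(1/a); a = 1 gives the mean dose, large a emphasizes hot spots."""
        d = np.asarray(dose_voxels, dtype=float)
        return (np.mean(d ** a)) ** (1.0 / a)

    rng = np.random.default_rng(1)
    oar_dose = rng.uniform(5.0, 45.0, size=10_000)   # Gy, hypothetical OAR voxel doses

    print(f"gEUD(a=1) = {geud(oar_dose, 1):.2f} Gy  (equals mean dose {oar_dose.mean():.2f} Gy)")
    print(f"gEUD(a=8) = {geud(oar_dose, 8):.2f} Gy  (weighted toward hot spots)")
    ```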

  14. Germanium and Tin Based Anode Materials for Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Ji, Dongsheng

    The discovery of safe anode materials with high energy density for lithium-ion batteries has always been a significant topic. Group IV elements have been under intensive study for their high capability of alloying with lithium. Batteries with graphite and tin based anode materials have already been applied in cell phones and vehicles. In order to apply group IV elements, their dramatic volume change during the lithiation and delithiation processes is the key challenge to address. Reducing the particle size is the most common method to buffer the volume expansion. This strategy has been applied to both germanium and tin based materials. Germanium based anode material has been made by two different synthesis methods. The amorphous Ge-C-Ti composite material was made by a ball milling method and performed much better than other germanium alloys, including Ge-Mg and Ge-Fe. Spherical germanium nanoparticles with a diameter of around 50 nm were made by a solution method. After ball milling with graphite, the resulting product delivered a stable capacity of over 500 mAh/g for more than 20 cycles. The ball milled graphite in the composite plays an important role in buffering volume change and stabilizing the germanium. Sn-Fe alloy is one of the feasible solutions to stabilize tin. A Sn2Fe-C composite was made by a ball milling method. After optimization of the precursor ratio, reaction time, milling balls and electrolyte additives, the electrochemical performance was improved. The anode delivered 420 mAh/g at 1.0 mA/cm2 and maintained its structure after cycling at 2.0 mA/cm2. At a cycling rate of 0.3 mA/cm2, the anode delivered 978 mAh/cm3 after 500 cycles, which still exceeds the theoretical capacity of graphite.

  15. Associations between Verbal Learning Slope and Neuroimaging Markers across the Cognitive Aging Spectrum.

    PubMed

    Gifford, Katherine A; Phillips, Jeffrey S; Samuels, Lauren R; Lane, Elizabeth M; Bell, Susan P; Liu, Dandan; Hohman, Timothy J; Romano, Raymond R; Fritzsche, Laura R; Lu, Zengqi; Jefferson, Angela L

    2015-07-01

    A symptom of mild cognitive impairment (MCI) and Alzheimer's disease (AD) is a flat learning profile. Learning slope calculation methods vary, and the optimal method for capturing neuroanatomical changes associated with MCI and early AD pathology is unclear. This study cross-sectionally compared four different learning slope measures from the Rey Auditory Verbal Learning Test (simple slope, regression-based slope, two-slope method, peak slope) to structural neuroimaging markers of early AD neurodegeneration (hippocampal volume, cortical thickness in parahippocampal gyrus, precuneus, and lateral prefrontal cortex) across the cognitive aging spectrum [normal control (NC; n=198, age=76±5), MCI (n=370, age=75±7), and AD (n=171, age=76±7)] in ADNI. Within diagnostic group, general linear models related slope methods individually to neuroimaging variables, adjusting for age, sex, education, and APOE4 status. Among MCI, better learning performance on simple slope, regression-based slope, and late slope (Trials 2-5) from the two-slope method related to larger parahippocampal thickness (all p-values<.01) and hippocampal volume (p<.01). Better regression-based slope (p<.01) and late slope (p<.01) were related to larger ventrolateral prefrontal cortex in MCI. No significant associations emerged between any slope and neuroimaging variables for NC (p-values ≥.05) or AD (p-values ≥.02). Better learning performances related to larger medial temporal lobe (i.e., hippocampal volume, parahippocampal gyrus thickness) and ventrolateral prefrontal cortex in MCI only. Regression-based and late slope were most highly correlated with neuroimaging markers and explained more variance above and beyond other common memory indices, such as total learning. Simple slope may offer an acceptable alternative given its ease of calculation.
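    To make the slope definitions concrete, the sketch below computes a simple slope, a regression-based slope, and early/late slopes of a two-slope split from hypothetical RAVLT trial scores. Exact definitions vary between studies, so these are common forms rather than the ones used in ADNI.

    ```python
    # Learning slopes computed from hypothetical RAVLT trial scores
    # (words recalled on trials 1-5). Definitions vary between studies;
    # these are common forms, not necessarily the study's exact ones.
    import numpy as np

    trial_scores = np.array([5, 7, 9, 10, 12])   # hypothetical trials 1-5
    trials = np.arange(1, 6)

    simple_slope = (trial_scores[-1] - trial_scores[0]) / (len(trial_scores) - 1)
    regression_slope = np.polyfit(trials, trial_scores, deg=1)[0]   # OLS slope, words/trial

    # Two-slope method: early (trial 1 to 2) vs late (trials 2-5) learning
    early_slope = trial_scores[1] - trial_scores[0]
    late_slope = np.polyfit(trials[1:], trial_scores[1:], deg=1)[0]

    print(f"simple slope      {simple_slope:.2f} words/trial")
    print(f"regression slope  {regression_slope:.2f} words/trial")
    print(f"early / late      {early_slope:.2f} / {late_slope:.2f} words/trial")
    ```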

  16. Adult Bronchoscopy Training

    PubMed Central

    Wahidi, Momen M.; Read, Charles A.; Buckley, John D.; Addrizzo-Harris, Doreen J.; Shah, Pallav L.; Herth, Felix J. F.; de Hoyos Parra, Alberto; Ornelas, Joseph; Yarmus, Lonny; Silvestri, Gerard A.

    2015-01-01

    BACKGROUND: The determination of competency of trainees in programs performing bronchoscopy is quite variable. Some programs provide didactic lectures with hands-on supervision, other programs incorporate advanced simulation centers, whereas others have a checklist approach. Although no single method has been proven best, the variability alone suggests that outcomes are variable. Program directors and certifying bodies need guidance to create standards for training programs. Little well-developed literature on the topic exists. METHODS: To provide credible and trustworthy guidance, rigorous methodology has been applied to create this bronchoscopy consensus training statement. All panelists were vetted and approved by the CHEST Guidelines Oversight Committee. Each topic group drafted questions in a PICO (population, intervention, comparator, outcome) format. MEDLINE data through PubMed and the Cochrane Library were systematically searched. Manual searches also supplemented the searches. All gathered references were screened for consideration based on inclusion criteria, and all statements were designated as an Ungraded Consensus-Based Statement. RESULTS: We suggest that professional societies move from a volume-based certification system to skill acquisition and knowledge-based competency assessment for trainees. Bronchoscopy training programs should incorporate multiple tools, including simulation. We suggest that ongoing quality and process improvement systems be introduced and that certifying agencies move from a volume-based certification system to skill acquisition and knowledge-based competency assessment for trainees. We also suggest that assessment of skill maintenance and improvement in practice be evaluated regularly with ongoing quality and process improvement systems after initial skill acquisition. CONCLUSIONS: The current methods used for bronchoscopy competency in training programs are variable. We suggest that professional societies and certifying agencies move from a volume-based certification system to a standardized skill acquisition and knowledge-based competency assessment for pulmonary and thoracic surgery trainees. PMID:25674901

  17. Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.

    PubMed

    von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar

    2017-09-01

    Stochastically solving the rendering integral (particularly visibility) is the de-facto standard for physically-based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with an unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.

  18. Analysis on Vertical Scattering Signatures in Forestry with PolInSAR

    NASA Astrophysics Data System (ADS)

    Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen

    2014-11-01

    We apply accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for the odd-bounce, double-bounce and volume scattering. We estimate the topographic phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the pure random volume scattering proposed by Freeman-Durden, to better fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.

  19. Novel Approach to Estimate Kidney and Cyst Volumes using Mid-Slice Magnetic Resonance Images in Polycystic Kidney Disease

    PubMed Central

    Bae, Kyongtae T; Tao, Cheng; Wang, Jinhong; Kaya, Diana; Wu, Zhiyuan; Bae, Junu T; Chapman, Arlene B; Torres, Vicente E; Grantham, Jared J; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P

    2013-01-01

    Objective To evaluate whether kidney and cyst volumes can be accurately estimated based on limited area measurements from MR images of patients with autosomal dominant polycystic kidney disease (ADPKD). Materials and Methods MR coronal images of 178 ADPKD participants from the Consortium for Radiologic Imaging Studies of ADPKD (CRISP) were analyzed. For each MR image slice, we measured kidney and renal cyst areas using stereology and region-based thresholding methods, respectively. The kidney and cyst ‘observed’ volumes were calculated by summing up the area measurements of all the slices covering the kidney. To estimate the volume, we selected a coronal mid-slice in each kidney and multiplied its area by the total number of slices (‘PANK2’ for kidney and ‘PANC2’ for cyst). We then compared the kidney and cyst volumes predicted from PANK2 and PANC2, respectively, to the corresponding observed volumes, using a linear regression analysis. Results The kidney volume predicted from PANK2 correlated extremely well with the observed kidney volume: R2=0.994 for right and 0.991 for left kidney. The linear regression coefficient multiplier to PANK2 that best fit the kidney volume was 0.637 (95%CI: 0.629–0.644) for right and 0.624 (95%CI: 0.616–0.633) for left kidney. The correlation between the cyst volume predicted from PANC2 and the observed cyst volume was also very high: R2=0.984 for right and 0.967 for left kidney. The least squares linear regression coefficient for PANC2 was 0.637 (95%CI: 0.624–0.649) for right and 0.608 (95%CI: 0.591–0.625) for left kidney. Conclusion Kidney and cyst volumes can be closely approximated by multiplying the product of the mid-slice area measurement and the total number of slices in the coronal MR images of ADPKD kidneys by 0.61–0.64. This information will help save processing time needed to estimate total kidney and cyst volumes of ADPKD kidneys. PMID:24107679
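    A minimal sketch of the mid-slice estimator described above: predicted volume ≈ 0.62 × (mid-slice area × number of slices). The abstract works directly in per-slice area sums, so the explicit slice-thickness conversion below is an assumption needed to express the result in mL; the input values are hypothetical.

    ```python
    # Mid-slice volume estimator: ~0.62 x (mid-slice area x number of slices).
    # The slice-thickness conversion and all input values are assumptions used
    # only to express the result in mL.
    def midslice_volume_ml(mid_slice_area_cm2, n_slices, slice_thickness_cm, coeff=0.62):
        """Approximate kidney (or cyst) volume from a single coronal mid-slice area."""
        return coeff * mid_slice_area_cm2 * n_slices * slice_thickness_cm

    # Hypothetical kidney: 38 cm^2 mid-slice area, 30 coronal slices, 3 mm spacing
    print(f"estimated volume: {midslice_volume_ml(38.0, 30, 0.3):.0f} mL")
    ```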

  20. MR-assisted PET motion correction in simultaneous PET/MRI studies of dementia subjects.

    PubMed

    Chen, Kevin T; Salcedo, Stephanie; Chonde, Daniel B; Izquierdo-Garcia, David; Levine, Michael A; Price, Julie C; Dickerson, Bradford C; Catana, Ciprian

    2018-03-08

    Subject motion in positron emission tomography (PET) studies leads to image blurring and artifacts; simultaneously acquired magnetic resonance imaging (MRI) data provide a means for motion correction (MC) in integrated PET/MRI scanners. To assess the effect of realistic head motion and MR-based MC on static [18F]-fluorodeoxyglucose (FDG) PET images in dementia patients. Observational study. Thirty dementia subjects were recruited. 3T hybrid PET/MR scanner where EPI-based and T1-weighted sequences were acquired simultaneously with the PET data. Head motion parameters estimated from high temporal resolution MR volumes were used for PET MC. The MR-based MC method was compared to PET frame-based MC methods in which motion parameters were estimated by coregistering 5-minute frames before and after accounting for the attenuation-emission mismatch. The relative changes in standardized uptake value ratios (SUVRs) between the PET volumes processed with the various MC methods, without MC, and the PET volumes with simulated motion were compared in relevant brain regions. The absolute value of the regional SUVR relative change was assessed with pairwise paired t-tests at the P = 0.05 level, comparing the values obtained through different MR-based MC processing methods as well as across different motion groups. The intraregion voxelwise variability of regional SUVRs obtained through different MR-based MC processing methods was also assessed with pairwise paired t-tests at the P = 0.05 level. MC had a greater impact on PET data quantification in subjects with larger amplitude motion (higher than 18% in the medial orbitofrontal cortex), and greater changes were generally observed for the MR-based MC method compared to the frame-based methods. Furthermore, a mean relative change of ∼4% was observed after MC even at the group level, suggesting the importance of routinely applying this correction. The intraregion voxelwise variability of regional SUVRs was also decreased using MR-based MC. All comparisons were significant at the P = 0.05 level. Incorporating temporally correlated MR data to account for intraframe motion has a positive impact on FDG PET image quality and data quantification in dementia patients. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency between two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels that are incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
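    For context, the sketch below implements the standard four-neighbour Partial Volume (PV) interpolation of the joint histogram, in 2D, and the mutual information computed from it; it illustrates the scheme the paper builds on, not the authors' proposed kernel. The images and shift are synthetic.

    ```python
    # 2D sketch of Partial Volume (PV) interpolation for the joint histogram and
    # the mutual information computed from it. Illustrates the standard
    # four-neighbour PV scheme, not the paper's new kernel.
    import numpy as np

    def joint_hist_pv(ref, flt, shift, bins=32):
        """Joint histogram of ref and flt translated by a subpixel shift (dy, dx):
        each reference sample spreads fractional counts over the four neighbouring
        floating-image intensities, weighted bilinearly."""
        h = np.zeros((bins, bins))
        ref_bin = np.clip((ref / ref.max() * (bins - 1)).astype(int), 0, bins - 1)
        flt_bin = np.clip((flt / flt.max() * (bins - 1)).astype(int), 0, bins - 1)
        dy, dx = shift
        H, W = ref.shape
        for y in range(H):
            for x in range(W):
                fy, fx = y + dy, x + dx
                y0, x0 = int(np.floor(fy)), int(np.floor(fx))
                wy, wx = fy - y0, fx - x0
                for oy, ox, w in ((0, 0, (1 - wy) * (1 - wx)), (0, 1, (1 - wy) * wx),
                                  (1, 0, wy * (1 - wx)),       (1, 1, wy * wx)):
                    yy, xx = y0 + oy, x0 + ox
                    if 0 <= yy < H and 0 <= xx < W:
                        h[ref_bin[y, x], flt_bin[yy, xx]] += w
        return h

    def mutual_information(h):
        p = h / h.sum()
        px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(2)
    ref = rng.random((48, 48))
    print(f"MI at subpixel shift (0, 0.3): "
          f"{mutual_information(joint_hist_pv(ref, ref, (0.0, 0.3))):.3f}")
    ```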

  2. Using high-order polynomial basis in 3-D EM forward modeling based on volume integral equation method

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Kuvshinov, Alexey

    2018-05-01

    3-D interpretation of electromagnetic (EM) data of different origin and scale is becoming common practice worldwide. However, 3-D EM numerical simulations (modeling), a key part of any 3-D EM data analysis, with realistic levels of complexity, accuracy and spatial detail still remain challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with a corresponding speed increase and memory saving.

  3. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    PubMed

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

    The diffusion and perfusion magnetic resonance (MR) images can provide functional information about the tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of the auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
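    The overall idea can be sketched as follows: one fuzzy membership map per functional modality, fused into a tumour-likelihood map and thresholded. The membership functions and the fusion operator used here (simple linear ramps and an average) are stand-ins, since the abstract does not specify the authors' exact choices, and the input volumes are random toy data.

    ```python
    # Sketch of the fuzzy feature-fusion idea: one membership map per functional
    # modality (ADC, FA, rCBV), fused into a tumour-likelihood map and thresholded.
    # Ramp memberships and an averaging fusion are stand-ins for the paper's models.
    import numpy as np

    def ramp_membership(img, lo, hi, increasing=True):
        """Piecewise-linear fuzzy membership in [0, 1] between intensities lo and hi."""
        m = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
        return m if increasing else 1.0 - m

    rng = np.random.default_rng(3)
    shape = (64, 64)
    adc, fa, rcbv = rng.random(shape), rng.random(shape), rng.random(shape)  # toy volumes

    memberships = [
        ramp_membership(adc, 0.4, 0.8, increasing=True),     # e.g. elevated ADC
        ramp_membership(fa, 0.2, 0.6, increasing=False),      # e.g. reduced FA
        ramp_membership(rcbv, 0.5, 0.9, increasing=True),     # e.g. elevated rCBV
    ]
    fused = np.mean(memberships, axis=0)      # fuzzy fusion (averaging operator)
    tumour_mask = fused > 0.7                 # high-possibility tumour region

    print(f"voxels flagged as tumour: {tumour_mask.sum()} of {tumour_mask.size}")
    ```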

  4. Method modification of the Legipid® Legionella fast detection test kit.

    PubMed

    Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez

    2014-01-01

    Legipid(®) Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme-immunoassay (CEIA) for the detection of Legionella in water. The test is based on the use of anti-Legionella antibodies immobilized on magnetic microspheres. The target microorganism is preconcentrated by filtration. Immunomagnetic analysis is applied to these preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method(SM) (PTM) No. 111101 in a PTM validation which certifies the performance claims of the test method in comparison to the ISO reference method 11731-1998 and the revision 11731-2004 "Water Quality: Detection and Enumeration of Legionella pneumophila" in potable water, industrial water, and waste water. The modification of this test kit has been approved. The modification includes extending the target analyte from L. pneumophila to Legionella species and adding an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine their reactivity with the kit based on CEIA. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by the reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading. A methods comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrixes were analyzed. Results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. Results showed that the Legipid Legionella Fast Detection test is equivalent to the reference culture method for the enumeration of Legionella spp.

  5. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    NASA Astrophysics Data System (ADS)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has strongly increased in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important to analyze renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  6. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data.

    PubMed

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-21

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has strongly increased in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important to analyze renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  7. Theoretical distribution of gutta-percha within root canals filled using cold lateral compaction based on numeric calculus.

    PubMed

    Min, Yi; Song, Ying; Gao, Yuan; Dummer, Paul M H

    2016-08-01

    This study aimed to present a new method based on numeric calculus to provide data on the theoretical volume ratio of voids when using the cold lateral compaction technique in canals with various diameters and tapers. Twenty-one simulated mathematical root canal models were created with different tapers and sizes of apical diameter, and were filled with defined sizes of standardized accessory gutta-percha cones. The areas of each master and accessory gutta-percha cone as well as the depth of their insertion into the canals were determined mathematically in Microsoft Excel. When the first accessory gutta-percha cone had been positioned, the residual area of void was measured. The areas of the residual voids were then measured repeatedly upon insertion of additional accessory cones until no more could be inserted in the canal. The volume ratio of voids was calculated through measurement of the volume of the root canal and the mass of the gutta-percha cones. The theoretical volume ratio of voids was influenced by the taper of the canal, the size of the apical preparation and the size of the accessory gutta-percha cones. A greater apical preparation size and larger taper, together with the use of smaller accessory cones, reduced the volume ratio of voids in the apical third. The mathematical model provided a precise method to determine the theoretical volume ratio of voids in root-filled canals when using cold lateral compaction.
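    The final calculation can be illustrated as below: the void volume ratio follows from the canal volume and the mass of gutta-percha placed in it, converted to a volume through its density. The density and the input values are hypothetical placeholders, not figures from the paper.

    ```python
    # Void volume ratio from canal volume and gutta-percha mass. The density and
    # all input values are hypothetical placeholders, not figures from the paper.
    def void_volume_ratio(canal_volume_mm3, gp_mass_mg, gp_density_mg_per_mm3=1.7):
        """Fraction of the canal volume not occupied by gutta-percha."""
        gp_volume_mm3 = gp_mass_mg / gp_density_mg_per_mm3
        return 1.0 - gp_volume_mm3 / canal_volume_mm3

    # Hypothetical canal of 12 mm^3 filled with 18 mg of gutta-percha
    print(f"void ratio: {void_volume_ratio(12.0, 18.0):.1%}")
    ```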

  8. Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context.

    PubMed

    Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-01-01

    To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques, such as support vector machines (SVM), have recently been presented to segment subcortical structures on magnetic resonance images (MRI). An SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity value (IIV) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD) and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIV configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm(3), and the value for the best-performing IIV configuration was 0.85 cm(3), representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented methods to segment the brainstem, not only in volume similarity metrics, but also in segmentation time. Preliminary results showed that the approach might be promising for adoption in clinical use.
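    The reported agreement metrics are straightforward to compute on binary masks; a minimal sketch is given below, with synthetic masks and voxel size standing in for the study data.

    ```python
    # Dice similarity coefficient, absolute volume difference and percentage
    # volume difference on binary masks. Masks and voxel size are synthetic.
    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def volume_cm3(mask, voxel_mm3):
        return mask.sum() * voxel_mm3 / 1000.0

    auto = np.zeros((40, 40, 40), dtype=bool)
    auto[10:30, 10:30, 10:30] = True
    manual = np.zeros_like(auto)
    manual[10:31, 10:30, 10:30] = True
    voxel_mm3 = 1.0 * 1.0 * 1.0

    v_auto, v_man = volume_cm3(auto, voxel_mm3), volume_cm3(manual, voxel_mm3)
    print(f"DSC  {dice(auto, manual):.3f}")
    print(f"AVD  {abs(v_auto - v_man):.2f} cm^3  ({100 * abs(v_auto - v_man) / v_man:.1f}%)")
    ```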

  9. Essential Mathematics for the Physical Sciences; Volume I: Homogeneous boundary value problems, Fourier methods, and special functions

    NASA Astrophysics Data System (ADS)

    Borden, Brett; Luscombe, James

    2017-10-01

    Physics is expressed in the language of mathematics; it is deeply ingrained in how physics is taught and how it's practiced. A study of the mathematics used in science is thus a sound intellectual investment for training as scientists and engineers. This first volume of two is centered on methods of solving partial differential equations and the special functions introduced. This text is based on a course offered at the Naval Postgraduate School (NPS) and while produced for NPS needs, it will serve other universities well.

  10. Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1994-01-01

    The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9 mean actuation material volume ratio, the minimum cost was obtained.

  11. A replacement for islet equivalents with improved reliability and validity.

    PubMed

    Huang, Han-Hung; Ramachandran, Karthik; Stehno-Bittel, Lisa

    2013-10-01

    Islet equivalent (IE), the standard estimate of isolated islet volume, is an essential measure to determine the amount of transplanted islet tissue in the clinic and is used in research laboratories to normalize results, yet it is based on the false assumption that all islets are spherical. Here, we developed and tested a new easy-to-use method to quantify islet volume with greater accuracy. Isolated rat islets were dissociated into single cells, and the total cell number per islet was determined by using computer-assisted cytometry. Based on the cell number per islet, we created a regression model to convert islet diameter to cell number, with a high R2 value (0.8) and good validity and reliability; the same model was applicable to young and old rats and to males and females. Conventional IE measurements overestimated the tissue volume of islets. To compare results obtained using IE or our new method, we compared Glut2 protein levels determined by Western blot and proinsulin content via ELISA between small (diameter≤100 μm) and large (diameter≥200 μm) islets. When normalized by IE, large islets showed significantly lower Glut2 levels and proinsulin content. However, when normalized by cell number, large and small islets had no difference in Glut2 levels, but large islets contained more proinsulin. In conclusion, normalizing islet volume by IE overestimated the tissue volume, which may lead to erroneous results. Normalizing by cell number is a more accurate method to quantify tissue amounts used in islet transplantation and research.

  12. Plexiform neurofibroma tissue classification

    NASA Astrophysics Data System (ADS)

    Weizman, L.; Hoch, L.; Ben Sira, L.; Joskowicz, L.; Pratt, L.; Constantini, S.; Ben Bashat, D.

    2011-03-01

    Plexiform Neurofibroma (PN) is a major complication of NeuroFibromatosis-1 (NF1), a common genetic disease involving the nervous system. PNs are peripheral nerve sheath tumors extending along the length of the nerve in various parts of the body. Treatment decisions are based on tumor volume assessment using MRI, which is currently time-consuming and error-prone, with limited semi-automatic segmentation support. We present in this paper a new method for the segmentation and tumor mass quantification of PN from STIR MRI scans. The method starts with a user-based delineation of the tumor area in a single slice and automatically detects the PN lesions in the entire image based on the tumor connectivity. Experimental results on seven datasets yield a mean volume overlap difference of 25% as compared to manual segmentation by an expert radiologist, with a mean computation and interaction time of 12 minutes vs. over an hour for manual annotation. Since the user interaction in the segmentation process is minimal, our method has the potential to successfully become part of the clinical workflow.

  13. Caries-removal effectiveness of a papain-based chemo-mechanical agent: A quantitative micro-CT study.

    PubMed

    Neves, Aline A; Lourenço, Roseane A; Alves, Haimon D; Lopes, Ricardo T; Primo, Laura G

    2015-01-01

    The aim of this study was to assess the effectiveness and specificity of a papain-based chemo-mechanical caries-removal agent in providing minimum residual caries after cavity preparation. To do so, extracted carious molars were selected and scanned in a micro-CT before and after caries-removal procedures with the papain-based gel. Similar parameters for acquisition and reconstruction of the image stacks were used between the scans. After classification of the dentin substrate based on mineral density intervals and establishment of a carious tissue threshold, volumetric parameters related to the effectiveness (mineral density of the removed dentin volume and residual dentin tissue) and specificity (relation between carious dentin in the removed volume and initial caries) of this caries-removal agent were obtained. In general, the removed dentin volume was similar to or higher than the initial carious volume, indicating that the method was able to effectively remove dentin tissue. Samples with an almost perfect accuracy in carious dentin removal also showed an increased removal of caries-affected tissue. In contrast, less or no affected dentin was removed in samples where some carious tissue was left in residual dentin. Mineral density values in residual dentin were always higher than or similar to the threshold for mineral density values in carious dentin. In conclusion, the papain-based gel was effective in removing carious dentin up to a conservative in vitro threshold. Lesion characteristics, such as the activity and morphology of the enamel lesion, may also influence the caries-removal properties of the method. © Wiley Periodicals, Inc.

  14. Improved estimation of parametric images of cerebral glucose metabolic rate from dynamic FDG-PET using volume-wise principal component analysis

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoqian; Tian, Jie; Chen, Zhe

    2010-03-01

    Parametric images can represent both the spatial distribution and quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, the dynamic PET data are first properly pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot itself separate signal from noise. Secondly, volume-wise PCA is applied to the PET data. The signal can be mostly represented by the first few principal components (PCs), while the noise is left in the subsequent PCs. The noise-reduced data are then obtained from the first few PCs by applying the inverse PCA, and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original data set. Finally, the obtained new data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
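    A rough sketch of the volume-wise PCA denoising step is given below: the dynamic frames are arranged as a voxels-by-frames matrix, pre-transformed (simplified here to per-frame standardization, an assumption), decomposed by PCA via the SVD, truncated to the first few components, and transformed back. Synthetic TTAC-like curves stand in for real dynamic FDG-PET data.

    ```python
    # PCA-based denoising of dynamic PET frames (voxels x frames matrix).
    # The pre-transformation is simplified to per-frame standardization, an
    # assumption; the data are synthetic TTAC-like curves.
    import numpy as np

    rng = np.random.default_rng(4)
    n_voxels, n_frames, k_keep = 2000, 24, 3
    t = np.linspace(0.5, 60, n_frames)                        # minutes
    clean = np.outer(rng.uniform(0.5, 2.0, n_voxels), 1 - np.exp(-0.1 * t))
    noisy = clean + rng.normal(scale=0.2, size=clean.shape)   # noisy TTACs

    # Pre-transformation: standardize each frame (column), remembering the scaling
    mu, sd = noisy.mean(axis=0), noisy.std(axis=0)
    z = (noisy - mu) / sd

    # PCA via SVD; keep only the first k_keep principal components
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    z_denoised = (u[:, :k_keep] * s[:k_keep]) @ vt[:k_keep]

    # Back-transform to the original scale
    denoised = z_denoised * sd + mu

    rmse = lambda x: np.sqrt(np.mean((x - clean) ** 2))
    print(f"RMSE before PCA denoising: {rmse(noisy):.3f}, after: {rmse(denoised):.3f}")
    ```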

  15. Method and apparatus for modeling interactions

    DOEpatents

    Xavier, Patrick G.

    2002-01-01

    The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks of previous methods. The method of the present invention comprises representing two bodies undergoing translations by two swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention is more robust and allows faster modeling than previous methods.

  16. Landmark-guided diffeomorphic demons algorithm and its application to automatic segmentation of the whole spine and pelvis in CT images.

    PubMed

    Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni; Shimizu, Akinobu

    2017-03-01

    A fully automatic multiatlas-based method for segmentation of the spine and pelvis in a torso CT volume is proposed. A novel landmark-guided diffeomorphic demons algorithm is used to register a given CT image to multiple atlas volumes. This algorithm can utilize both grayscale image information and given landmark coordinate information optimally. The segmentation has four steps. Firstly, 170 bony landmarks are detected in the given volume. Using these landmark positions, an atlas selection procedure is performed to reduce the computational cost of the following registration. Then the chosen atlas volumes are registered to the given CT image. Finally, voxelwise label voting is performed to determine the final segmentation result. The proposed method was evaluated using 50 torso CT datasets as well as the public SpineWeb dataset. As a result, a mean distance error of [Formula: see text] and a mean Dice coefficient of [Formula: see text] were achieved for the whole spine and the pelvic bones, which are competitive with other state-of-the-art methods. From the experimental results, the usefulness of the proposed segmentation method was validated.

  17. Predictive models for subtypes of autism spectrum disorder based on single-nucleotide polymorphisms and magnetic resonance imaging.

    PubMed

    Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H

    2011-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goal is: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.

  18. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to limited spatial resolution, the partial volume effect has been a major factor degrading quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with a focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include the perturbation geometry transfer matrix (pGTM); pGTM followed by multi-target correction (MTC); pGTM with known concentration in the blood pool; the latter followed by MTC; and our newly proposed methods, which apply MTC iteratively, re-estimating and updating the mean values in all regions from the MTC-corrected images at each iteration. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of the PVC methods at both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods provide superior performance to the other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity, even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low-count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.

  19. A new gas dilution method for measuring body volume.

    PubMed Central

    Nagao, N; Tamaki, K; Kuchiki, T; Nagao, M

    1995-01-01

    This study was designed to examine the validity of a new gas dilution method (GD) for measuring human body volume and to compare its accuracy with the results obtained by the underwater weighing method (UW). We measured the volumes of plastic bottles and of 16 subjects (including two females), aged 18-42 years, with each method. For the bottles, the volume measured by hydrostatic weighing was correlated highly (r = 1.000) with that measured by the new gas dilution method. For the subjects, the body volume determined by the two methods was significantly correlated (r = 0.998). However, the subjects' volumes measured by the gas dilution method were significantly larger than those measured by the underwater weighing method. There was a significant correlation (r = 0.806) between the difference (GD volume - UW volume) and the body mass index (BMI), so that UW volume could be predicted from GD volume and BMI. It can be concluded that the new gas dilution method offers promising possibilities for future research in populations who cannot be submerged underwater. PMID:7551760
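
    The reported link between the GD-UW difference and BMI suggests a simple corrective regression; the sketch below fits UW volume as a linear function of GD volume and BMI by least squares on made-up numbers, purely to illustrate the idea rather than reproduce the study's data.

    ```python
    import numpy as np

    # Hypothetical measurements: gas-dilution volume (L), BMI (kg/m^2), underwater volume (L).
    gd  = np.array([58.1, 62.4, 70.3, 55.0, 66.8])
    bmi = np.array([21.5, 23.0, 27.2, 20.1, 24.8])
    uw  = np.array([57.2, 61.0, 68.0, 54.3, 65.1])

    # Design matrix [1, GD, BMI]; solve UW ~ b0 + b1*GD + b2*BMI.
    A = np.column_stack([np.ones_like(gd), gd, bmi])
    coef, *_ = np.linalg.lstsq(A, uw, rcond=None)
    print("coefficients:", coef)
    print("predicted UW volumes:", A @ coef)
    ```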

  20. Application of Hydrogel in Reconstruction Surgery: Hydrogel/Fat Graft Complex Filler for Volume Reconstruction in Critical Sized Muscle Defects.

    PubMed

    Lui, Y F; Ip, W Y

    2016-01-01

    Autogenic fat grafts usually suffer from degeneration and volume shrinkage in volume reconstruction applications. How to maintain graft viability and graft volume is an essential consideration in reconstruction therapies. In this investigation, a new fat graft transplantation method was developed, aiming to improve long-term graft viability and the volume reconstruction effect by incorporation of a hydrogel. The harvested fat graft is dissociated into small fragments and incorporated into a collagen-based hydrogel to form a hydrogel/fat graft complex for volume reconstruction. In vitro results indicate that the collagen-based hydrogel can significantly improve the survivability of cells inside the isolated graft. In a 6-month investigation in an artificially created defect model, this hydrogel/fat graft complex filler demonstrated the ability to promote fat pad formation inside the targeted defect area. The newly generated fat pad covered the whole defect and restored its original dimensions at the 6-month time point. Compared with simple fat transplantation, this hydrogel/fat graft complex system provides a marked improvement in long-term volume restoration against degeneration and volume shrinkage. One notable effect is the continuous proliferation of adipose tissue throughout the 6-month period. In summary, the hydrogel/fat graft system presented in this investigation demonstrated a better volume reconstruction effect in large-volume defects than simple fat transplantation.

  1. Online blind source separation using incremental nonnegative matrix factorization with volume constraint.

    PubMed

    Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei

    2011-04-01

    Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential for separating correlated sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for solving online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
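
    For context, the classical multiplicative NMF update (Lee-Seung, Euclidean cost) is sketched below; the paper's incremental, volume-constrained variant additionally penalizes the volume of the mixing matrix and processes samples online, which is not reproduced here.

    ```python
    import numpy as np

    def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
        """Plain multiplicative-update NMF: V ~ W @ H with nonnegative W, H."""
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, rank))
        H = rng.random((rank, m))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update sources
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing matrix
        return W, H

    V = np.abs(np.random.default_rng(1).random((20, 50)))
    W, H = nmf_multiplicative(V, rank=3)
    print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```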

  2. Correlation Characterization of Particles in Volume Based on Peak-to-Basement Ratio

    PubMed Central

    Vovk, Tatiana A.; Petrov, Nikolay V.

    2017-01-01

    We propose a new express method for the correlation characterization of particles suspended in the volume of an optically transparent medium. It utilizes an inline digital holography technique to obtain two images of adjacent layers from the investigated volume, with subsequent matching of the peak-to-basement ratio of the cross-correlation function calculated for these images. After preliminary calibration via numerical simulation, the proposed method allows one to quickly determine the parameters of the particle distribution and evaluate the particle concentration. The experimental verification was carried out for two types of physical suspensions. Our method can be applied in environmental and biological research, including analysis tools in flow cytometry devices and express characterization of particles and biological cells in air and water media, as well as in various technical tasks, e.g., the study of scattering objects or the rapid determination of cutting-tool conditions in mechanisms. PMID:28252020
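
    A minimal sketch of the peak-to-basement idea is given below, assuming the "basement" is taken as the mean magnitude of the cross-correlation away from its peak; the exact definition and calibration used in the paper may differ.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def peak_to_basement(img_a, img_b, exclude=5):
        """Cross-correlate two layer images and return peak / mean-background ratio."""
        a = img_a - img_a.mean()
        b = img_b - img_b.mean()
        xcorr = fftconvolve(a, b[::-1, ::-1], mode="full")    # 2-D cross-correlation
        r, c = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        mask = np.ones_like(xcorr, dtype=bool)                # exclude a window around the peak
        mask[max(r - exclude, 0):r + exclude + 1, max(c - exclude, 0):c + exclude + 1] = False
        return xcorr[r, c] / np.abs(xcorr[mask]).mean()

    rng = np.random.default_rng(0)
    layer1 = rng.random((128, 128))
    layer2 = np.roll(layer1, (3, -2), axis=(0, 1)) + 0.1 * rng.random((128, 128))
    print(peak_to_basement(layer1, layer2))
    ```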

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Benjamin M., E-mail: bmwhite@mednet.ucla.edu; Lamb, James M.; Low, Daniel A.

    Purpose: To characterize radiation therapy patient breathing patterns based on measured external surrogate information. Methods: Breathing surrogate data were collected during 4DCT from a cohort of 50 patients including 28 patients with lung cancer and 22 patients without lung cancer. A spirometer and an abdominal pneumatic bellows were used as the surrogates. The relationship between these measurements was assumed to be linear within a small phase difference. The signals were correlated and drift corrected using a previously published method to convert the signal into tidal volume. The airflow was calculated with a first-order time derivative of the tidal volume using a window centered on the point of interest and with a window length equal to the CT gantry rotation period. The airflow was compared against the tidal volume to create ellipsoidal patterns that were binned into 25 ml × 25 ml/s bins to determine the relative amount of time spent in each bin. To calculate the variability of the maximum inhalation tidal volume within a free-breathing scan timeframe, a metric based on percentile volume ratios was defined. The free breathing variability metric (κ) was defined as the ratio between extreme inhalation tidal volumes (defined as >93 tidal volume percentile of the measured tidal volume) and normal inhalation tidal volume (defined as >80 tidal volume percentile of the measured tidal volume). Results: There were three observed types of volume-flow curves, labeled Types 1, 2, and 3. Type 1 patients spent a greater duration of time during exhalation with κ = 1.37 ± 0.11. Type 2 patients had equal time duration spent during inhalation and exhalation with κ = 1.28 ± 0.09. The differences between the mean peak exhalation to peak inhalation tidal volume, breathing period, and the 85th tidal volume percentile for Type 1 and Type 2 patients were statistically significant at the 2% significance level. The difference between κ and the 98th tidal volume percentile for Type 1 and Type 2 patients was found to be statistically significant at the 1% significance level. Three patients did not display a breathing stability curve that could be classified as Type 1 or Type 2 due to chaotic breathing patterns. These patients were classified as Type 3 patients. Conclusions: Based on an observed volume-flow curve pattern, the cohort of 50 patients was divided into three categories called Type 1, Type 2, and Type 3. There were statistically significant differences in breathing characteristics between Type 1 and Type 2 patients. The use of volume-flow curves to classify patients has been demonstrated as a physiological characterization metric that has the potential to optimize gating windows in radiation therapy.
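
    The free-breathing variability metric can be written down directly from its definition; the sketch below computes a κ-like ratio from a sampled tidal-volume trace, assuming it is the mean tidal volume above the 93rd percentile divided by the mean above the 80th percentile (the authors' exact aggregation may differ), using a synthetic breathing signal.

    ```python
    import numpy as np

    def breathing_variability_kappa(tidal_volume, hi=93, lo=80):
        """Ratio of extreme to normal inhalation tidal volume (interpretation assumed)."""
        v = np.asarray(tidal_volume)
        extreme = v[v > np.percentile(v, hi)].mean()
        normal = v[v > np.percentile(v, lo)].mean()
        return extreme / normal

    # Synthetic tidal-volume trace (litres): ~0.25 Hz breathing plus noise.
    t = np.linspace(0, 300, 3000)
    tv = 0.5 + 0.25 * np.sin(2 * np.pi * 0.25 * t) \
             + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    print(breathing_variability_kappa(tv))
    ```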

  4. Stable Artificial Dissipation Operators for Finite Volume Schemes on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Svard, Magnus; Gong, Jing; Nordstrom, Jan

    2006-01-01

    Our objective is to derive stable first-, second- and fourth-order artificial dissipation operators for node-based finite volume schemes. Of particular interest are general unstructured grids where the strength of the finite volume method is fully utilized. A commonly used finite volume approximation of the Laplacian will be the basis in the construction of the artificial dissipation. Both a homogeneous dissipation acting in all directions with equal strength and a modification that allows different amounts of dissipation in different directions are derived. Stability and accuracy of the new operators are proved and the theoretical results are supported by numerical computations.
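
    As a rough, generic illustration (not the operators proved stable in the paper), second- and fourth-order artificial dissipation on a node-based grid can be built from a graph Laplacian assembled over the faces; the edge weighting, scaling with the local flow and boundary treatment used by the authors are omitted here.

    ```python
    import numpy as np

    def artificial_dissipation(u, edges, eps2=0.1, eps4=0.01):
        """Second- plus fourth-order dissipation for nodal values on an unstructured graph.

        u     : nodal solution values, shape (n_nodes,)
        edges : iterable of (i, j) node pairs that share a face
        """
        n = u.size
        L = np.zeros((n, n))                    # unweighted graph Laplacian
        for i, j in edges:
            L[i, i] += 1.0
            L[j, j] += 1.0
            L[i, j] -= 1.0
            L[j, i] -= 1.0
        d2 = -eps2 * (L @ u)                    # homogeneous second-order term
        d4 = -eps4 * (L @ (L @ u))              # fourth-order term (Laplacian of Laplacian)
        return d2 + d4

    u = np.array([0.0, 1.0, 0.0, 1.0, 0.0])     # oscillatory mode to be damped
    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
    print(artificial_dissipation(u, edges))
    ```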

  5. Fabricating biomedical origami: a state-of-the-art review

    PubMed Central

    Johnson, Meredith; Chen, Yue; Hovet, Sierra; Xu, Sheng; Wood, Bradford; Ren, Hongliang; Tokuda, Junichi; Tse, Zion Tsz Ho

    2018-01-01

    Purpose Origami-based biomedical device design is an emerging technology due to its ability to be deployed from a minimal foldable pattern to a larger volume. This paper aims to review state-of-the-art origami structures applied in the medical device field. Methods Publications and reports of origami structure related to medical device design from the past 10 years are reviewed and categorized according to engineering specifications, including the application field, fabrication material, size/volume, deployment method, manufacturability, and advantages. Results This paper presents an overview of the biomedical applications of devices based on origami structures, including disposable sterilization covers, cardiac catheterization, stent grafts, encapsulation and microsurgery, gastrointestinal microsurgery, laparoscopic surgical grippers, microgrippers, microfluidic devices, and drug delivery. Challenges in terms of materials and fabrication, assembly, modeling and computation design, and clinical adoptability are discussed at the end of this paper to provide guidance for future origami-based design in the medical device field. Conclusion Concepts from origami can be used to design and develop novel medical devices. Origami-based medical device design is currently progressing, with researchers improving design methods, materials, fabrication techniques, and folding efficiency. PMID:28260164

  6. Automatic and manual segmentation of healthy retinas using high-definition optical coherence tomography.

    PubMed

    Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe

    2011-03-01

    This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between the automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided similar results to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.

  7. A simplified method to recover urinary vesicles for clinical applications, and sample banking.

    PubMed

    Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry

    2014-12-23

    Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hamper their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g., for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference of soluble proteins or vesicle loss. Large volumes of urine were concentrated up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. Vesicle fractions were found suitable for a range of applications, including RNA analysis. In terms of yield, our hydrostatic filtration dialysis system outperforms conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method is well suited for laboratories working with urinary vesicles and for biobanking.

  8. A Simplified Method to Recover Urinary Vesicles for Clinical Applications, and Sample Banking

    PubMed Central

    Musante, Luca; Tataruch, Dorota; Gu, Dongfeng; Benito-Martin, Alberto; Calzaferri, Giulio; Aherne, Sinead; Holthofer, Harry

    2014-01-01

    Urinary extracellular vesicles provide a novel source of valuable biomarkers for kidney and urogenital diseases. Current isolation protocols include laborious, sequential centrifugation steps, which hamper their widespread research and clinical use. Furthermore, when large individual urine sample volumes or sizable target cohorts are to be processed (e.g., for biobanking), storage capacity is an additional problem. Thus, alternative methods are necessary to overcome such limitations. We have developed a practical vesicle isolation technique that yields easily manageable sample volumes in an exceptionally cost-efficient way, to facilitate their full utilization in less privileged environments and maximize the benefit of biobanking. Urinary vesicles were isolated by hydrostatic dialysis with minimal interference of soluble proteins or vesicle loss. Large volumes of urine were concentrated up to 1/100 of the original volume, and the dialysis step allowed equalization of urine physico-chemical characteristics. Vesicle fractions were found suitable for a range of applications, including RNA analysis. In terms of yield, our hydrostatic filtration dialysis system outperforms conventional ultracentrifugation-based methods, and the labour-intensive and potentially hazardous ultracentrifugation steps are eliminated. Likewise, the need for trained laboratory personnel and heavy initial investment is avoided. Thus, our method is well suited for laboratories working with urinary vesicles and for biobanking. PMID:25532487

  9. A computational method for sharp interface advection

    PubMed Central

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  10. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of gully erosion 2D surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations and to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages:
    - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability)
    - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method
    - Quantification of the volume error uncertainty as a function of the key factors
    In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy, given the cross-sectional variability of a gully and the measurement method applied. Reference: Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
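
    To make the stochastic experiment concrete, the sketch below runs a toy Monte Carlo in Python (the original algorithm was written in Matlab®): synthetic cross-sectional area profiles stand in for gully reaches, sparse noisy cross-sections stand in for the 2D survey, and the percentage volume error is accumulated over trials. Profile shapes, spacing and noise levels are arbitrary illustrations, not the study's settings.

    ```python
    import numpy as np

    def trapezoid(y, x):
        """Simple trapezoidal integration (kept explicit for clarity)."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def simulate_volume_errors(n_trials=1000, reach_len=100.0, survey_spacing=10.0,
                               meas_cv=0.05, seed=0):
        """Monte Carlo sketch of the % volume error of a cross-section-based 2D survey."""
        rng = np.random.default_rng(seed)
        x_fine = np.linspace(0.0, reach_len, 1001)
        errors = []
        for _ in range(n_trials):
            # Synthetic cross-sectional area profile (m^2): smooth trend plus variability.
            area = 2.0 + 0.5 * np.sin(2 * np.pi * x_fine / reach_len * rng.integers(1, 5)) \
                       + 0.3 * rng.standard_normal() * np.cos(2 * np.pi * x_fine / 25.0)
            area = np.clip(area, 0.1, None)
            true_vol = trapezoid(area, x_fine)
            # Sparse survey with multiplicative measurement error on each cross-section.
            x_s = np.arange(0.0, reach_len + 1e-9, survey_spacing)
            a_s = np.interp(x_s, x_fine, area) * (1 + meas_cv * rng.standard_normal(x_s.size))
            errors.append(100.0 * (trapezoid(a_s, x_s) - true_vol) / true_vol)
        return np.array(errors)

    err = simulate_volume_errors()
    print("mean error %:", err.mean(), " std %:", err.std())
    ```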

  11. Influence of thermal expansion on shrinkage during photopolymerization of dental resins based on bis-GMA/TEGDMA.

    PubMed

    Mucci, Veronica; Arenas, Gustavo; Duchowicz, Ricardo; Cook, Wayne D; Vallo, Claudia

    2009-01-01

    The aim of this study was to assess volume changes that occur during photopolymerization of unfilled dental resins based on bis-GMA/TEGDMA. The resins were activated for visible-light polymerization by the addition of camphorquinone (CQ) in combination with dimethylamino ethylmethacrylate (DMAEMA) or ethyl-4-dimethyl aminobenzoate (EDMAB). A fibre-optic sensing method based on a Fizeau-type interferometric scheme was employed for monitoring contraction during photopolymerization. Measurements were carried out on 10 mm diameter specimens of different thicknesses (1 and 2 mm). The highly exothermic nature of the polymerization resulted in volume expansion during heating, and this effect was more pronounced as the sample thickness increased. Two approaches to assess volume changes due to thermal effects are presented. Due to the difference in thermal expansion coefficients between the rubbery and glassy resins, the increase in volume due to thermal expansion was greater than the decrease in volume due to thermal contraction. As a result, the volume of the vitrified resins was greater than that calculated from polymerization contraction. The observed trends of shrinkage versus sample thickness are explained in terms of light attenuation across the path length during photopolymerization. Results obtained in this research highlight the inherent interlinking of non-isothermal photopolymerization and volumetric changes in bulk polymerizing systems.

  12. Timber Volume and Biomass Estimates in Central Siberia from Satellite Data

    NASA Technical Reports Server (NTRS)

    Ranson, K. Jon; Kimes, Daniel S.; Kharuk, Vyetcheslav I.

    2007-01-01

    Mapping of boreal forest type, structural parameters and biomass is critical for understanding the boreal forest's significance in the carbon cycle and its response to and impact on global climate change. The biggest deficiency of the existing ground-based forest inventories is the uncertainty in the inventory data, particularly in remote areas of Siberia where sampling is sparse, lacking, and often decades old. Remote sensing methods can help overcome these problems. In this joint US and Russian study, we used the Moderate Resolution Imaging Spectroradiometer (MODIS) and unique waveform data from the Geoscience Laser Altimeter System (GLAS) to produce a map of timber volume for a 10° × 12° area in Central Siberia. Using these methods, the mean timber volume for the forested area within the total study area was 203 m³/ha. The new remote sensing methods used in this study provide a truly independent estimate of forest structure, one that does not depend on traditional ground forest inventory methods.

  13. Age estimation by assessment of pulp chamber volume: a Bayesian network for the evaluation of dental evidence.

    PubMed

    Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma

    2017-11-14

    The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure for the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of the method was investigated in terms of accuracy and bias of the decisional outcomes. The influence of an informed elicitation of the prior belief about chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were adequate for the standard requirements of forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. Outcomes of the sensitivity analysis showed that estimation results are improved by a rational elicitation of the prior probabilities of age.
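
    In spirit, the evaluation boils down to updating a prior over chronological age with the likelihood of the observed pulp-chamber volume. The discretized toy update below (Gaussian likelihood, arbitrary age-volume model and parameters) is only meant to convey that structure; it is not the Bayesian network actually used in the study.

    ```python
    import numpy as np
    from scipy.stats import norm

    ages = np.arange(15, 81)                        # candidate chronological ages (years)
    prior = np.full(ages.size, 1.0 / ages.size)     # uniform prior (an informed one could be used)

    def posterior_age(observed_volume_mm3, prior=prior):
        # Hypothetical model: mean pulp-chamber volume shrinks linearly with age.
        mean_vol = 60.0 - 0.5 * (ages - 15)
        likelihood = norm.pdf(observed_volume_mm3, loc=mean_vol, scale=8.0)
        post = likelihood * prior
        return post / post.sum()

    post = posterior_age(35.0)
    print("MAP age estimate:", ages[np.argmax(post)])
    ```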

  14. A panning DLT procedure for three-dimensional videography.

    PubMed

    Yu, B; Koh, T J; Hay, J G

    1993-06-01

    The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records. This method has some major shortcomings when used to analyze events which take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method, and making use of panning cameras, was developed. Several small single control volumes were combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can then be estimated from arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 × 2.44 × 2.44 m³) were used to test the effect of the position of the single control volume on the accuracy of the computed three-dimensional space coordinates. Linear and quadratic calibration equations were used to test the effect of the order of the equation on the accuracy of the computed three-dimensional space coordinates. For four of the five single control volumes tested, the mean resultant errors associated with the use of the linear calibration equation were significantly larger than those associated with the use of the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in the computed three-dimensional coordinates when the quadratic calibration equation was used. Under the same data collection conditions, the mean resultant errors in the computed three-dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected. The method also has potential for use in a wide variety of contexts. The major shortcoming of the method is the large amount of digitizing necessary to calibrate the total control volume. Adaptations of the method to reduce the amount of digitizing required are being explored.
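
    The core DLT reconstruction step, recovering a 3D point from the 11 parameters of two or more cameras and the digitized image coordinates, reduces to a linear least-squares problem; in the panning variant each camera's 11 parameters would first be evaluated from its (linear or quadratic) calibration equation at the current orientation. The sketch below shows only that reconstruction step, with hypothetical inputs.

    ```python
    import numpy as np

    def dlt_reconstruct(dlt_params, image_points):
        """Reconstruct one 3D point from two or more calibrated cameras.

        dlt_params  : (n_cams, 11) array of DLT parameters L1..L11 per camera
        image_points: (n_cams, 2) array of digitized (u, v) image coordinates
        """
        rows, rhs = [], []
        for L, (u, v) in zip(dlt_params, image_points):
            # From u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1), and similarly for v.
            rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
            rhs.append(u - L[3])
            rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
            rhs.append(v - L[7])
        xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
        return xyz
    ```

    For a panning camera, the dlt_params for each video frame would be re-computed from the fitted calibration equations at that frame's camera orientation before calling this function.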

  15. An effective method to screen sodium-based layered materials for sodium ion batteries

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Zhang, Zihe; Yao, Sai; Chen, An; Zhao, Xudong; Zhou, Zhen

    2018-03-01

    Due to the high cost and limited resources of lithium, sodium-ion batteries are widely investigated for large-scale applications. Typically, insertion-type materials possess better cyclic stability than alloy-type and conversion-type ones. Therefore, in this work, we proposed a facile and effective method to screen sodium-based layered materials, based on the Materials Project database, as potential candidate insertion-type materials for sodium-ion batteries. The obtained Na-based layered materials span 38 different space groups, which indicates that the credibility of our screening approach is not affected by the space group. Then, some important indices of the representative materials, including the average voltage, volume change and sodium-ion mobility, were further studied by means of density functional theory computations. Some materials with extremely low volume changes and Na diffusion barriers are promising candidates for sodium-ion batteries. We believe that our classification algorithm could also be used to search for other alkali and multivalent ion-based layered materials, to accelerate the development of battery materials.
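
    Of the screening indices mentioned, the average intercalation voltage is a standard quantity obtained from DFT total energies. The helper below is a generic sketch of that textbook relation with made-up energies; it is not the authors' code.

    ```python
    def average_voltage(E_host_x1, E_host_x2, x1, x2, E_na_metal):
        """Average voltage between sodium contents x1 < x2 (energies in eV per formula unit).

        V = -[E(Na_x2 Host) - E(Na_x1 Host) - (x2 - x1) * E(Na metal)] / (x2 - x1)
        One electron is transferred per Na, so the result is directly in volts.
        """
        return -(E_host_x2 - E_host_x1 - (x2 - x1) * E_na_metal) / (x2 - x1)

    # Hypothetical numbers for illustration only.
    print(average_voltage(E_host_x1=-100.0, E_host_x2=-103.5, x1=0.0, x2=1.0,
                          E_na_metal=-1.3))   # -> 2.2 V
    ```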

  16. Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.

    2016-02-01

    In this study, a bivariate hydrologic risk framework is proposed through coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models, and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to the risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume remains constant for flood volumes less than 1.0 × 10⁵ m³/s·day, but shows a significant decreasing trend for flood volumes larger than 1.7 × 10⁵ m³/s·day; and (ii) the bivariate risk for flood peak-duration does not change significantly for flood durations less than 8 days, and then decreases significantly as the duration becomes larger. The probability density functions (pdfs) of the flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first mode decreasing and that of the second mode increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.
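
    To make the joint-return-period idea concrete, the sketch below uses a Clayton copula (chosen only because its CDF has a closed form) as a stand-in for whichever copula family was actually fitted; u and v are the marginal non-exceedance probabilities of flood peak and volume, and mu is the mean interarrival time of flood events in years.

    ```python
    def clayton_cdf(u, v, theta):
        """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
        return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

    def joint_return_periods(u, v, theta, mu=1.0):
        """'OR' and 'AND' joint return periods for exceeding the two quantile levels."""
        C = clayton_cdf(u, v, theta)
        t_or = mu / (1.0 - C)                # peak OR volume exceeded
        t_and = mu / (1.0 - u - v + C)       # peak AND volume exceeded
        return t_or, t_and

    # Example: the 0.99 quantiles of flood peak and volume with a hypothetical theta = 2.
    print(joint_return_periods(0.99, 0.99, theta=2.0))
    ```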

  17. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    PubMed

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
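
    Point-counting volume estimation follows the Cavalieri principle: the summed point counts times the area represented by one grid point times the distance between sampled slices. A generic sketch (not the authors' implementation, and with invented numbers) is shown below.

    ```python
    def stereology_volume(point_counts, area_per_point_mm2, slice_spacing_mm):
        """Cavalieri estimator: V = t * (a/p) * sum(P_i), returned in mm^3.

        point_counts      : points hitting the target tissue in each sampled slice
        area_per_point_mm2: grid area associated with one test point (a/p)
        slice_spacing_mm  : distance between consecutive sampled slices (t)
        """
        return slice_spacing_mm * area_per_point_mm2 * sum(point_counts)

    # Hypothetical survey: five sampled slices 40 mm apart, a/p = 150 mm^2 per point.
    print(stereology_volume([12, 18, 21, 17, 9], area_per_point_mm2=150.0,
                            slice_spacing_mm=40.0))   # -> 462000 mm^3 = 462 cm^3
    ```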

  18. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (the carotid is referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolutions, clinical sites, and pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (approximately 0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested, and results are shown from multiple cases.

  19. Accuracy of predicted haemoglobin concentration on cardiopulmonary bypass in paediatric cardiac surgery: effect of different formulae for estimating patient blood volume.

    PubMed

    Redlin, Matthias; Boettcher, Wolfgang; Dehmel, Frank; Cho, Mi-Young; Kukucka, Marian; Habazettl, Helmut

    2017-11-01

    When applying a blood-conserving approach in paediatric cardiac surgery with the aim of reducing the transfusion of homologous blood products, the decision to use blood or blood-free priming of the cardiopulmonary bypass (CPB) circuit is often based on the predicted haemoglobin concentration (Hb) as derived from the pre-CPB Hb, the prime volume and the estimated blood volume. We assessed the accuracy of this approach and whether it may be improved by using more sophisticated methods of estimating the blood volume. Data from 522 paediatric cardiac surgery patients treated with CPB with blood-free priming in a 2-year period from May 2013 to May 2015 were collected. Inclusion criteria were body weight <15 kg and available Hb data immediately prior to and after the onset of CPB. The Hb on CPB was predicted according to Fick's principle from the pre-CPB Hb, the prime volume and the patient blood volume. Linear regression analyses and Bland-Altman plots were used to assess the accuracy of the Hb prediction. Different methods to estimate the blood volume were assessed and compared. The initial Hb on CPB correlated well with the predicted Hb (R² = 0.87, p < 0.001). A Bland-Altman plot revealed little bias at 0.07 g/dL and limits of agreement from -1.35 to 1.48 g/dL. More sophisticated methods of estimating blood volume from lean body mass did not improve the Hb prediction, but rather increased bias. Hb prediction is reasonably accurate, with the best result obtained with the simplest method of estimating the blood volume at 80 mL/kg body weight. When deciding for or against blood-free priming, caution is necessary when the predicted Hb lies within ±2 g/dL of the transfusion trigger.
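
    The prediction itself is a simple dilution calculation; the sketch below uses the paper's best-performing assumption of an estimated blood volume of 80 mL/kg, with the example numbers being invented.

    ```python
    def predicted_hb_on_cpb(pre_cpb_hb_g_dl, weight_kg, prime_volume_ml,
                            blood_volume_ml_per_kg=80.0):
        """Predicted Hb after haemodilution by a blood-free CPB prime volume."""
        ebv = blood_volume_ml_per_kg * weight_kg       # estimated blood volume (mL)
        return pre_cpb_hb_g_dl * ebv / (ebv + prime_volume_ml)

    # Example: 8 kg infant, pre-CPB Hb of 12 g/dL, 300 mL blood-free prime.
    print(predicted_hb_on_cpb(12.0, 8.0, 300.0))       # about 8.2 g/dL
    ```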

  20. Atlas based brain volumetry: How to distinguish regional volume changes due to biological or physiological effects from inherent noise of the methodology.

    PubMed

    Opfer, Roland; Suppa, Per; Kepp, Timo; Spies, Lothar; Schippling, Sven; Huppertz, Hans-Jürgen

    2016-05-01

    Fully automated regional brain volumetry based on structural magnetic resonance imaging (MRI) plays an important role in quantitative neuroimaging. In clinical trials as well as in clinical routine, multiple MRIs of individual patients at different time points need to be assessed longitudinally. Measures of inter- and intrascanner variability are crucial to understand the intrinsic variability of the method and to distinguish volume changes due to biological or physiological effects from inherent noise of the methodology. To measure regional brain volumes, an atlas-based volumetry (ABV) approach was deployed using a highly elastic registration framework and an anatomical atlas in a well-defined template space. We assessed inter- and intrascanner variability of the method in 51 cognitively normal subjects and 27 Alzheimer dementia (AD) patients from the Alzheimer's Disease Neuroimaging Initiative by studying volumetric results of repeated scans for 17 compartments and brain regions. Median percentage volume differences of scan-rescan pairs from the same scanner ranged from 0.24% (whole brain parenchyma in healthy subjects) to 1.73% (occipital lobe white matter in AD), with generally higher differences in AD patients as compared to normal subjects (e.g., 1.01% vs. 0.78% for the hippocampus). Minimum percentage volume differences detectable with an error probability of 5% were in the single-digit percentage range for almost all structures investigated, with most of them being below 5%. Intrascanner variability was independent of magnetic field strength. The median interscanner variability was up to ten times higher than the intrascanner variability. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Evaluation of the effect of localized skin cooling on nasal airway volume by acoustic rhinometry.

    PubMed

    Yamagiwa, M; Hilberg, O; Pedersen, O F; Lundqvist, G R

    1990-04-01

    Ten healthy subjects (four men and six women) underwent localized skin cooling by submersion of both feet and, in another experiment, of one hand and forearm in ice-cold water for 5 min. Repeated measurements of nasal cavity volumes by a new method, acoustic rhinometry, showed characteristic patterns ranging from marked increases in volume lasting the entire exposure period, to transient monophasic or biphasic responses, to no change at all. The pattern in individual subjects was reproducible with the two methods of cooling, and it could be characterized by five types when related to baseline measurements during the preexposure period. Because of large minute-to-minute variations, probably determined by local differences and fluctuations in blood flow in the tissues of the nose, evaluation of induced changes in nasal cavity volume cannot be based on single measurements, as has frequently been done in the past using rhinomanometry as the experimental method. The mechanisms behind the characteristic patterns of the immediate human nasal response to a local skin-cooling challenge remain to be explored.

  2. Direct biomechanical modeling of trabecular bone using a nonlinear manifold-based volumetric representation

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.

    2017-03-01

    Osteoporosis is associated with increased fracture risk. Recent advancements in in vivo imaging allow segmentation of trabecular bone (TB) microstructure, which is a known key determinant of bone strength and fracture risk. Accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. The method also generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied to the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM computed using micro-CT-based conventional finite-element analysis over the same VOI was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.

  3. Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings

    PubMed Central

    Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.

    2015-01-01

    In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736

  4. A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of the BioSAR Campaign

    NASA Astrophysics Data System (ADS)

    Dehnavi, S.; Maghsoudi, Y.

    2015-12-01

    Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper aims to evaluate a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is, first, to describe each interferometric cross-correlation as a sum of contributions corresponding to single-bounce, double-bounce and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Second, the canopy height is estimated by a phase-differencing method, according to the RVOG (Random Volume Over Ground) concept. The applied model-based decomposition method has the advantage that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes this method useful for different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.

  5. Measurement of complex joint trajectories using slice-to-volume 2D/3D registration and cine MR

    NASA Astrophysics Data System (ADS)

    Bloch, C.; Figl, M.; Gendrin, C.; Weber, C.; Unger, E.; Aldrian, S.; Birkfellner, W.

    2010-02-01

    A method for studying the in vivo kinematics of complex joints is presented. It is based on automatic fusion of single-slice cine MR images capturing the dynamics and a static MR volume. With the joint at rest, the 3D scan is taken. In these data, the anatomical compartments are identified and segmented, resulting in a 3D volume of each individual part. In each of the cine MR images, the joint parts are segmented, and their pose and position are derived using a 2D/3D slice-to-volume registration to these volumes. The method is tested on the carpal joint because of its complexity and the small but complex motion of its compartments. For a first study, a human cadaver hand was scanned and the method was evaluated with artificially generated slice images. Starting from random initial positions of about 5 mm translational and 12° rotational deviation, 70-90% of the registrations converged successfully to a deviation better than 0.5 mm and 5°. First evaluations using real data from cine MR were promising. The feasibility of the method was demonstrated. However, we experienced difficulties with the segmentation of the cine MR images. We therefore plan to examine different parameters for the image acquisition in future studies.

  6. Systems for Lung Volume Standardization during Static and Dynamic MDCT-based Quantitative Assessment of Pulmonary Structure and Function

    PubMed Central

    Fuld, Matthew K.; Grout, Randall; Guo, Junfeng; Morgan, John H.; Hoffman, Eric A.

    2013-01-01

    Rationale and Objectives: Multidetector-row Computed Tomography (MDCT) has emerged as a tool for quantitative assessment of parenchymal destruction, air trapping (density metrics) and airway remodeling (metrics relating airway wall and lumen geometry) in chronic obstructive pulmonary disease (COPD) and asthma. Critical to the accuracy and interpretability of these MDCT-derived metrics is the assurance that the lungs are scanned during a breath-hold at a standardized volume. Materials and Methods: A computer-monitored, turbine-based flow meter system was developed to control patient breath-holds and facilitate static imaging at fixed percentages of the vital capacity. Due to calibration challenges with gas density changes during multi-breath xenon-CT, an alternative system was required. The design incorporated dual rolling-seal pistons. Both systems were tested in a laboratory environment and in human subject trials. Results: The turbine-based system successfully controlled lung volumes in 32/37 subjects, showing a linear relationship for CT-measured air volume between repeated scans: for all scans, the mean and confidence interval of the differences (scan1-scan2) was −9 ml (−169, 151); for TLC alone, 6 ml (−164, 177); for FRC alone, −23 ml (−172, 126). The dual-piston system successfully controlled lung volume in 31/41 subjects. Study failures related largely to subject non-compliance with verbal instruction and gas leaks around the mouthpiece. Conclusion: We demonstrate the successful use of a turbine-based system for static lung volume control and demonstrate its inadequacies for dynamic xenon-CT studies. Implementation of a dual rolling-seal spirometer has been shown to adequately control lung volume for multi-breath wash-in xenon-CT studies. These systems, coupled with proper patient coaching, provide the tools for the use of CT to quantitate regional lung structure and function. The wash-in xenon-CT method for assessing regional lung function, while not necessarily practical for routine clinical studies, provides a dynamic protocol against which newly emerging single-breath, dual-energy xenon-CT measures can be validated. PMID:22555001

  7. Quantitative vibro-acoustography of tissue-like objects by measurement of resonant modes

    NASA Astrophysics Data System (ADS)

    Mazumder, Dibbyan; Umesh, Sharath; Mohan Vasu, Ram; Roy, Debasish; Kanhirodan, Rajan; Asokan, Sundarrajan

    2017-01-01

    We demonstrate a simple and computationally efficient method to recover the shear modulus pertaining to the focal volume of an ultrasound transducer from the measured vibro-acoustic spectral peaks. A model that explains the transport of local deformation information with the acoustic wave acting as a carrier is put forth. It is also shown that the peaks correspond to the natural frequencies of vibration of the focal volume, which may be readily computed by solving an eigenvalue problem associated with the vibrating region. Having measured the first natural frequency with a fibre Bragg grating sensor, and armed with an expedient means of computing the same, we demonstrate a simple procedure, based on the method of bisection, to recover the average shear modulus of the object in the ultrasound focal volume. We demonstrate this recovery for four homogeneous agarose slabs of different stiffness and verify the accuracy of the recovery using independent rheometer-based measurements. Extension of the method to anisotropic samples through the measurement of a more complete set of resonant modes and the recovery of an elasticity tensor distribution, as is done in resonant ultrasound spectroscopy, is suggested.
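
    The recovery step reduces to a one-dimensional root search: the trial shear modulus is adjusted by bisection until the computed first natural frequency matches the measured one. The sketch below uses a deliberately simplified surrogate forward model (f1 proportional to sqrt(G/rho)); in the paper the frequency comes from an eigenvalue analysis of the focal volume.

    ```python
    import math

    def recover_shear_modulus(f1_measured_hz, forward_model, g_lo=1e2, g_hi=1e6,
                              tol_hz=1e-3, max_iter=100):
        """Bisection on G such that forward_model(G) matches the measured first frequency.

        forward_model(G) must be monotonically increasing in G.
        """
        for _ in range(max_iter):
            g_mid = 0.5 * (g_lo + g_hi)
            if forward_model(g_mid) < f1_measured_hz:
                g_lo = g_mid
            else:
                g_hi = g_mid
            if abs(forward_model(g_mid) - f1_measured_hz) < tol_hz:
                break
        return 0.5 * (g_lo + g_hi)

    # Surrogate forward model (assumption): f1 = C * sqrt(G / rho) for a fixed geometry.
    rho, C = 1050.0, 25.0                    # tissue density (kg/m^3), geometry constant
    forward = lambda G: C * math.sqrt(G / rho)
    print(recover_shear_modulus(200.0, forward))   # recovered G in Pa for this toy model
    ```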

  8. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method that incorporates information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implants in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning-based techniques using steerable features are applied for robust boundary detection. This enables handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving the segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region.

  9. Cross-sectional and longitudinal evaluation of liver volume and total liver fat burden in adults with nonalcoholic steatohepatitis

    PubMed Central

    Tang, An; Chen, Joshua; Le, Thuy-Anh; Changchien, Christopher; Hamilton, Gavin; Middleton, Michael S.; Loomba, Rohit; Sirlin, Claude B.

    2014-01-01

    Purpose: To explore the cross-sectional and longitudinal relationships between fractional liver fat content, liver volume, and total liver fat burden. Methods: In 43 adults with non-alcoholic steatohepatitis participating in a clinical trial, liver volume was estimated by segmentation of magnitude-based low-flip-angle multiecho GRE images. The liver mean proton density fat fraction (PDFF) was calculated. The total liver fat index (TLFI) was estimated as the product of liver mean PDFF and liver volume. Linear regression analyses were performed. Results: Cross-sectional analyses revealed statistically significant relationships between TLFI and liver mean PDFF (R² = 0.740 baseline/0.791 follow-up, P < 0.001 baseline/P < 0.001 follow-up), and between TLFI and liver volume (R² = 0.352/0.452, P < 0.001/< 0.001). Longitudinal analyses revealed statistically significant relationships between liver volume change and liver mean PDFF change (R² = 0.556, P < 0.001), between TLFI change and liver mean PDFF change (R² = 0.920, P < 0.001), and between TLFI change and liver volume change (R² = 0.735, P < 0.001). Conclusion: Liver segmentation in combination with MRI-based PDFF estimation may be used to monitor liver volume, liver mean PDFF, and TLFI in a clinical trial. PMID:25015398

  10. Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.

    2000-06-01

    The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (SD ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 to 252.1 cc per kidney, and the mean ratio of right to left kidney volume was 0.96 (SD ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
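
    Once the kidney voxels are identified, the parenchymal volume is just a scaled voxel count; the short sketch below illustrates that final step with an enhancement window, where the array names, threshold range and toy data are illustrative rather than the system's model-driven rules.

    ```python
    import numpy as np

    def parenchymal_volume_cc(ct_hu, kidney_mask, voxel_size_mm, hu_range=(70, 300)):
        """Sum voxels inside the kidney mask whose enhancement falls within hu_range."""
        lo, hi = hu_range
        parenchyma = kidney_mask & (ct_hu >= lo) & (ct_hu <= hi)
        voxel_cc = float(np.prod(voxel_size_mm)) / 1000.0   # mm^3 per voxel -> cc
        return parenchyma.sum() * voxel_cc

    rng = np.random.default_rng(3)
    ct = rng.integers(-100, 400, size=(40, 64, 64))         # toy contrast-enhanced CT (HU)
    mask = np.zeros_like(ct, dtype=bool)
    mask[10:30, 20:44, 20:44] = True                        # pretend this region is one kidney
    print(parenchymal_volume_cc(ct, mask, voxel_size_mm=(2.5, 0.7, 0.7)))
    ```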

  11. Beach disturbance caused by off-road vehicles (ORVs) on sandy shores: relationship with traffic volumes and a new method to quantify impacts using image-based data acquisition and analysis.

    PubMed

    Schlacher, Thomas A; Morrison, Jennifer M

    2008-09-01

    Vehicles cause environmental damage on sandy beaches, including physical displacement and compaction of the sediment. Such physical habitat disturbance provides a relatively simple indicator of ORV-related impacts that is potentially useful in monitoring the efficacy of beach traffic management interventions; such interventions also require data on the relationship between traffic volumes and the resulting levels of impact. Here we determined how the extent of beach disturbance is linked to traffic volumes and tested the utility of image-based data acquisition to monitor beach surfaces. Experimental traffic application resulted in disturbance effects ranging from 15% of the intertidal zone being rutted after 10 vehicle passes to 85% after 100 passes. A new camera platform, specifically designed for beach surveys, was field tested and the resulting image-based data compared with traditional line-intercept methods and in situ measurements using quadrats. All techniques gave similar results in terms of quantifying the relationship between traffic intensity and beach disturbance. However, the physical in situ measurements using quadrats generally produced higher estimates (+4.68%) than photographs taken with the camera platform and analyzed off-site. Image-based methods can be more costly, but in politically and socially sensitive monitoring applications, such as ORV use on sandy beaches, they are superior in providing unbiased and permanent records of environmental conditions in relation to anthropogenic pressures.

  12. The anisotropic Hooke's law for cancellous bone and wood.

    PubMed

    Yang, G; Kabel, J; van Rietbergen, B; Odgaard, A; Huiskes, R; Cowin, S C

    A method of data analysis for a set of elastic constant measurements is applied to databases for wood and cancellous bone. For these materials the identification of the type of elastic symmetry is complicated by the variable composition of the material. The data analysis method permits the identification of the type of elastic symmetry to be accomplished independently of the examination of the variable composition. This method of analysis may be applied to any set of elastic constant measurements, but is illustrated here by application to hardwoods and softwoods, and to an extraordinary database of cancellous bone elastic constants. The solid volume fraction or bulk density is the compositional variable for the elastic constants of these natural materials. The final results are the solid volume fraction dependent orthotropic Hooke's law for cancellous bone and a bulk density dependent one for hardwoods and softwoods.
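
    For reference, the orthotropic form of Hooke's law referred to above can be written in compliance (strain-stress) form as below; in the cited work the nine engineering constants would themselves be expressed as functions of solid volume fraction (for bone) or bulk density (for wood), which is not shown here.

```latex
\begin{bmatrix} \varepsilon_{11}\\ \varepsilon_{22}\\ \varepsilon_{33}\\ 2\varepsilon_{23}\\ 2\varepsilon_{13}\\ 2\varepsilon_{12} \end{bmatrix}
=
\begin{bmatrix}
 1/E_1 & -\nu_{21}/E_2 & -\nu_{31}/E_3 & 0 & 0 & 0\\
-\nu_{12}/E_1 & 1/E_2 & -\nu_{32}/E_3 & 0 & 0 & 0\\
-\nu_{13}/E_1 & -\nu_{23}/E_2 & 1/E_3 & 0 & 0 & 0\\
 0 & 0 & 0 & 1/G_{23} & 0 & 0\\
 0 & 0 & 0 & 0 & 1/G_{13} & 0\\
 0 & 0 & 0 & 0 & 0 & 1/G_{12}
\end{bmatrix}
\begin{bmatrix} \sigma_{11}\\ \sigma_{22}\\ \sigma_{33}\\ \sigma_{23}\\ \sigma_{13}\\ \sigma_{12} \end{bmatrix}
```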

  13. Effective 2D-3D medical image registration using Support Vector Machine.

    PubMed

    Qi, Wenyuan; Gu, Lixu; Zhao, Qiang

    2008-01-01

    Registration of a pre-operative 3D volume dataset with intra-operative 2D images is gradually becoming an important technique for assisting radiologists in diagnosing complicated diseases easily and quickly. In this paper, we propose a novel 2D/3D registration framework based on the Support Vector Machine (SVM) to avoid the cost of generating a large number of DRR images during the intra-operative stage. An estimated similarity metric distribution is built from the relationship between the transform parameters and prior sparse target metric values by means of support vector regression (SVR). Based on this learned distribution, globally optimal transform parameters are then searched by an optimizer in order to align the 3D volume dataset with the intra-operative 2D image. Experiments reveal that the proposed registration method improves performance compared to the conventional registration method and also provides precise registration results efficiently.
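
    The core idea, learning a continuous estimate of the similarity metric from sparse samples so the optimizer does not need a new DRR at every iteration, can be sketched with a generic support vector regression; the transform parameterization, similarity surface, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical sparse samples: rigid transform parameters (tx, ty, rz) and a
# similarity metric measured against a few pre-computed DRRs.
params = rng.uniform(-10, 10, size=(200, 3))
metric = -np.sum(params**2, axis=1) + rng.normal(0, 0.5, size=200)  # stand-in surface

# Learn a continuous estimate of the similarity metric distribution
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(params, metric)

# An optimizer can now query the learned surface instead of rendering new DRRs
candidate = np.array([[1.0, -2.0, 0.5]])
print(model.predict(candidate))
```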

  14. Three-body unitarity in the finite volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, M.; Döring, M.

    The physical interpretation of lattice QCD simulations, performed in a small volume, requires an extrapolation to the infinite volume. A method is proposed to perform such an extrapolation for three interacting particles at energies above threshold. For this, a recently formulated relativistic $3 \to 3$ amplitude based on the isobar formulation is adapted to the finite volume. The guiding principle is two- and three-body unitarity, which imposes the imaginary parts of the amplitude in the infinite volume. In turn, these imaginary parts dictate the leading power-law finite-volume effects. It is demonstrated that finite-volume poles arising from the singular interaction, from the external two-body sub-amplitudes, and from the disconnected topology cancel exactly, leaving only the genuine three-body eigenvalues. Lastly, the corresponding quantization condition is derived for the case of three identical scalar-isoscalar particles and its numerical implementation is demonstrated.

  15. Three-body unitarity in the finite volume

    DOE PAGES

    Mai, M.; Döring, M.

    2017-12-18

    The physical interpretation of lattice QCD simulations, performed in a small volume, requires an extrapolation to the infinite volume. A method is proposed to perform such an extrapolation for three interacting particles at energies above threshold. For this, a recently formulated relativistic $3 \to 3$ amplitude based on the isobar formulation is adapted to the finite volume. The guiding principle is two- and three-body unitarity, which imposes the imaginary parts of the amplitude in the infinite volume. In turn, these imaginary parts dictate the leading power-law finite-volume effects. It is demonstrated that finite-volume poles arising from the singular interaction, from the external two-body sub-amplitudes, and from the disconnected topology cancel exactly, leaving only the genuine three-body eigenvalues. Lastly, the corresponding quantization condition is derived for the case of three identical scalar-isoscalar particles and its numerical implementation is demonstrated.

  16. Numerical simulation of convective heat transfer of nonhomogeneous nanofluid using Buongiorno model

    NASA Astrophysics Data System (ADS)

    Sayyar, Ramin Onsor; Saghafian, Mohsen

    2017-08-01

    The aim is to assess the flow and convective heat transfer of laminar developing flow of an Al2O3-water nanofluid inside a vertical tube. A finite volume procedure on a structured grid was used to solve the governing partial differential equations. The adopted model (the Buongiorno model) assumes that the nanofluid is a mixture of a base fluid and nanoparticles, with the relative motion caused by Brownian motion and thermophoretic diffusion. The results showed that the distribution of nanoparticles remained almost uniform except in a region near the hot wall, where the nanoparticle volume fraction was reduced as a result of thermophoresis. The simulation results also indicated that there is an optimal nanoparticle volume fraction of about 1-2% at each Reynolds number for which the maximum performance evaluation criterion is obtained. The difference between the Nusselt number and nondimensional pressure drop calculated with the two-phase model and those calculated with the single-phase model was less than 5% at all nanoparticle volume fractions and can be neglected. In natural convection, for a 4% nanoparticle volume fraction, more than 15% enhancement of the Nusselt number was achieved at Gr = 10, but at Gr = 300 it was less than 1%.
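
    To illustrate the finite volume method on a structured grid in its simplest setting, here is a minimal Python sketch for steady 1D heat conduction with fixed-temperature walls; it is not the coupled Buongiorno nanofluid model solved in the paper, and all values are arbitrary.

```python
import numpy as np

# 1D steady diffusion d/dx(k dT/dx) = 0, finite-volume discretization,
# Dirichlet boundaries applied through half-cell boundary coefficients.
n, L, k = 50, 1.0, 0.6
dx = L / n
T_left, T_right = 300.0, 350.0

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    aW = aE = k / dx
    if i == 0:                      # west boundary cell: wall is half a cell away
        aW = 2.0 * k / dx
        b[i] += aW * T_left
        A[i, i + 1] = -aE
    elif i == n - 1:                # east boundary cell
        aE = 2.0 * k / dx
        b[i] += aE * T_right
        A[i, i - 1] = -aW
    else:
        A[i, i - 1] = -aW
        A[i, i + 1] = -aE
    A[i, i] = aW + aE

T = np.linalg.solve(A, b)           # should vary linearly between the wall values
print(T[0], T[-1])
```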

  17. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    PubMed Central

    Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long

    2012-01-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
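
    A minimal sketch of the cubic-interpolation step is shown below, assuming hypothetical lung volumes at three imaged states; scipy's CubicSpline provides a C1-continuous (in fact C2) volume-time curve, and its derivative gives an instantaneous flow rate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical total lung volumes at three imaged states over one cycle
t_images = np.array([0.0, 2.0, 4.0])   # seconds
v_images = np.array([2.8, 3.6, 4.9])   # litres

spline = CubicSpline(t_images, v_images)
t = np.linspace(0.0, 4.0, 81)
volume = spline(t)                      # smooth, C1-continuous volume curve
flow = spline(t, 1)                     # first derivative: flow rate (L/s)
print(volume[:3], flow[:3])
```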

  18. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Moore, John; Nicholson, Stephen; Moore, Joan G.

    1986-01-01

    The development of a computational capability to handle viscous flow with an explicit time-marching method based on the finite volume approach is summarized. Emphasis is placed on the extensions to the computational procedure which allow the handling of shock induced separation and large regions of strong backflow. Appendices contain abstracts of papers and whole reports generated during the contract period.

  19. Comparative seed-tree and selection harvesting costs in young-growth mixed-conifer stands

    Treesearch

    William A. Atkinson; Dale O. Hall

    1963-01-01

    Little difference was found between yarding and felling costs in seed-tree and selection harvest cuts. The volume per acre logged was 23,800 board feet on the seed-tree compartments and 10,600 board feet on the selection compartments. For a comparable operation with this range of volumes, cutting method decisions should be based on factors other than logging costs....

  20. Brown Adipose Tissue Quantification in Human Neonates Using Water-Fat Separated MRI

    PubMed Central

    Rasmussen, Jerod M.; Entringer, Sonja; Nguyen, Annie; van Erp, Theo G. M.; Guijarro, Ana; Oveisi, Fariba; Swanson, James M.; Piomelli, Daniele; Wadhwa, Pathik D.

    2013-01-01

    There is a major resurgence of interest in brown adipose tissue (BAT) biology, particularly regarding its determinants and consequences in newborns and infants. Reliable methods for non-invasive BAT measurement in human infants have yet to be demonstrated. The current study first validates methods for quantitative BAT imaging of rodents post mortem followed by BAT excision and re-imaging of excised tissues. Identical methods are then employed in a cohort of in vivo infants to establish the reliability of these measures and provide normative statistics for BAT depot volume and fat fraction. Using multi-echo water-fat MRI, fat- and water-based images of rodents and neonates were acquired and ratios of fat to the combined signal from fat and water (fat signal fraction) were calculated. Neonatal scans (n = 22) were acquired during natural sleep to quantify BAT and WAT deposits for depot volume and fat fraction. Acquisition repeatability was assessed based on multiple scans from the same neonate. Intra- and inter-rater measures of reliability in regional BAT depot volume and fat fraction quantification were determined based on multiple segmentations by two raters. Rodent BAT was characterized as having significantly higher water content than WAT in both in situ as well as ex vivo imaging assessments. Human neonate deposits indicative of bilateral BAT in spinal, supraclavicular and axillary regions were observed. Pairwise, WAT fat fraction was significantly greater than BAT fat fraction throughout the sample (Δ_WAT-BAT = 38%, p < 10⁻⁴). Repeated scans demonstrated a high voxelwise correlation for fat fraction (R_all = 0.99). BAT depot volume and fat fraction measurements showed high intra-rater (ICC_BAT,VOL = 0.93, ICC_BAT,FF = 0.93) and inter-rater reliability (ICC_BAT,VOL = 0.86, ICC_BAT,FF = 0.93). This study demonstrates the reliability of using multi-echo water-fat MRI in human neonates for quantification throughout the torso of BAT depot volume and fat fraction measurements. PMID:24205024
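
    The fat signal fraction itself is a simple voxelwise ratio; the sketch below shows the computation and a toy depot summary in Python, where the fat-fraction window, voxel size, and arrays are hypothetical and not taken from the study.

```python
import numpy as np

def fat_signal_fraction(fat_img, water_img, eps=1e-6):
    """Voxelwise fat signal fraction: fat / (fat + water)."""
    return fat_img / (fat_img + water_img + eps)

# Toy data standing in for co-registered fat and water images
fat = np.random.rand(40, 64, 64) * 100.0
water = np.random.rand(40, 64, 64) * 100.0
ff = fat_signal_fraction(fat, water)

# Illustrative depot summary over a hypothetical fat-fraction window
depot_mask = (ff > 0.3) & (ff < 0.8)
voxel_volume_ml = (1.2 * 1.0 * 1.0) / 1000.0   # mm^3 -> mL, assumed voxel size
print(depot_mask.sum() * voxel_volume_ml, ff[depot_mask].mean())
```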

  1. Planning the Breast Boost: Comparison of Three Techniques and Evolution of Tumor Bed During Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepel, Jaroslaw T.; Department of Radiation Oncology, Brown University, Rhode Island Hospital, Providence, RI; Evans, Suzanne B.

    2009-06-01

    Purpose: To evaluate the accuracy of two clinical techniques for electron boost planning compared with computed tomography (CT)-based planning. Additionally, we evaluated the tumor bed characteristics at whole breast planning and boost planning. Methods and Materials: A total of 30 women underwent tumor bed boost planning within 2 weeks of completing whole breast radiotherapy using three planning techniques: scar-based planning, palpation/clinical-based planning, and CT-based planning. The plans were analyzed for dosimetric coverage of the CT-delineated tumor bed. The cavity visualization score was used to define the CT-delineated tumor bed as well or poorly defined. Results: Scar-based planning resulted in inferior tumor bed coverage compared with CT-based planning, with the minimal dose received by 90% of the target volume >90% in 53% and a geographic miss in 53%. The results of palpation/clinical-based planning were significantly better: 87% and 10% for the minimal dose received by 90% of the target volume >90% and geographic miss, respectively. Of the 30 tumor beds, 16 were poorly defined by the cavity visualization score. Of these 16, 8 were well demarcated by the surgical clips. The evaluation of the 22 well-defined tumor beds revealed similar results. A comparison of the tumor bed volume from the initial planning CT scan to the boost planning CT scan revealed a decrease in size in 77% of cases. The mean decrease in volume was 52%. Conclusion: The results of our study have shown that CT-based planning allows for optimal tumor bed coverage compared with clinical and scar-based approaches. However, in the setting of a poorly visualized cavity on CT without surgical clips, palpation/clinical-based planning can help delineate the appropriate target volumes and is superior to scar-based planning. CT simulation at boost planning could allow for a reduction in the boost volumes.

  2. EPA Method 1615. Measurement of Enterovirus and Norovirus Occurrence in Water by Culture and RT-qPCR. I. Collection of Virus Samples

    PubMed Central

    Fout, G. Shay; Cashdollar, Jennifer L.; Varughese, Eunice A.; Parshionikar, Sandhya U.; Grimm, Ann C.

    2015-01-01

    EPA Method 1615 was developed with the goal of providing a standard method for measuring enteroviruses and noroviruses in environmental and drinking waters. The standardized sampling component of the method concentrates viruses that may be present in water by passage of a minimum specified volume of water through an electropositive cartridge filter. The minimum specified volumes for surface and finished/ground water are 300 L and 1,500 L, respectively. A major method limitation is the tendency for the filters to clog before meeting the sample volume requirement. Studies using two different, but equivalent, cartridge filter options showed that filter clogging was a problem with 10% of the samples with one of the filter types compared to 6% with the other filter type. Clogging tends to increase with turbidity, but cannot be predicted based on turbidity measurements only. From a cost standpoint, one of the filter options is preferable to the other, but the water quality and experience with the water system to be sampled should be taken into consideration in making filter selections. PMID:25867928

  3. A machine-learning graph-based approach for 3D segmentation of Bruch's membrane opening from glaucomatous SD-OCT volumes.

    PubMed

    Miri, Mohammad Saleh; Abràmoff, Michael D; Kwon, Young H; Sonka, Milan; Garvin, Mona K

    2017-07-01

    Bruch's membrane opening-minimum rim width (BMO-MRW) is a recently proposed structural parameter which estimates the remaining nerve fiber bundles in the retina and is superior to other conventional structural parameters for diagnosing glaucoma. Measuring this structural parameter requires identification of BMO locations within spectral domain-optical coherence tomography (SD-OCT) volumes. While most automated approaches for segmentation of the BMO either segment the 2D projection of BMO points or identify BMO points in individual B-scans, in this work, we propose a machine-learning graph-based approach for true 3D segmentation of BMO from glaucomatous SD-OCT volumes. The problem is formulated as an optimization problem for finding a 3D path within the SD-OCT volume. In particular, the SD-OCT volumes are transferred to the radial domain where the closed loop BMO points in the original volume form a path within the radial volume. The estimated locations of BMO points in 3D are identified by finding the projected location of BMO points using a graph-theoretic approach and mapping the projected locations onto the Bruch's membrane (BM) surface. Dynamic programming is employed in order to find the 3D BMO locations as the minimum-cost path within the volume. In order to compute the cost function needed for finding the minimum-cost path, a random forest classifier is utilized to learn a BMO model, obtained by extracting intensity features from the volumes in the training set, and computing the required 3D cost function. The proposed method is tested on 44 glaucoma patients and evaluated using manual delineations. Results show that the proposed method successfully identifies the 3D BMO locations and has significantly smaller errors compared to the existing 3D BMO identification approaches. Published by Elsevier B.V.
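
    The dynamic-programming idea of finding a minimum-cost path through a cost volume can be illustrated with a much simplified 2D version; the connectivity rule and random cost array below are assumptions for illustration, not the graph construction used by the authors.

```python
import numpy as np

def min_cost_path(cost):
    """Minimum-cost path visiting one node per column of a 2D cost array,
    moving at most one row up or down between consecutive columns."""
    n_rows, n_cols = cost.shape
    acc = cost.copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - 1), min(n_rows, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    end = int(np.argmin(acc[:, -1]))
    path = [end]
    for j in range(n_cols - 1, 0, -1):   # trace the path back column by column
        path.append(back[path[-1], j])
    return path[::-1], float(acc[end, -1])

path, total = min_cost_path(np.random.rand(20, 30))
print(path[:5], total)
```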

  4. SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A

    2014-06-01

    Purpose: To evaluate uncertainties of cell surviving fractions reconstructed from tumor-volume variation curves during radiation therapy using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) were calculated using a two-level cell population model, which is based on separating the entire tumor cell population into two subpopulations: oxygenated viable cells and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions were computed for different values of the cell surviving fraction from the practically important interval S_2 = 0.1-0.7 using the two-level cell population model. Results: The sensitivity of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S_2 = 0.7 and approached zero as S_2 approached zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S_2 is less than 20% in the range S_2 = 0.4-0.7. This result is important because large values of S_2, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S_2 < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for evaluation of the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
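
    The direct perturbation estimate of the sensitivity S(t) = [δV(t)/V(t)]/[δx/x] can be sketched as below; the toy exponential volume model is only a placeholder and is not the two-level cell population model used in the abstract.

```python
import numpy as np

def sensitivity(volume_model, x, t, rel_step=0.01):
    """Direct-perturbation estimate of S(t) = [dV/V] / [dx/x] for parameter x."""
    v0 = volume_model(t, x)
    v1 = volume_model(t, x * (1.0 + rel_step))
    return ((v1 - v0) / v0) / rel_step

def toy_volume(t, s2):
    # Placeholder: exponential regression whose rate depends on S_2
    return np.exp(np.log(s2) * t / 14.0)

t = np.linspace(0.0, 30.0, 31)   # days
print(sensitivity(toy_volume, 0.5, t)[:5])
```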

  5. Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Mcanelly, W. B.; Young, C. T. K.

    1973-01-01

    Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data is applied to the design of adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.

  6. Towards a comprehensive framework for movement and distortion correction of diffusion MR images: Within volume movement.

    PubMed

    Andersson, Jesper L R; Graham, Mark S; Drobnjak, Ivana; Zhang, Hui; Filippini, Nicola; Bastiani, Matteo

    2017-05-15

    Most motion correction methods work by aligning a set of volumes together, or to a volume that represents a reference location. These are based on an implicit assumption that the subject remains motionless during the several seconds it takes to acquire all slices in a volume, and that any movement occurs in the brief moment between acquiring the last slice of one volume and the first slice of the next. This is clearly an approximation that can be more or less good depending on how long it takes to acquire one volume and how rapidly the subject moves. In this paper we present a method that increases the temporal resolution of the motion correction by modelling movement as a piecewise continuous function over time. This intra-volume movement correction is implemented within a previously presented framework that simultaneously estimates distortions, movement and movement-induced signal dropout. We validate the method on highly realistic simulated data containing all of these effects. It is demonstrated that we can estimate the true movement with high accuracy, and that scalar parameters derived from the data, such as fractional anisotropy, are estimated with greater fidelity when data have been corrected for intra-volume movement. Importantly, we also show that the difference in fidelity between data affected by different amounts of movement is much reduced when taking intra-volume movement into account. Additional validation was performed on data from a healthy volunteer scanned when lying still and when performing deliberate movements. We show an increased correspondence between the "still" and the "movement" data when the latter is corrected for intra-volume movement. Finally, we demonstrate a large reduction in the telltale signs of intra-volume movement in data acquired from elderly subjects. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Segmentation of brain volume based on 3D region growing by integrating intensity and edge for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu

    2006-03-01

    This paper presents a segmentation method for brain tissues in MR images, designed for our image-guided neurosurgery system, which is under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches through stepwise use of intensity similarities between voxels in conjunction with edge information. Since the intensity and the edge information are complementary to each other in region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel being considered is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing step. Only the intensity and edge information of the current voxel are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues as well as the partial volume effect are estimated using the expectation-maximization (EM) algorithm in order to provide an accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated its segmentation effectiveness by comparing the results with ground truths. The meshes generated from the segmented brain volume using mesh-generating software are also shown in this paper.
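
    A bare-bones version of intensity-driven 3D region growing is sketched below (6-connected, intensity tolerance around the seed value); the edge constraint and EM-based parameter estimation described above are omitted, and the tolerance is an arbitrary assumption.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, intensity_tol):
    """6-connected 3D region growing: accept a voxel if its intensity is
    within intensity_tol of the seed intensity (edge constraints omitted)."""
    seed_val = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - seed_val) <= intensity_tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

toy = np.random.normal(100.0, 5.0, size=(32, 64, 64))
print(region_grow_3d(toy, seed=(16, 32, 32), intensity_tol=10.0).sum())
```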

  8. Microscale Concentration Measurements Using Laser Light Scattering Methods

    NASA Technical Reports Server (NTRS)

    Niederhaus, Charles; Miller, Fletcher

    2004-01-01

    The development of lab-on-a-chip devices for microscale biochemical assays has led to the need for microscale concentration measurements of specific analytes. While fluorescence methods are the current choice, they require developing fluorophore-tagged conjugates for each analyte of interest. In addition, fluorescence imaging is a volume-based method and can become limiting as smaller detection regions are required.

  9. The Combination of Micro Diaphragm Pumps and Flow Sensors for Single Stroke Based Liquid Flow Control

    PubMed Central

    Jenke, Christoph; Pallejà Rubio, Jaume; Kibler, Sebastian; Häfner, Johannes; Richter, Martin; Kutter, Christoph

    2017-01-01

    With the combination of micropumps and flow sensors, highly accurate and secure closed-loop controlled micro dosing systems for liquids are possible. Implementing a single stroke based control mode with piezoelectrically driven micro diaphragm pumps can provide a solution for dosing of volumes down to nanoliters or variable average flow rates in the range of nL/min to μL/min. However, sensor technologies feature a yet undetermined accuracy for measuring highly pulsatile micropump flow. Two miniaturizable in-line sensor types providing electrical readout—differential pressure based flow sensors and thermal calorimetric flow sensors—are evaluated for their suitability for combination with micropumps. Single stroke based calibration of the sensors was carried out with a new method, comparing displacement volumes and sensor flow volumes. Limitations of accuracy and performance for single stroke based flow control are described. Results showed that, besides particle robustness of sensors, controlling resistive and capacitive damping are key aspects for setting up reproducible and reliable liquid dosing systems. Depending on the required average flow or defined volume, dosing systems with an accuracy of better than 5% for the differential pressure based sensor and better than 6.5% for the thermal calorimeter were achieved. PMID:28368344

  10. Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images

    NASA Astrophysics Data System (ADS)

    Sohrabi, H.

    2012-07-01

    In the mixed old-growth broadleaf stands of the Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within a plot; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected with a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) records the height of the first surface on the ground, including terrain features, trees, buildings, etc., and thus provides a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes for the extracted DSMs varying from 1 to 10 m in 1 m increments. DSMs were checked manually for probable errors. For the pixels corresponding to each ground sample plot, the standard deviation and range of DSM heights were calculated. A non-linear regression method was used for modeling. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor for modeling. The relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume based on passive remote sensing data in mixed broadleaf forests, these results are encouraging. One significant problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be investigated.
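
    The plot-level modeling step can be sketched as a non-linear regression of stand volume on the standard deviation of DSM heights; the power-law form, starting values, and numbers below are assumptions for illustration only, since the abstract does not state the model form.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical plot-level data: SD of DSM heights (m) and field volume (m^3/ha)
dsm_height_sd = np.array([3.1, 4.8, 6.2, 7.5, 9.0, 10.4, 12.1])
stand_volume = np.array([120.0, 210.0, 300.0, 380.0, 470.0, 540.0, 650.0])

def power_model(sd, a, b):
    return a * sd ** b

(a, b), _ = curve_fit(power_model, dsm_height_sd, stand_volume, p0=(30.0, 1.0))
pred = power_model(dsm_height_sd, a, b)
rmse_pct = 100.0 * np.sqrt(np.mean((pred - stand_volume) ** 2)) / stand_volume.mean()
print(a, b, rmse_pct)
```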

  11. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    PubMed

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume for increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume for increasing crowd size for concerts. A comparison of similar crowd size for each of the three events showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size and not considering other influences such as event type and duration may give poor results.

  12. Distribution of extravascular fluid volumes in isolated perfused lungs measured with H215O.

    PubMed Central

    Jones, T; Jones, H A; Rhodes, C G; Buckingham, P D; Hughes, J M

    1976-01-01

    The distributions per unit volume of extravascular water (EVLW), blood volume, and blood flow were measured in isolated perfused vertical dog lungs. A steady-state tracer technique was employed using oxygen-15, carbon-11, and nitrogen-13 isotopes and external scintillation counting of the 511-KeV annihilation radiation common to all three radionuclides. EVLW, and blood volume and flow increased from apex to base in all preparations, but the gradient of increasing flow exceeded that for blood and EVLW volumes. The regional distributions of EVLW and blood volume were almost identical. With increasing edema, lower-zone EVLW increased slightly relative to that in the upper zone. There was no change in the distribution of blood volume or flow until gross edema (100% wt gain) occurred when lower zone values were reduced. In four lungs the distribution of EVLW was compared with wet-to-dry ratios from lung biopsies taken immediately afterwards. Whereas the isotopically measured EVLW increased from apex to base, the wet-to-dry weight ratios remained essentially uniform. We concluded that isotopic methods measure only an "exchangeable" water pool whose volume is dependent on regional blood flow and capillary recruitment. Second, the isolated perfused lung can accommodate up to 60% wt gain without much change in the regional distribution of EVLW, volume, or flow. PMID:765354

  13. Multiscale Modeling of Carbon Nanotube-Epoxy Nanocomposites

    NASA Astrophysics Data System (ADS)

    Fasanella, Nicholas A.

    Epoxy composites are widely used in the aerospace industry. In order to improve stiffness and thermal conductivity, carbon nanotube additives to epoxies are being explored. This dissertation presents multiscale modeling techniques to study the engineering properties of single-walled carbon nanotube (SWNT)-epoxy nanocomposites, consisting of pristine and covalently functionalized systems. Using Molecular Dynamics (MD), thermomechanical properties were calculated for a representative polymer unit cell. Finite Element (FE) and orientation distribution function (ODF) based methods were used in a multiscale framework to obtain macroscale properties. An epoxy network was built using the dendrimer growth approach. The epoxy model was verified by matching the experimental glass transition temperature, density, and dilatation. MD, via the consistent valence force field (CVFF), was used to explore the mechanical and dilatometric effects of adding pristine and functionalized SWNTs to epoxy. Full stiffness matrices and linear coefficient of thermal expansion vectors were obtained. The Green-Kubo method was used to investigate the thermal conductivity as a function of temperature for the various nanocomposites. Inefficient phonon transport at the ends of nanotubes is an important factor in the thermal conductivity of the nanocomposites, and for this reason discontinuous nanotubes were modeled in addition to long nanotubes. To obtain continuum-scale elastic properties from the MD data, multiscale modeling was considered to give better control over the volume fraction of nanotubes and to investigate the effects of nanotube alignment. Two methods were considered: an FE based method and an ODF based method. The FE method probabilistically assigned elastic properties of elements from the MD lattice results based on the desired volume fraction and alignment of the nanotubes. For the ODF method, a distribution function was generated based on the desired amount of nanotube alignment, and the stiffness matrix was calculated. A rule-of-mixtures approach was implemented in the ODF model to vary the SWNT volume fraction. Both the ODF and FE models are compared and contrasted. The ODF analysis is significantly faster for nanocomposites and is a novel contribution of this thesis. Multiscale modeling allows the effects of nanofillers in epoxy systems to be characterized without having to run costly experiments.
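
    As a minimal illustration of a rule-of-mixtures step, the sketch below computes Voigt and Reuss bounds on composite stiffness versus filler volume fraction; the moduli are generic textbook-scale values, not results from the dissertation.

```python
def rule_of_mixtures(E_matrix, E_filler, vf):
    """Voigt (parallel) and Reuss (series) bounds for a two-phase composite."""
    E_voigt = vf * E_filler + (1.0 - vf) * E_matrix
    E_reuss = 1.0 / (vf / E_filler + (1.0 - vf) / E_matrix)
    return E_voigt, E_reuss

# Illustrative moduli in GPa: a typical epoxy and an axial SWNT value
for vf in (0.01, 0.05, 0.10):
    print(vf, rule_of_mixtures(E_matrix=3.0, E_filler=1000.0, vf=vf))
```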

  14. A Recipe for Inquiry.

    ERIC Educational Resources Information Center

    Bernstein, Jesse

    2003-01-01

    Explains the difference between traditional and inquiry-based chemistry experiments. Modifies a traditional cookbook laboratory for determining molar volume of gas to include inquiry. Also discusses methods for assessment. (Author/NB)

  15. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification

    PubMed Central

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2013-01-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the “gold standard” to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification. PMID:24386527
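
    The NCC similarity measure used above is straightforward to compute; a minimal sketch follows, with random arrays standing in for a DRR and a DR image (the actual pipeline would re-render the DRR for each candidate pose).

```python
import numpy as np

def normalized_cross_correlation(drr, dr):
    """Global normalized cross correlation between two images."""
    a = drr.astype(float).ravel() - drr.mean()
    b = dr.astype(float).ravel() - dr.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage: an optimizer would vary the 3D pose, re-project a DRR, and
# maximize this score against the fixed dual-energy DR image.
drr = np.random.rand(256, 256)
dr = 0.8 * drr + 0.2 * np.random.rand(256, 256)
print(normalized_cross_correlation(drr, dr))
```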

  16. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification.

    PubMed

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-03

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification.

  17. Automatic intensity-based 3D-to-2D registration of CT volume and dual-energy digital radiography for the detection of cardiac calcification

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 +/- 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 +/- 0.03 to 0.25 +/- 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification.

  18. A stepwedge-based method for measuring breast density: observer variability and comparison with human reading

    NASA Astrophysics Data System (ADS)

    Diffey, Jenny; Berks, Michael; Hufton, Alan; Chung, Camilla; Verow, Rosanne; Morrison, Joanna; Wilson, Mary; Boggis, Caroline; Morris, Julie; Maxwell, Anthony; Astley, Susan

    2010-04-01

    Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49% - 5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators. This was attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and the radiologists. This was also observed between radiologists, despite significant inter-observer variation. Based on analysis of thresholds used in the stepwedge method, radiologists' definition of a dense pixel is one in which the percentage of glandular tissue is between 10 and 20% of the total thickness of tissue.

  19. Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging

    PubMed Central

    Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.

    2015-01-01

    We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7 Hz for ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and the estimation of the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373

  20. Component extraction on CT volumes of assembled products using geometric template matching

    NASA Astrophysics Data System (ADS)

    Muramatsu, Katsutoshi; Ohtake, Yutaka; Suzuki, Hiromasa; Nagai, Yukie

    2017-03-01

    As a method of non-destructive internal inspection, X-ray computed tomography (CT) is used not only in medical applications but also for product inspection. Some assembled products can be divided into separate components based on density, which is known to be approximately proportional to CT values. However, components whose densities are similar cannot be distinguished using the CT-value-driven approach. In this study, we propose a new component extraction algorithm for CT volumes that uses a set of voxels with an assigned CT value, together with the surface mesh, as the template rather than the density. The method has two main stages: rough matching and fine matching. In the rough matching stage, candidate target positions are identified roughly in the CT volume using the template of the target component. In the fine matching stage, these candidates are precisely matched with the template, allowing the correct positions of the components to be detected in the CT volume. The results of two computational experiments showed that the proposed algorithm is able to extract components of similar density within assembled products from CT volumes.

  1. Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks.

    PubMed

    Längkvist, Martin; Jendeberg, Johan; Thunberg, Per; Loutfi, Amy; Lidén, Mats

    2018-06-01

    Computed tomography (CT) is the method of choice for diagnosing ureteral stones - kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity between stones and non-stone structures, and in how to efficiently deal with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract with labeling of ureteral stones performed by a radiologist. The best model using 2.5D input data and anatomical information achieved a sensitivity of 100% and an average of 2.68 false-positives per patient on a test set of 88 scans. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
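
    A toy 2.5D patch classifier of the general kind described, with three orthogonal slices stacked as input channels, is sketched below in PyTorch; the architecture, patch size, and class count are assumptions for illustration and do not reproduce the network in the paper.

```python
import torch
import torch.nn as nn

# Minimal 2.5D patch classifier: axial/coronal/sagittal slices as 3 channels,
# two output classes (stone vs. non-stone). Illustrative architecture only.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

patches = torch.randn(4, 3, 32, 32)   # batch of hypothetical 2.5D patches
logits = model(patches)
print(logits.shape)                    # torch.Size([4, 2])
```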

  2. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

    A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.

  3. Extrapolation-Based References Improve Motion and Eddy-Current Correction of High B-Value DWI Data: Application in Parkinson's Disease Dementia.

    PubMed

    Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar

    2015-01-01

    Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD), and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.

  4. Comparison of liver volumetry on contrast-enhanced CT images: one semiautomatic and two automatic approaches.

    PubMed

    Cai, Wei; He, Baochun; Fan, Yingfang; Fang, Chihua; Jia, Fucang

    2016-11-08

    This study aimed to evaluate the accuracy, consistency, and efficiency of three liver volumetry methods - one interactive method (an in-house-developed 3D medical Image Analysis (3DMIA) system), one automatic active shape model (ASM)-based segmentation, and one automatic probabilistic atlas (PA)-guided segmentation method - on clinical contrast-enhanced CT images. Forty-two datasets, including 27 normal liver and 15 space-occupying liver lesion patients, were retrospectively included in this study. The three methods - the semiautomatic 3DMIA, the automatic ASM-based, and the automatic PA-based liver volumetry - achieved an accuracy with VD (volume difference) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and with VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. However, the three methods required an average of 27.63 min, 1.26 min, and 1.18 min, respectively, compared with manual volumetry, which took 43.98 min. The high intraclass correlation coefficients between the three methods and the manual method indicated excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and manual volumetry (p < 0.001), as well as between the automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p < 0.001). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group. The ASM- and PA-based automatic segmentations have better efficiency for clinical use. © 2016 The Authors.

  5. Localized Arm Volume Index: A New Method for Body Type-Corrected Evaluation of Localized Arm Lymphedematous Volume Change.

    PubMed

    Yamamoto, Takumi; Yamamoto, Nana; Yoshimatsu, Hidehiko

    2017-10-01

    Volume measurement is a common evaluation for upper extremity lymphedema. However, volume comparison between different patients with different body types may be inappropriate, and it is difficult to evaluate localized limb volume change using arm volume. Localized arm volumes (Vk, k = 1-5) and localized arm volume indices (LAVIk) at 5 points (1, upper arm; 2, elbow; 3, forearm; 4, wrist; 5, hand) of 106 arms of 53 examinees with no arm edema were calculated based on physical measurements (arm circumferences and lengths and body mass index [BMI]). Interrater and intrarater reliabilities of LAVIk were assessed, and Vk and LAVIk were compared between a lower BMI (BMI < 22 kg/m²) group and a higher BMI (BMI ≥ 22 kg/m²) group. Interrater and intrarater reliabilities of LAVIk were all high (all, r > 0.98). Between lower and higher BMI groups, significant differences were observed in all Vk (V1 [P = 6.8 × 10], V2 [P = 3.1 × 10], V3 [P = 1.1 × 10], V4 [P = 8.3 × 10], and V5 [P = 3.0 × 10]). Regarding localized arm volume index (LAVI) between groups, significant differences were seen in LAVI1 (P = 9.7 × 10) and LAVI5 (P = 1.2 × 10); there was no significant difference in LAVI2 (P = 0.60), LAVI3 (P = 0.61), or LAVI4 (P = 0.22). Localized arm volume index is a convenient and highly reproducible method for evaluation of localized arm volume change, which is less affected by body physique compared with arm volumetry.

  6. A semiautomatic CT-based ensemble segmentation of lung tumors: comparison with oncologists' delineations and with the surgical specimen.

    PubMed

    Rios Velazquez, Emmanuel; Aerts, Hugo J W L; Gu, Yuhua; Goldgof, Dmitry B; De Ruysscher, Dirk; Dekker, Andre; Korn, René; Gillies, Robert J; Lambin, Philippe

    2012-11-01

    To assess the clinical relevance of a semiautomatic CT-based ensemble segmentation method, by comparing it to pathology and to CT/PET manual delineations by five independent radiation oncologists in non-small cell lung cancer (NSCLC). For 20 NSCLC patients (stages Ib-IIIb) the primary tumor was delineated manually on CT/PET scans by five independent radiation oncologists and segmented using a CT based semi-automatic tool. Tumor volume and overlap fractions between manual and semiautomatic-segmented volumes were compared. All measurements were correlated with the maximal diameter on macroscopic examination of the surgical specimen. Imaging data are available on www.cancerdata.org. High overlap fractions were observed between the semi-automatically segmented volumes and the intersection (92.5±9.0, mean±SD) and union (94.2±6.8) of the manual delineations. No statistically significant differences in tumor volume were observed between the semiautomatic segmentation (71.4±83.2 cm(3), mean±SD) and manual delineations (81.9±94.1 cm(3); p=0.57). The maximal tumor diameter of the semiautomatic-segmented tumor correlated strongly with the macroscopic diameter of the primary tumor (r=0.96). Semiautomatic segmentation of the primary tumor on CT demonstrated high agreement with CT/PET manual delineations and strongly correlated with the macroscopic diameter considered as the "gold standard". This method may be used routinely in clinical practice and could be employed as a starting point for treatment planning, target definition in multi-center clinical trials or for high throughput data mining research. This method is particularly suitable for peripherally located tumors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Mapping the Regional Influence of Genetics on Brain Structure Variability - A Tensor-Based Morphometry Study

    PubMed Central

    Brun, Caroline; Leporé, Natasha; Pennec, Xavier; Lee, Agatha D.; Barysheva, Marina; Madsen, Sarah K.; Avedissian, Christina; Chou, Yi-Yu; de Zubicaray, Greig I.; McMahon, Katie; Wright, Margaret; Toga, Arthur W.; Thompson, Paul M.

    2010-01-01

    Genetic and environmental factors influence brain structure and function profoundly. The search for heritable anatomical features and their influencing genes would be accelerated with detailed 3D maps showing the degree to which brain morphometry is genetically determined. As part of an MRI study that will scan 1150 twins, we applied Tensor-Based Morphometry to compute morphometric differences in 23 pairs of identical twins and 23 pairs of same-sex fraternal twins (mean age: 23.8 ± 1.8 SD years). All 92 twins’ 3D brain MRI scans were nonlinearly registered to a common space using a Riemannian fluid-based warping approach to compute volumetric differences across subjects. A multi-template method was used to improve volume quantification. Vector fields driving each subject’s anatomy onto the common template were analyzed to create maps of local volumetric excesses and deficits relative to the standard template. Using a new structural equation modeling method, we computed the voxelwise proportion of variance in volumes attributable to additive (A) or dominant (D) genetic factors versus shared environmental (C) or unique environmental factors (E). The method was also applied to various anatomical regions of interest (ROIs). As hypothesized, the overall volumes of the brain, basal ganglia, thalamus, and each lobe were under strong genetic control; local white matter volumes were mostly controlled by common environment. After adjusting for individual differences in overall brain scale, genetic influences were still relatively high in the corpus callosum and in early-maturing brain regions such as the occipital lobes, while environmental influences were greater in frontal brain regions, which have a more protracted maturational time-course. PMID:19446645

  8. Automated Tumor Volumetry Using Computer-Aided Image Segmentation

    PubMed Central

    Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos

    2015-01-01

    Rationale and Objectives: Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods: A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results: Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions: The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
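
    To make the geodesic-distance idea concrete, here is a minimal sketch of seed-based segmentation in which geodesic distances are computed with Dijkstra's algorithm on an intensity-weighted pixel grid; the image, seeds, and weighting are assumptions, and this is not the published implementation.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, beta=1.0):
    """Intensity-weighted geodesic distance from a set of seed pixels,
    computed with Dijkstra on the 4-connected pixel grid. A minimal sketch
    of the geodesic-distance idea, not the paper's implementation."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + beta * abs(float(image[r, c]) - float(image[nr, nc]))
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(heap, (d + step, nr, nc))
    return dist

# hypothetical toy image: bright "tumor" blob on a dark background
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
fg = geodesic_distance(img, [(30, 30)])          # seed inside the blob
bg = geodesic_distance(img, [(2, 2), (60, 60)])  # seeds in the background
segmentation = fg < bg                            # label each pixel by the nearer seed set
print(segmentation.sum())
```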

  9. Developmental changes in hippocampal shape among preadolescent children.

    PubMed

    Lin, Muqing; Fwu, Peter T; Buss, Claudia; Davis, Elysia P; Head, Kevin; Muftuler, L Tugan; Sandman, Curt A; Su, Min-Ying

    2013-11-01

    It is known that the largest developmental changes in the hippocampus take place during the prenatal period and during the first two years of postnatal life. Few studies have been conducted to address the normal developmental trajectory of the hippocampus during childhood. In this study, shape analysis was applied to study the normally developing hippocampus in a group of 103 typically developing 6- to 10-year-old preadolescent children. The individual brain was normalized to a template, and then the hippocampus was manually segmented and further divided into the head, body, and tail sub-regions. Three different methods were applied for hippocampal shape analysis: radial distance mapping, surface-based template registration using the robust point matching (RPM) algorithm, and volume-based template registration using the Demons algorithm. All three methods show that the older children have bilaterally expanded head segments compared to the younger children. The results analyzed based on radial distance to the centerline were consistent with those analyzed using template-based registration methods. In analyses stratified by sex, it was found that the age-associated anatomical changes were similar in boys and girls, but the age-association was stronger in girls. Total hippocampal volume and sub-regional volumes analyzed using manual segmentation did not show a significant age-association. Our results suggest that shape analysis is sensitive enough to detect sub-regional differences that are not revealed in volumetric analysis. The three methods presented in this study may be applied in future studies to investigate the normal developmental trajectory of the hippocampus in children. They may be further applied to detect early deviations from the normal developmental trajectory in young children for evaluating susceptibility to psychopathological disorders involving the hippocampus. Copyright © 2013 ISDN. Published by Elsevier Ltd. All rights reserved.
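
    A minimal sketch of radial distance mapping is shown below: per-slice radial distances from the structure's centerline (here approximated by slice centroids) to its boundary. The mask, the centroid-based centerline, and the slice-wise treatment are illustrative assumptions; the surface- and volume-based registration methods of the study are not reproduced.

```python
import numpy as np
from scipy import ndimage

def radial_distance_map(mask):
    """Per-slice radial distances from an approximate centerline (slice
    centroids) to the boundary of a binary structure; a minimal sketch of
    radial distance mapping, not the study's full pipeline."""
    distances = []
    for z in range(mask.shape[0]):
        sl = mask[z]
        if not sl.any():
            continue
        cy, cx = ndimage.center_of_mass(sl)          # centerline point for this slice
        boundary = sl & ~ndimage.binary_erosion(sl)  # boundary voxels of the slice
        ys, xs = np.nonzero(boundary)
        distances.append(np.hypot(ys - cy, xs - cx)) # radial distances to the boundary
    return distances

# hypothetical binary mask of a hippocampus-like structure (z, y, x)
mask = np.zeros((10, 32, 32), dtype=bool); mask[:, 10:22, 12:20] = True
radii = radial_distance_map(mask)
print([round(r.mean(), 2) for r in radii[:3]])
```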

  10. Palm oil based nanofluids for enhancing heat transfer and rheological properties

    NASA Astrophysics Data System (ADS)

    Hussein, A. M.; Lingenthiran; Kadirgamma, K.; Noor, M. M.; Aik, L. K.

    2018-04-01

    Colloidal suspensions of nanomaterials no larger than 100 nm in a base fluid are defined as nanofluids. A study of the thermal and rheological properties of oil-based nanofluids was conducted to develop a stable transformer palm-oil-based nanofluid. This paper describes the analysis techniques used to determine the enhancement of the thermal properties of nanofluids. Titanium dioxide (TiO2) was dispersed in palm oil to prepare nanofluids with volume concentrations of 0.01-0.09%. The thermal conductivity and viscosity of the nanofluid were measured using the hot-wire method and a viscometer, respectively. Results indicate that the stable nanofluids improve the thermal properties compared to palm oil. Results also showed that the friction factor decreases as the Reynolds number increases and increases as the volume concentration increases. Additionally, the Nusselt number increases as the Reynolds number and the volume concentration of the nanofluid increase.
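
    As a point of reference for the measured conductivity enhancement (not the hot-wire measurement itself), the classical Maxwell effective-medium model can be evaluated for TiO2 in palm oil; the thermal conductivity values and volume fractions below are assumed for illustration.

```python
def maxwell_k_eff(k_bf, k_p, phi):
    """Classical Maxwell model for the effective thermal conductivity of a
    dilute particle suspension; a common baseline for nanofluid data, not
    the measurement method used in the paper."""
    return k_bf * (k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)) / \
                  (k_p + 2 * k_bf - phi * (k_p - k_bf))

# assumed illustrative values: palm oil ~0.17 W/m K, TiO2 ~8.4 W/m K
for phi in (0.0001, 0.0005, 0.0009):   # 0.01-0.09 vol% written as fractions
    print(phi, round(maxwell_k_eff(0.17, 8.4, phi), 5))
```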

  11. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
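
    The second phase relies on plane and surface fitting of each point group; the sketch below shows a least-squares plane fit via SVD and the signed point-to-plane distances that could be used when masking points enclosed by the bare-bones model. Data and function names are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit (centroid + unit normal) for one partitioned
    point group, the kind of primitive used to build a 'bare-bones' volume
    model; a sketch under assumed data."""
    centroid = points.mean(axis=0)
    # the plane normal is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def point_plane_distance(points, centroid, normal):
    """Signed distances of points to the fitted plane (useful when deciding
    which points are enclosed by the bare-bones model)."""
    return (points - centroid) @ normal

# hypothetical noisy planar patch
rng = np.random.default_rng(2)
pts = np.c_[rng.uniform(0, 1, (200, 2)), 0.02 * rng.normal(size=200)]
c, n = fit_plane(pts)
print(np.abs(point_plane_distance(pts, c, n)).max())
```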

  12. Accurate bulk density determination of irregularly shaped translucent and opaque aerogels

    NASA Astrophysics Data System (ADS)

    Petkov, M. P.; Jones, S. M.

    2016-05-01

    We present a volumetric method for accurate determination of the bulk density of aerogels, calculated from the extrapolated weight of the dry pure solid and volume estimates based on Archimedes' principle of volume displacement, using packed 100 μm-sized monodispersed glass spheres as a "quasi-fluid" medium. Hard-particle packing theory is invoked to demonstrate the reproducibility of the apparent density of the quasi-fluid. Accuracy rivaling that of the refractive index method is demonstrated for both translucent and opaque aerogels with different absorptive properties, as well as for aerogels with regular and irregular shapes.
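
    The underlying arithmetic is a simple mass balance: the sample volume equals the mass of packed spheres it displaces from a calibrated cell divided by the apparent packed density. The sketch below illustrates that calculation with assumed masses and variable names; the exact weighing procedure of the paper may differ.

```python
def aerogel_bulk_density(m_sample, m_spheres_full_cell, m_spheres_with_sample,
                         rho_packed):
    """Bulk density from quasi-fluid volume displacement: the sample displaces
    packed spheres from a calibrated cell, and the displaced sphere mass over
    the apparent packed density gives the sample volume. Names and the simple
    mass balance are illustrative assumptions, not the paper's exact protocol."""
    displaced_sphere_mass = m_spheres_full_cell - m_spheres_with_sample
    v_sample = displaced_sphere_mass / rho_packed   # cm^3
    return m_sample / v_sample                      # g/cm^3

# assumed numbers: 0.05 g aerogel, spheres with apparent packed density 1.5 g/cm^3
print(aerogel_bulk_density(0.05, 30.00, 29.25, 1.5))   # -> 0.1 g/cm^3
```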

  13. Apparatus and method for identification and recognition of an item with ultrasonic patterns from item subsurface micro-features

    DOEpatents

    Perkins, Richard W.; Fuller, James L.; Doctor, Steven R.; Good, Morris S.; Heasler, Patrick G.; Skorpik, James R.; Hansen, Norman H.

    1995-01-01

    The present invention is a means and method for identification and recognition of an item by ultrasonic imaging of material microfeatures and/or macrofeatures within the bulk volume of a material. The invention is based upon ultrasonic interrogation and imaging of material microfeatures within the body of material by accepting only reflected ultrasonic energy from a preselected plane or volume within the material. An initial interrogation produces an identification reference. Subsequent new scans are statistically compared to the identification reference for making a match/non-match decision.

  14. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, a large average quantum dot radius and a low volume ratio have been major hurdles in the development of quantum dot-based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and the theoretical model of the absorption spectra have shown that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio and a greater decrease in bulk free energy compared to quantum dots grown conventionally.

  15. Apparatus and method for identification and recognition of an item with ultrasonic patterns from item subsurface micro-features

    DOEpatents

    Perkins, R.W.; Fuller, J.L.; Doctor, S.R.; Good, M.S.; Heasler, P.G.; Skorpik, J.R.; Hansen, N.H.

    1995-09-26

    The present invention is a means and method for identification and recognition of an item by ultrasonic imaging of material microfeatures and/or macrofeatures within the bulk volume of a material. The invention is based upon ultrasonic interrogation and imaging of material microfeatures within the body of material by accepting only reflected ultrasonic energy from a preselected plane or volume within the material. An initial interrogation produces an identification reference. Subsequent new scans are statistically compared to the identification reference for making a match/non-match decision. 15 figs.

  16. SCREENING METHOD FOR NITROAROMATIC COMPOUNDS IN WATER BASED ON SOLID-PHASE MICROEXTRACTION AND INFRARED SPECTROSCOPY. (R825343)

    EPA Science Inventory

    A new method is described for determining nitroaromatic compounds in water
    that combines solid-phase microextraction (SPME) and infrared (IR) spectroscopy. In this method, the compounds are extracted from a 250-mL volume of water into a small square (3.2 cm × 3.2 cm × 61.2...

  17. Automated geometric optimization for robotic HIFU treatment of liver tumors.

    PubMed

    Williamson, Tom; Everitt, Scott; Chauhan, Sunita

    2018-05-01

    High intensity focused ultrasound (HIFU) represents a non-invasive method for the destruction of cancerous tissue within the body. Heating of targeted tissue by focused ultrasound transducers results in the creation of ellipsoidal lesions at the target site, the locations of which can have a significant impact on treatment outcomes. Towards this end, this work describes a method for the optimization of lesion positions within arbitrary tumors, with specific anatomical constraints. A force-based optimization framework was extended to the case of arbitrary tumor position and constrained orientation. Analysis of the approximate reachable treatment volume for the specific case of treatment of liver tumors was performed based on four transducer configurations and constraint conditions derived. Evaluation was completed utilizing simplified spherical and ellipsoidal tumor models and randomly generated tumor volumes. The total volume treated, lesion overlap and healthy tissue ablated were evaluated. Two evaluation scenarios were defined and optimized treatment plans assessed. The optimization framework resulted in improvements of up to 10% in tumor volume treated, and reductions of up to 20% in healthy tissue ablated as compared to the standard lesion rastering approach. Generation of optimized plans proved feasible for both sub- and intercostally located tumors. This work describes an optimized method for the planning of lesion positions during HIFU treatment of liver tumors. The approach allows the determination of optimal lesion locations and orientations, and can be applied to arbitrary tumor shapes and sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Comparison of three methods of sampling for endometrial cytology in the mare. Preliminary study.

    PubMed

    Defontis, M; Vaillancourt, D; Grand, F X

    2011-01-01

    This prospective study aims to compare three different sampling techniques for the collection of endometrial cytological specimens in the mare: the guarded culture swab, the uterine cytobrush and the low-volume uterine flush. The study population consisted of six healthy Standardbred mares in dioestrus. In each mare an acute endometritis was induced by performing a low-volume uterine flush 6 days after ovulation using a sterile isotonic solution (lactated Ringer's solution or ViGro™ Complete Flush Solution). Two days after initiating inflammation, samples were collected from each mare using the three compared techniques: the double guarded cotton swab, the uterine cytobrush and the low-volume uterine flush. The cytological evaluation of the samples was based on the following criteria: the quality and cellularity of the samples and the number of neutrophils recovered. The uterine cytobrush yielded slides of significantly (p=0.02) better quality than the low-volume uterine flush. There was no significant difference in quality between the cytobrush and the double guarded swab technique. There was no difference between techniques in the number of endometrial cells (p=0.55) and neutrophils recovered (p=0.28). Endometrial cytology is a practical method for the diagnosis of acute endometrial inflammation in the mare. Since no difference in the number of neutrophils was found between the three techniques, the choice of the sampling method should be based on other factors such as practicability, costs and disadvantages of each technique.

  19. Capillary ion chromatography with on-column focusing for ultra-trace analysis of methanesulfonate and inorganic anions in limited volume Antarctic ice core samples.

    PubMed

    Rodriguez, Estrella Sanz; Poynter, Sam; Curran, Mark; Haddad, Paul R; Shellie, Robert A; Nesterenko, Pavel N; Paull, Brett

    2015-08-28

    Preservation of ionic species within Antarctic ice yields a unique proxy record of the Earth's climate history. Studies have until now focused on two proxies: the ionic components of sea salt aerosol and methanesulfonic acid. Measurement of all of the major ionic species in ice core samples is typically carried out by ion chromatography. Former methods, whilst providing suitable detection limits, have been based upon off-column preconcentration techniques, requiring larger sample volumes, with potential for sample contamination and/or carryover. Here, a new capillary ion chromatography-based analytical method has been developed for quantitative analysis of limited-volume Antarctic ice core samples. The developed analytical protocol applies capillary ion chromatography (with suppressed conductivity detection) and direct on-column sample injection and focusing, thus eliminating the requirement for off-column sample preconcentration. This limits the total sample volume needed to 300 μL per analysis, allowing for triplicate sample analysis with <1 mL of sample. This new approach provides a reliable and robust analytical method for the simultaneous determination of organic and inorganic anions, including fluoride, methanesulfonate, chloride, sulfate and nitrate anions. Application to composite ice-core samples is demonstrated, with coupling of the capillary ion chromatograph to high resolution mass spectrometry used to confirm the presence and purity of the observed methanesulfonate peak. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. A Kriging based spatiotemporal approach for traffic volume data imputation

    PubMed Central

    Han, Lee D.; Liu, Xiaohan; Pu, Li; Chin, Shih-miao; Hwang, Ho-ling

    2018-01-01

    Along with the rapid development of Intelligent Transportation Systems, traffic data collection technologies have progressed rapidly. The emergence of innovative data collection technologies such as remote traffic microwave sensors, Bluetooth sensors, the GPS-based floating car method, and automated license plate recognition has significantly increased the variety and volume of traffic data. Despite the development of these technologies, the missing data issue is still a problem that poses a great challenge for data-based applications such as traffic forecasting, real-time incident detection, dynamic route guidance, and massive evacuation optimization. A thorough literature review suggests most current imputation models either focus on the temporal nature of the traffic data and fail to consider the spatial information of neighboring locations, or assume the data follow a certain distribution. These two issues reduce the imputation accuracy and limit the use of the corresponding imputation methods, respectively. As a result, this paper presents a Kriging-based data imputation approach that is able to fully utilize the spatiotemporal correlation in the traffic data and that does not assume the data follow any distribution. A set of scenarios with different missing rates are used to evaluate the performance of the proposed method. The performance of the proposed method was compared with that of two other widely used methods, historical average and K-nearest neighborhood. Comparison results indicate that the proposed method has the highest imputation accuracy and is more flexible compared to the other methods. PMID:29664928
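
    A minimal ordinary-kriging sketch is given below: a combined space-time coordinate and an exponential semivariogram are used to impute one missing traffic volume. The variogram model, its parameters, and the coordinate scaling are assumptions for illustration, not the model fitted in the paper.

```python
import numpy as np

def ordinary_kriging(coords, values, targets, sill=1.0, rng_param=5.0, nugget=1e-6):
    """Minimal ordinary-kriging interpolator with an exponential semivariogram.
    Coordinates may mix space and time (e.g. [station_index, hour]) to mimic a
    spatiotemporal imputation; parameters and scaling are illustrative."""
    def gamma(h):                       # exponential semivariogram
        return nugget + sill * (1.0 - np.exp(-h / rng_param))

    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # ordinary kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[-1, -1] = 0.0
    preds = []
    for t in targets:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(coords - t, axis=-1))
        w = np.linalg.solve(A, b)[:n]   # kriging weights (sum to 1)
        preds.append(w @ values)
    return np.array(preds)

# hypothetical hourly volumes at 4 stations, with one value missing (station 2, hour 6)
coords = np.array([[s, h] for s in range(4) for h in range(12)
                   if not (s == 2 and h == 6)], dtype=float)
values = 100 + 10 * np.sin(coords[:, 1] / 2) + coords[:, 0]
print(ordinary_kriging(coords, values, np.array([[2.0, 6.0]])))
```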

  1. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data can be acquired by imaging the same area at multiple wavelengths, and they normally consist of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the corresponding large data volume, it is very difficult to transmit and store hyper-spectral images. Dimension reduction techniques for hyper-spectral images are desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, applying dimension reduction to compress the data volume is very feasible. This paper proposes a novel band-selection-based dimension reduction method that can adaptively select the bands containing more information and detail. The proposed method is based on principal component analysis (PCA) and computes an index for every band. The indexes obtained are then ranked in order of magnitude from large to small. Based on a threshold, the system can adaptively and reasonably select the bands. The proposed method can overcome the shortcomings induced by transform-based dimension reduction methods and prevents the original spectral information from being lost. The performance of the proposed method has been validated by implementing several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting the band images.
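
    The sketch below illustrates one plausible PCA-driven band index: each band is scored by its eigenvalue-weighted loadings on the leading principal components, ranked from large to small, and thresholded adaptively. The scoring formula and threshold rule are assumptions; the paper's exact index is not specified here.

```python
import numpy as np

def rank_bands_by_pca(cube, n_components=3):
    """Score each spectral band by the magnitude of its loadings on the leading
    principal components; a minimal sketch of a PCA-based band-selection index,
    not the exact index proposed in the paper."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    lead = eigvecs[:, -n_components:]                   # leading components
    weights = eigvals[-n_components:] / eigvals[-n_components:].sum()
    score = (np.abs(lead) * weights).sum(axis=1)        # per-band index
    order = np.argsort(score)[::-1]                     # large to small
    return order, score

# hypothetical 20-band cube
rng = np.random.default_rng(3)
cube = rng.normal(size=(32, 32, 20))
order, score = rank_bands_by_pca(cube)
selected = order[score[order] > score.mean()]           # assumed adaptive threshold
print(selected)
```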

  2. Two volume integral equations for the inhomogeneous and anisotropic forward problem in electroencephalography

    NASA Astrophysics Data System (ADS)

    Rahmouni, Lyes; Mitharwal, Rajendra; Andriulli, Francesco P.

    2017-11-01

    This work presents two new volume integral equations for the Electroencephalography (EEG) forward problem which, differently from the standard integral approaches in the domain, can handle heterogeneities and anisotropies of the head/brain conductivity profiles. The new formulations translate to the quasi-static regime some volume integral equation strategies that have been successfully applied to high frequency electromagnetic scattering problems. This has been obtained by extending, to the volume case, the two classical surface integral formulations used in EEG imaging and by introducing an extra surface equation, in addition to the volume ones, to properly handle boundary conditions. Numerical results corroborate theoretical treatments, showing the competitiveness of our new schemes over existing techniques and qualifying them as a valid alternative to differential equation based methods.

  3. Application of machine learning methods to describe the effects of conjugated equine estrogens therapy on region-specific brain volumes.

    PubMed

    Casanova, Ramon; Espeland, Mark A; Goveas, Joseph S; Davatzikos, Christos; Gaussoin, Sarah A; Maldjian, Joseph A; Brunner, Robert L; Kuller, Lewis H; Johnson, Karen C; Mysiw, W Jerry; Wagner, Benjamin; Resnick, Susan M

    2011-05-01

    Use of conjugated equine estrogens (CEE) has been linked to smaller regional brain volumes in women aged ≥65 years; however, it is unknown whether this results in a broad-based characteristic pattern of effects. Structural magnetic resonance imaging was used to assess regional volumes of normal tissue and ischemic lesions among 513 women who had been enrolled in a randomized clinical trial of CEE therapy for an average of 6.6 years, beginning at ages 65-80 years. A multivariate pattern analysis, based on a machine learning technique that combined Random Forest and logistic regression with L1 penalty, was applied to identify patterns among regional volumes associated with therapy and whether patterns discriminate between treatment groups. The multivariate pattern analysis detected smaller regional volumes of normal tissue within the limbic and temporal lobes among women who had been assigned to CEE therapy. Mean decrements ranged as high as 7% in the left entorhinal cortex and 5% in the left perirhinal cortex, which exceeded the effect sizes reported previously in the frontal lobe and hippocampus. Overall accuracy of classification based on these patterns, however, was projected to be only 54.5%. Prescription of CEE therapy for an average of 6.6 years is associated with lower regional brain volumes, but it does not induce a characteristic spatial pattern of changes in brain volumes of sufficient magnitude to discriminate between users and nonusers. Copyright © 2011 Elsevier Inc. All rights reserved.
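
    A sketch of a comparable multivariate pattern analysis, combining Random Forest feature screening with L1-penalised logistic regression in scikit-learn, is shown below; the simulated data, hyperparameters, and the exact way the two models are coupled are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Random Forest screens regional-volume features, then an L1-penalised logistic
# regression classifies treatment assignment; classification accuracy is estimated
# with cross-validation, analogous to the projected accuracy reported above.
rng = np.random.default_rng(4)
X = rng.normal(size=(513, 100))          # hypothetical regional volumes
y = rng.integers(0, 2, size=513)         # hypothetical CEE vs. placebo labels

clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```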

  4. Application of machine learning methods to describe the effects of conjugated equine estrogens therapy on region-specific brain volumes

    PubMed Central

    Casanova, Ramon; Espeland, Mark A.; Goveas, Joseph S.; Davatzikos, Christos; Gaussoin, Sarah A.; Maldjian, Joseph A.; Brunner, Robert L.; Kuller, Lewis H.; Johnson, Karen C.; Mysiw, W. Jerry; Wagner, Benjamin; Resnick, Susan M.

    2011-01-01

    Use of conjugated equine estrogens (CEE) has been linked to smaller regional brain volumes in women aged ≥65 years; however, it is unknown whether this results in a broad-based characteristic pattern of effects. Structural MRI was used to assess regional volumes of normal tissue and ischemic lesions among 513 women who had been enrolled in a randomized clinical trial of CEE therapy for an average of 6.6 years, beginning at ages 65-80 years. A multivariate pattern analysis, based on a machine learning technique that combined Random Forest and logistic regression with L1 penalty, was applied to identify patterns among regional volumes associated with therapy and whether patterns discriminate between treatment groups. The multivariate pattern analysis detected smaller regional volumes of normal tissue within the limbic and temporal lobes among women who had been assigned to CEE therapy. Mean decrements ranged as high as 7% in the left entorhinal cortex and 5% in the left perirhinal cortex, which exceeded the effect sizes reported previously in the frontal lobe and hippocampus. Overall accuracy of classification based on these patterns, however, was projected to be only 54.5%. Prescription of CEE therapy for an average of 6.6 years is associated with lower regional brain volumes, but it does not induce a characteristic spatial pattern of changes in brain volumes of sufficient magnitude to discriminate between users and non-users. PMID:21292420

  5. [Physical and mechanical properties of the thermosetting resin for crown and bridge cured by micro-wave heating].

    PubMed

    Kaneko, K

    1989-09-01

    A heating method using microwaves was utilized to obtain a strong thermosetting resin for crown and bridge. The physical and mechanical properties of the thermosetting resin were examined. The resin was cured in a shorter time by the microwave heating method than by the conventional heat-curing method, and the working time was reduced markedly. The base resins of the thermosetting resin for crown and bridge for the microwave heating method were 2 PA and diluent 3 G. A compounding volume of 30 wt% for diluent 3 G was considered good based on the results for compressive strength, bending strength and diametral tensile strength. Filler loadings of 200-230 g compounded into the 2 PA-3 G base resin system provided optimal compressive strength, bending strength and diametral tensile strength. A filler loading of 230 g provided optimal hardness and curing shrinkage rate, and the coefficient of thermal expansion became smaller as the compounding volume of the filler increased. The trial thermosetting resin for crown and bridge formed by the microwave heating method was not inferior to the conventional resin produced by the heat-curing method or the light-curing method.

  6. Detecting vapour bubbles in simulations of metastable water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González, Miguel A.; Abascal, Jose L. F.; Valeriani, Chantal, E-mail: christoph.dellago@univie.ac.at, E-mail: cvaleriani@quim.ucm.es

    2014-11-14

    The investigation of cavitation in metastable liquids with molecular simulations requires an appropriate definition of the volume of the vapour bubble forming within the metastable liquid phase. Commonly used approaches for bubble detection exhibit two significant flaws: first, when applied to water they often identify the voids within the hydrogen bond network as bubbles thus masking the signature of emerging bubbles and, second, they lack thermodynamic consistency. Here, we present two grid-based methods, the M-method and the V-method, to detect bubbles in metastable water specifically designed to address these shortcomings. The M-method incorporates information about neighbouring grid cells to distinguish between liquid- and vapour-like cells, which allows for a very sensitive detection of small bubbles and high spatial resolution of the detected bubbles. The V-method is calibrated such that its estimates for the bubble volume correspond to the average change in system volume and are thus thermodynamically consistent. Both methods are computationally inexpensive such that they can be used in molecular dynamics and Monte Carlo simulations of cavitation. We illustrate them by computing the free energy barrier and the size of the critical bubble for cavitation in water at negative pressure.
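
    In the same spirit (though not the calibrated M- or V-method), a grid-based detector can flag cells that contain no molecules and keep only those with enough empty neighbours, as in the sketch below; the grid size, neighbour criterion, and toy configuration are assumptions.

```python
import numpy as np

def vapour_cells(positions, box, n_cells, min_empty_neighbours=3):
    """Grid-based bubble detection sketch: cells containing no molecules are
    provisionally vapour-like; a cell is kept only if enough of its
    face-neighbours are also empty, loosely mimicking the neighbour criterion
    of the M-method. Parameters are illustrative, not the calibrated ones."""
    idx = np.floor(positions / box * n_cells).astype(int) % n_cells
    occupancy = np.zeros((n_cells,) * 3, dtype=int)
    np.add.at(occupancy, tuple(idx.T), 1)
    empty = occupancy == 0
    # count empty face-neighbours with periodic boundaries
    empty_neigh = sum(np.roll(empty, s, axis=a) for a in range(3) for s in (-1, 1))
    return empty & (empty_neigh >= min_empty_neighbours)

# hypothetical configuration: a liquid box with a spherical void carved out of it
rng = np.random.default_rng(5)
pos = rng.uniform(0, 3.0, size=(4000, 3))
pos = pos[np.linalg.norm(pos - 1.5, axis=1) > 0.6]
bubble = vapour_cells(pos, box=3.0, n_cells=15)
print("estimated bubble volume:", bubble.sum() * (3.0 / 15) ** 3)
```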

  7. TH-E-BRF-02: 4D-CT Ventilation Image-Based IMRT Plans Are Dosimetrically Comparable to SPECT Ventilation Image-Based Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kida, S; University of Tokyo Hospital, Bunkyo, Tokyo; Bal, M

    Purpose: An emerging lung ventilation imaging method based on 4D-CT can be used in radiotherapy to selectively avoid irradiating highly-functional lung regions, which may reduce pulmonary toxicity. Efforts to validate 4D-CT ventilation imaging have been focused on comparison with other imaging modalities including SPECT and xenon CT. The purpose of this study was to compare 4D-CT ventilation image-based functional IMRT plans with SPECT ventilation image-based plans as reference. Methods: 4D-CT and SPECT ventilation scans were acquired for five thoracic cancer patients in an IRB-approved prospective clinical trial. The ventilation images were created by quantitative analysis of regional volume changes (a surrogate for ventilation) using deformable image registration of the 4D-CT images. A pair of 4D-CT ventilation and SPECT ventilation image-based IMRT plans was created for each patient. Regional ventilation information was incorporated into lung dose-volume objectives for IMRT optimization by assigning different weights on a voxel-by-voxel basis. The objectives and constraints of the other structures in the plan were kept identical. The differences in the dose-volume metrics have been evaluated and tested by a paired t-test. SPECT ventilation was used to calculate the lung functional dose-volume metrics (i.e., mean dose, V20 and effective dose) for both 4D-CT ventilation image-based and SPECT ventilation image-based plans. Results: Overall there were no statistically significant differences in any dose-volume metrics between the 4D-CT and SPECT ventilation image-based plans. For example, the average functional mean lung dose of the 4D-CT plans was 26.1±9.15 Gy, which was comparable to 25.2±8.60 Gy for the SPECT plans (p = 0.89). For other critical organs and the PTV, nonsignificant differences were found as well. Conclusion: This study has demonstrated that 4D-CT ventilation image-based functional IMRT plans are dosimetrically comparable to SPECT ventilation image-based plans, providing evidence to use 4D-CT ventilation imaging for clinical applications. Supported in part by Free to Breathe Young Investigator Research Grant and NIH/NCI R01 CA 093626. The authors thank Philips Radiation Oncology Systems for the Pinnacle3 treatment planning systems.

  8. Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X.; Buscheck, T. A.; Mansoor, K.

    We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and the maximum depth up to 2000 m. Scatter plots show the MT detection thresholds for a trio of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface-based sensors can detect a brine and CO2 plume so long as the plume depth, TDS concentration and volume are above the thresholds. However, it is unlikely to detect a plume at a depth larger than 1000 m with a change of TDS concentration smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provides a more realistic view of the leakage plume distribution than rectangular synthetic plumes in this sensitivity study, and it will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate the leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity study of geophysical monitoring methods for leakage detection. The results come in handy for rapid assessment of leakage detectability.

  9. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  10. Three-Dimensional Eyeball and Orbit Volume Modification After LeFort III Midface Distraction.

    PubMed

    Smektala, Tomasz; Nysjö, Johan; Thor, Andreas; Homik, Aleksandra; Sporniak-Tutak, Katarzyna; Safranow, Krzysztof; Dowgierd, Krzysztof; Olszewski, Raphael

    2015-07-01

    The aim of our study was to evaluate orbital volume modification with LeFort III midface distraction in patients with craniosynostosis and its influence on eyeball volume and axial diameter modification. Orbital volume was assessed by the semiautomatic segmentation method based on deformable surface models and on 3-dimensional (3D) interaction with haptics. The eyeball volumes and diameters were automatically calculated after manual segmentation of computed tomographic scans with 3D Slicer software. The mean, minimal, and maximal differences as well as the standard deviation and intraclass correlation coefficient (ICC) for intraobserver and interobserver measurement reliability were calculated. The Wilcoxon signed rank test was used to compare measured values before and after surgery. P < 0.05 was considered statistically significant. Intraobserver and interobserver ICC for haptic-aided semiautomatic orbital volume measurements were 0.98 and 0.99, respectively. The intraobserver and interobserver ICC values for manual segmentation of the eyeball volume were 0.87 and 0.86, respectively. The orbital volume increased significantly after surgery: 30.32% (mean, 5.96 mL) for the left orbit and 31.04% (mean, 6.31 mL) for the right orbit. The mean increase in eyeball volume was 12.3%. The mean increases in the eyeball axial dimensions were 7.3%, 9.3%, and 4.4% for the X-, Y-, and Z-axes, respectively. The Wilcoxon signed rank test showed that the differences between preoperative and postoperative eyeball volumes, as well as in the diameters along the X- and Y-axes, were statistically significant. Midface distraction in patients with syndromic craniostenosis results in a significant increase (P < 0.05) in the orbit and eyeball volumes. The 2 methods (haptic-aided semiautomatic segmentation and manual 3D Slicer segmentation) are reproducible techniques for orbit and eyeball volume measurements.

  11. An improved distance-to-dose correlation for predicting bladder and rectum dose-volumes in knowledge-based VMAT planning for prostate cancer

    NASA Astrophysics Data System (ADS)

    Wall, Phillip D. H.; Carver, Robert L.; Fontenot, Jonas D.

    2018-01-01

    The overlap volume histogram (OVH) is an anatomical metric commonly used to quantify the geometric relationship between an organ at risk (OAR) and target volume when predicting expected dose-volumes in knowledge-based planning (KBP). This work investigated the influence of additional variables contributing to variations in the assumed linear DVH-OVH correlation for the bladder and rectum in VMAT plans of prostate patients, with the goal of increasing prediction accuracy and achievability of knowledge-based planning methods. VMAT plans were retrospectively generated for 124 prostate patients using multi-criteria optimization. DVHs quantified patient dosimetric data while OVHs quantified patient anatomical information. The DVH-OVH correlations were calculated for fractional bladder and rectum volumes of 30, 50, 65, and 80%. Correlations between potential influencing factors and dose were quantified using the Pearson product-moment correlation coefficient (R). Factors analyzed included the derivative of the OVH, prescribed dose, PTV volume, bladder volume, rectum volume, and in-field OAR volume. Out of the selected factors, only the in-field bladder volume (mean R = 0.86) showed a strong correlation with bladder doses. Similarly, only the in-field rectal volume (mean R = 0.76) showed a strong correlation with rectal doses. Therefore, an OVH formalism accounting for in-field OAR volumes was developed to determine the extent to which it improved the DVH-OVH correlation. Including the in-field factor improved the DVH-OVH correlation, with the mean R values over the fractional volumes studied improving from -0.79 to -0.85 and -0.82 to -0.86 for the bladder and rectum, respectively. A re-planning study was performed on 31 randomly selected database patients to verify the increased accuracy of KBP dose predictions by accounting for bladder and rectum volume within treatment fields. The in-field OVH led to significantly more precise and fewer unachievable KBP predictions, especially for lower bladder and rectum dose-volumes.
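
    For readers unfamiliar with the OVH, the sketch below computes it directly from binary masks: for each expansion distance of the target, the fraction of the OAR volume lying within that distance of the PTV surface. The masks and distances are illustrative; the in-field restriction proposed above would additionally mask the OAR to the treatment field.

```python
import numpy as np
from scipy import ndimage

def overlap_volume_histogram(ptv_mask, oar_mask, voxel_mm, distances_mm):
    """Overlap volume histogram: for each signed expansion distance of the PTV,
    the fraction of the OAR volume falling inside the expanded (or contracted)
    target. A minimal sketch of the OVH concept, not the planning-system code."""
    # signed distance from the PTV surface (negative inside, positive outside)
    d_out = ndimage.distance_transform_edt(~ptv_mask, sampling=voxel_mm)
    d_in = ndimage.distance_transform_edt(ptv_mask, sampling=voxel_mm)
    signed = np.where(ptv_mask, -d_in, d_out)
    oar_dist = signed[oar_mask]
    return [(oar_dist <= t).mean() for t in distances_mm]

# hypothetical masks: cubic PTV and a nearby slab-shaped OAR, 2 mm voxels
ptv = np.zeros((60, 60, 60), dtype=bool); ptv[20:40, 20:40, 20:40] = True
oar = np.zeros_like(ptv); oar[20:40, 42:50, 20:40] = True
print(overlap_volume_histogram(ptv, oar, (2.0, 2.0, 2.0), [0, 5, 10, 20]))
```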

  12. Photon activation-15O decay studies of tumor blood flow.

    PubMed

    Ten Haken, R K; Nussbaum, G H; Emami, B; Hughes, W L

    1981-01-01

    A direct, noninvasive method for measuring absolute values of specific capillary blood flow in living tissue is described. The method is based on the photon activation, in situ, of tissue elements and the measurement of the subsequent decay of the positron activity induced, employing coincidence detection of the photon pairs produced in positron annihilation. Analysis of the time-dependent coincidence spectrum reveals the contribution to the total signal from the decay of 15O, from which the specific capillary blood flow in the imaged, activated volume is ultimately determined. By virtue of its introduction of the radioisotope of interest (15O) directly and uniformly into the tissue volume under investigation, the method described permits both the nonperfused and well perfused fractions of an activated volume to be estimated and hence, the average specific blood flow within imaged tumor volumes to be computed. The model employed to describe and analyze the data is discussed in detail. Results of application of the technique to measurement of specific blood flow in rhabdomyosarcoma tumors grown in WAG/Rij rats are presented and discussed. The method is shown to be reliable and well suited to studies designed to determine the effects of various agents, such as heat, radiation and drugs, on tumor blood flow.

  13. Assessment of growth dynamics of human cranium middle fossa in foetal period.

    PubMed

    Skomra, Andrzej; Kędzia, Alicja; Dudek, Krzysztof; Bogacz, Wiesław

    2014-01-01

    A review of the available literature demonstrated a paucity of studies of the cranial base. The goal of the study was to analyse the middle fossa of the human cranium in the foetal period against the other fossae. The survey material consisted of 110 human foetuses at a morphological age of 16-28 weeks of foetal life, CRL 98-220 mm. Anthropological methods, the preparation method, the reverse method and statistical analysis were utilized. The survey incorporated the following computer programmes: Renishaw, TraceSurf, AutoCAD, CATIA. The reverse method seems especially interesting: an impression is taken with polysiloxane (a silicone elastomer of high adhesive power used in dentistry) with 18 D 4823 activator. The resulting impression accurately reflected the complex shape of the cranial base. On assessing the relative growth rate of the cranial middle fossa, the rate was found to be stable (linear model) for the whole of the analysed period at 0.19%/week, which corresponds to gradual and steady growth of the middle fossa in relation to the whole of the cranial base. At the same time, from the 16th to the 28th week of foetal life, the relative volume of the cranial middle fossa increases more intensively than that of the anterior fossa, whereas its growth compared with the posterior fossa is definitely slower. In the analysed period, the growth rate of the middle fossa of the cranial base was greater in the 4th and 5th months than in the 6th and 7th months of foetal life. The investigations revealed cranial base asymmetry favouring the left side. Furthermore, the anterior fossa volume on the left side is significantly bigger than that on the right side. The volume growth rate is more intensive in the 4th and 5th than in the 6th and 7th months of foetal life. In the examined period, the relative growth rate of the middle fossa of the cranial base is 0.19%/week and it is stable (linear model). The study revealed correlations in the form of mathematical models, which enabled foetal age assessment.

  14. Quantification of γ-aminobutyric acid (GABA) in 1H MRS volumes composed heterogeneously of grey and white matter.

    PubMed

    Mikkelsen, Mark; Singh, Krish D; Brealy, Jennifer A; Linden, David E J; Evans, C John

    2016-11-01

    The quantification of γ-aminobutyric acid (GABA) concentration using localised MRS suffers from partial volume effects related to differences in the intrinsic concentration of GABA in grey (GM) and white (WM) matter. These differences can be represented as a ratio between intrinsic GABA in GM and WM: r_M. Individual differences in GM tissue volume can therefore potentially drive apparent concentration differences. Here, a quantification method that corrects for these effects is formulated and empirically validated. Quantification using tissue water as an internal concentration reference has been described previously. Partial volume effects attributed to r_M can be accounted for by incorporating into this established method an additional multiplicative correction factor based on measured or literature values of r_M weighted by the proportion of GM and WM within tissue-segmented MRS volumes. Simulations were performed to test the sensitivity of this correction using different assumptions of r_M taken from previous studies. The tissue correction method was then validated by applying it to an independent dataset of in vivo GABA measurements using an empirically measured value of r_M. It was shown that incorrect assumptions of r_M can lead to overcorrection and inflation of GABA concentration measurements quantified in volumes composed predominantly of WM. For the independent dataset, GABA concentration was linearly related to GM tissue volume when only the water signal was corrected for partial volume effects. Performing a full correction that additionally accounts for partial volume effects ascribed to r_M successfully removed this dependence. With an appropriate assumption of the ratio of intrinsic GABA concentration in GM and WM, GABA measurements can be corrected for partial volume effects, potentially leading to a reduction in between-participant variance, increased power in statistical tests and better discriminability of true effects. Copyright © 2016 John Wiley & Sons, Ltd.
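
    The multiplicative correction can be illustrated with a simplified formula: if intrinsic GABA in WM is 1/r_M of that in GM and CSF is assumed GABA-free, the measured signal scales with f_GM + f_WM/r_M, and dividing by that factor yields a tissue-corrected estimate. The sketch below encodes only this idea and omits the water-reference and relaxation terms of the full quantification.

```python
def tissue_corrected_gaba(gaba_raw, f_gm, f_wm, r_m):
    """Partial-volume correction sketch: assuming intrinsic GABA in WM is
    1/r_m of that in GM and CSF contributes no GABA, the measured value is
    divided by (f_gm + f_wm / r_m). A simplified illustration of the idea,
    not the paper's exact quantification formula."""
    return gaba_raw / (f_gm + f_wm / r_m)

# assumed example: a voxel that is 40% GM, 50% WM, 10% CSF, with r_m = 2
print(tissue_corrected_gaba(1.0, 0.40, 0.50, 2.0))
```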

  15. Comparison of the effect of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume on midwifery students: A randomized clinical trial

    PubMed Central

    Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud

    2016-01-01

    Introduction: Delay in the diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students in Mashhad School of Nursing and Midwifery in 2013. The samples were selected by the convenience method and were randomly divided into three groups: web-based, simulation-based, and conventional training. The three groups participated in an eight-station practical test before and 1 week after the training course; the students in the web-based group were then trained online for 1 week, the students in the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students in the conventional group received a 4-h presentation by the researchers. The data gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed using statistical software, version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in all three groups at all stations (stations 1, 2, 4, 5, 6 and 7: P = 0.001; station 8: P = 0.027) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage, but no significant difference was found among the three training groups. Web-based training can be used as a substitute for, or supplement to, the two other more common simulation-based and conventional methods. PMID:27500175

  16. Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation

    PubMed Central

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2015-01-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117

  17. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  18. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalvati, Farzad, E-mail: farzad.khalvati@uwaterloo.ca; Tizhoosh, Hamid R.; Salmanpour, Aryan

    Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for the prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.

  19. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalvati, Farzad, E-mail: farzad.khalvati@uwaterloo.ca; Tizhoosh, Hamid R.; Salmanpour, Aryan

    2013-12-15

    Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for the prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.

  20. Sector-Based Detection for Hands-Free Speech Enhancement in Cars

    NASA Astrophysics Data System (ADS)

    Lathoud, Guillaume; Bourgeois, Julien; Freudenberger, Jürgen

    2006-12-01

    Adaptation control of beamforming interference cancellation techniques is investigated for in-car speech acquisition. Two efficient adaptation control methods are proposed that avoid target cancellation. The "implicit" method varies the step-size continuously, based on the filtered output signal. The "explicit" method decides in a binary manner whether to adapt or not, based on a novel estimate of target and interference energies. It estimates the average delay-sum power within a volume of space, for the same cost as the classical delay-sum. Experiments on real in-car data validate both methods, including a case with background road noise recorded at driving speed.
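
    The quantity averaged by the "explicit" method is the classical delay-and-sum output power; a minimal sketch of that power estimate for a single look point is given below, with an assumed linear microphone array, sampling rate, and integer-sample delays.

```python
import numpy as np

def delay_sum_power(signals, mic_pos, look_point, fs, c=343.0):
    """Classical delay-and-sum output power for a single look point, using
    integer-sample delays; a minimal sketch of the quantity the sector-based
    ('explicit') method averages over a volume of space."""
    dists = np.linalg.norm(mic_pos - look_point, axis=1)
    delays = np.round((dists - dists.min()) / c * fs).astype(int)
    n = signals.shape[1] - delays.max()
    aligned = np.stack([sig[d:d + n] for sig, d in zip(signals, delays)])
    return np.mean(aligned.mean(axis=0) ** 2)   # power of the beamformer output

# hypothetical 4-microphone linear array and a source at the look point
fs = 16000
rng = np.random.default_rng(6)
mics = np.array([[0.00, 0, 0], [0.05, 0, 0], [0.10, 0, 0], [0.15, 0, 0]])
src = np.array([1.0, 0.5, 0.0])
s = rng.normal(size=fs)
dists = np.linalg.norm(mics - src, axis=1)
sigs = np.stack([np.roll(s, int(round(d / 343.0 * fs))) for d in dists])
print(delay_sum_power(sigs, mics, src, fs))
```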

  1. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer

    NASA Astrophysics Data System (ADS)

    Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-01

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  2. Intra-patient semi-automated segmentation of the cervix-uterus in CT-images for adaptive radiotherapy of cervical cancer.

    PubMed

    Bondar, M Luiza; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben

    2013-08-07

    For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.

  3. Edge gradients evaluation for 2D hybrid finite volume method model

    USDA-ARS?s Scientific Manuscript database

    In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Taylor…

  4. Brain Volume Estimation Enhancement by Morphological Image Processing Tools.

    PubMed

    Zeinali, R; Keshtkar, A; Zamani, A; Gharehaghaji, N

    2017-12-01

    Brain volume estimation is important for many neurological applications, such as measuring brain growth and detecting changes in normal and abnormal patients, so accurate brain volume measurement matters. Magnetic resonance imaging (MRI) is the method of choice for volume quantification because of its excellent image resolution and between-tissue contrast. The stereology method is a good volume estimator, but it requires enough MRI slices at sufficient resolution. In this study, the aim was to enhance the stereology method so that brain volume can be estimated from fewer MRI slices at lower resolution. A program for calculating volume with the stereology method was developed, and the method was enhanced by applying morphological dilation. For evaluation, T1-weighted MR images from the BrainWeb digital phantom, which provides ground truth, were used. The volumes of 20 normal brains extracted from BrainWeb were calculated, and the volumes of white matter, gray matter and cerebrospinal fluid at the given dimensions were estimated correctly. Volumes were calculated with the stereology method in three cases and the root mean square error (RMSE) was measured: Case I with T=5, d=5; Case II with T=10, d=10; and Case III with T=20, d=20 (T = slice thickness, d = resolution, as stereology parameters). Comparing the results of the two methods shows that the RMSE values of the proposed method are smaller than those of the plain stereology method. Morphological dilation thus enhances stereological volume estimation; with fewer MRI slices and fewer test points, the enhanced method performs much better than the standard stereology method.
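
    The record relies on stereological (point-counting) volume estimation enhanced with morphological dilation. The sketch below is a minimal Cavalieri-style estimator, assuming 1 mm isotropic in-plane pixels and using SciPy's binary_dilation as a stand-in for the morphological step described above; the grid spacing, toy phantom, and parameter names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def cavalieri_volume(masks, slice_thickness_mm, grid_spacing_mm, dilate=False):
    """Cavalieri/point-counting volume estimate from a stack of binary slice masks.

    V ≈ t * a_p * (number of test points falling inside the structure),
    where a_p is the area represented by one grid point.
    """
    area_per_point = grid_spacing_mm ** 2
    hits = 0
    for mask in masks:
        m = binary_dilation(mask) if dilate else mask
        # Sample the mask on a regular test-point grid (assumes 1 mm in-plane pixels).
        step = max(1, int(round(grid_spacing_mm)))
        hits += int(m[::step, ::step].sum())
    return slice_thickness_mm * area_per_point * hits

# Toy stack: a 40 mm diameter "sphere" sampled on 1 mm isotropic voxels.
zz, yy, xx = np.mgrid[-20:20, -20:20, -20:20]
sphere = (xx**2 + yy**2 + zz**2) <= 20**2
print(cavalieri_volume(sphere, slice_thickness_mm=1.0, grid_spacing_mm=2.0))
print(4 / 3 * np.pi * 20**3)  # analytic sphere volume for comparison
```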

  5. Impact of Strategically Located White Matter Hyperintensities on Cognition in Memory Clinic Patients with Small Vessel Disease

    PubMed Central

    Hilal, Saima; Kuijf, Hugo J.; Ikram, Mohammad Kamran; Xu, Xin; Tan, Boon Yeow; Venketasubramanian, Narayanaswamy; Postma, Albert; Biessels, Geert Jan; Chen, Christopher P. L. H.

    2016-01-01

    Background and Purpose Studies on the impact of small vessel disease (SVD) on cognition generally focus on white matter hyperintensity (WMH) volume. The extent to which WMH location relates to cognitive performance has received less attention, but is likely to be functionally important. We examined the relation between WMH location and cognition in a memory clinic cohort of patients with sporadic SVD. Methods A total of 167 patients with SVD were recruited from memory clinics. Assumption-free region of interest-based analyses based on major white matter tracts and voxel-wise analyses were used to determine the association between WMH location and executive functioning, visuomotor speed and memory. Results Region of interest-based analyses showed that WMHs located particularly within the anterior thalamic radiation and forceps minor were inversely associated with both executive functioning and visuomotor speed, independent of total WMH volume. Memory was significantly associated with WMH volume in the forceps minor, independent of total WMH volume. An independent assumption-free voxel-wise analysis identified strategic voxels in these same tracts. Region of interest-based analyses showed that WMH volume within the anterior thalamic radiation explained 6.8% of variance in executive functioning, compared to 3.9% for total WMH volume; WMH volume within the forceps minor explained 4.6% of variance in visuomotor speed and 4.2% of variance in memory, compared to 1.8% and 1.3% respectively for total WMH volume. Conclusions Our findings identify the anterior thalamic radiation and forceps minor as strategic white matter tracts in which WMHs are most strongly associated with cognitive impairment in memory clinic patients with SVD. WMH volumes in individual tracts explained more variance in cognition than total WMH burden, emphasizing the importance of lesion location when addressing the functional consequences of WMHs. PMID:27824925

  6. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging

    NASA Astrophysics Data System (ADS)

    Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md

    2011-10-01

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
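
    The full pipeline above (template images, detection of defective and mis-calibrated elements, inpainting-based correction) is not reproduced here; the sketch below only illustrates one ingredient, a simple derivative-based flagging of outlier detector columns in the mean flat-panel response. The threshold factor k and the simulated projections are assumptions, not the published detection rule.

```python
import numpy as np

def flag_defective_columns(projections: np.ndarray, k: float = 5.0) -> np.ndarray:
    """Flag detector columns whose mean response deviates sharply from neighbours.

    projections: stack of 2D flat-panel projections, shape (n_views, rows, cols).
    Returns a boolean array of length cols marking suspicious columns.
    """
    # Average over views and rows -> one response value per detector column.
    column_response = projections.mean(axis=(0, 1))
    # Second difference highlights isolated jumps typical of bad columns.
    d2 = np.abs(np.diff(column_response, n=2))
    d2 = np.pad(d2, (1, 1), mode="edge")
    thresh = k * np.median(d2) + 1e-12
    return d2 > thresh

# Toy data: smooth projections with one under-responding detector column.
proj = np.random.normal(1000.0, 5.0, size=(36, 64, 128))
proj[:, :, 40] *= 0.2  # simulate a defective detector column
print(np.where(flag_defective_columns(proj))[0])
```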

  7. Characterization of Chronic Aortic and Mitral Regurgitation Undergoing Valve Surgery Using Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Gao, Sinsia A; Johnsson, Åse A; Lagerstrand, Kerstin M; Bech-Hanssen, Odd

    2017-06-15

    Grading of chronic aortic regurgitation (AR) and mitral regurgitation (MR) by cardiovascular magnetic resonance (CMR) is currently based on thresholds which are neither modality specific nor quantification-method specific. Accordingly, this study sought to identify CMR-specific and quantification-method-specific thresholds for regurgitant volumes (RVols), RVol indexes, and regurgitant fractions (RFs) which denote severe chronic AR or MR with an indication for surgery. The study comprised patients with moderate and severe chronic AR (n = 38) and MR (n = 40). Echocardiography and CMR were performed at baseline and in all operated AR/MR patients (n = 23/25) 10 ± 1 months after surgery. CMR quantification of AR used a direct method (aortic flow) and an indirect method (left ventricular stroke volume [LVSV] - pulmonary stroke volume [PuSV]); MR was quantified with 2 indirect methods (LVSV - aortic forward flow [AoFF]; mitral inflow [MiIF] - AoFF). All operated patients had severe regurgitation and benefited from surgery, indicated by a significant postsurgical reduction in end-diastolic volume index and improvement or relief of symptoms. The discriminatory ability between moderate and severe AR was strong for RVol >40 ml, RVol index >20 ml/m², and RF >30% (direct method) and RVol >62 ml, RVol index >31 ml/m², and RF >36% (LVSV-PuSV), with a negative likelihood ratio ≤0.2. In MR, the discriminatory ability was very strong for RVol >64 ml, RVol index >32 ml/m², and RF >41% (LVSV-AoFF) and RVol >40 ml, RVol index >20 ml/m², and RF >30% (MiIF-AoFF), with a negative likelihood ratio <0.1. In conclusion, CMR grading of chronic AR and MR should be based on modality-specific and quantification-method-specific thresholds, as they differ largely from recognized guideline criteria, to ensure appropriate clinical decision-making and timing of surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
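
    The thresholds quoted above are straightforward to apply once RVol, RVol index, and RF are computed. The sketch below works through the arithmetic for the indirect LVSV-AoFF method for MR, using the severity cut-offs stated in the record; the patient values (LVSV, AoFF, BSA) and the function name are hypothetical.

```python
def grade_mr_lvsv_method(lvsv_ml, aoff_ml, bsa_m2):
    """Indirect MR quantification: RVol = LVSV - aortic forward flow (LVSV-AoFF)."""
    rvol = lvsv_ml - aoff_ml
    rvol_index = rvol / bsa_m2
    rf = 100.0 * rvol / lvsv_ml
    # Severity thresholds reported in the record for the LVSV-AoFF method.
    severe = rvol > 64 or rvol_index > 32 or rf > 41
    return rvol, rvol_index, rf, severe

# Hypothetical patient: LVSV 130 ml, aortic forward flow 55 ml, BSA 1.9 m².
print(grade_mr_lvsv_method(130.0, 55.0, 1.9))
```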

  8. AFM-porosimetry: density and pore volume measurements of particulate materials.

    PubMed

    Sörensen, Malin H; Valle-Delgado, Juan J; Corkery, Robert W; Rutland, Mark W; Alberius, Peter C

    2008-06-01

    We introduced the novel technique of AFM-porosimetry and applied it to measure the total pore volume of porous particles with a spherical geometry. The methodology is based on using an atomic force microscope as a balance to measure masses of individual particles. Several particles within the same batch were measured, and by plotting particle mass versus particle volume, the bulk density of the sample can be extracted from the slope of the linear fit. The pore volume is then calculated from the densities of the bulk and matrix materials, respectively. In contrast to nitrogen sorption and mercury porosimetry, this method is capable of measuring the total pore volume regardless of pore size distribution and pore connectivity. In this study, three porous samples were investigated by AFM-porosimetry: one ordered mesoporous sample and two disordered foam structures. All samples were based on a matrix of amorphous silica templated by a block copolymer, Pluronic F127, swollen to various degrees with poly(propylene glycol). In addition, the density of silica spheres without a template was measured by two independent techniques: AFM and the Archimedes principle.
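
    The density and pore-volume arithmetic described above can be made concrete: bulk density comes from the slope of a mass-versus-volume fit across particles, and the specific pore volume follows from the bulk and matrix densities as v_pore = 1/ρ_bulk − 1/ρ_matrix. The per-particle numbers and the assumed silica matrix density of 2.2 g/cm³ below are illustrative, not measurements from the study.

```python
import numpy as np

# Hypothetical per-particle measurements: volume (cm³) from imaging, mass (g) from AFM.
volumes_cm3 = np.array([1.2e-9, 2.5e-9, 3.1e-9, 4.8e-9, 6.0e-9])
masses_g = np.array([1.0e-9, 2.1e-9, 2.6e-9, 4.0e-9, 5.1e-9])

# Bulk density = slope of a zero-intercept least-squares fit of mass against volume.
slope = float(volumes_cm3 @ masses_g / (volumes_cm3 @ volumes_cm3))
rho_bulk = slope                      # g/cm³
rho_matrix = 2.2                      # assumed skeletal density of amorphous silica, g/cm³

# Specific pore volume (cm³/g): total volume per gram minus matrix volume per gram.
v_pore = 1.0 / rho_bulk - 1.0 / rho_matrix
print(f"bulk density ≈ {rho_bulk:.2f} g/cm³, pore volume ≈ {v_pore:.2f} cm³/g")
```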

  9. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) or lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization combines the advantages of the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and of the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  10. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  11. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process continues until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5% for volumes ≥5 ml or <5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy level was within ±10% for volumes in the range 20-110 ml. A preliminary application to patients demonstrated the suitability of the method in a clinical setting. The MC-guided delineation of tumor volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumors of irregular shape and/or nonuniform distribution of the background activity are still in progress.
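
    The iterative loop (i)-(iii) described above can be sketched schematically. In the code below, threshold_of and recovery_coeff_of stand in for the experimental (or Monte Carlo) threshold-volume and recovery-coefficient calibrations; the toy image, placeholder calibration lambdas, and convergence tolerance are assumptions for illustration only, not the published RIThM calibration.

```python
import numpy as np

def rithm_like_volume(image, background, voxel_ml,
                      threshold_of, recovery_coeff_of,
                      v0_ml=10.0, tol_ml=0.05, max_iter=50):
    """Schematic iterative thresholding loop in the spirit of the record above.

    threshold_of(sbr, volume_ml) and recovery_coeff_of(volume_ml) are placeholders
    for the calibration curves; they are not the published data.
    """
    volume = v0_ml
    for _ in range(max_iter):
        rc = recovery_coeff_of(volume)
        sbr = (image.max() / rc) / background        # RC-corrected source-to-background ratio
        frac = threshold_of(sbr, volume)             # threshold as a fraction of the maximum
        mask = image >= frac * image.max()
        new_volume = mask.sum() * voxel_ml
        if abs(new_volume - volume) < tol_ml:
            break
        volume = new_volume
    return volume, mask

# Toy example with crude placeholder calibrations (illustrative only).
rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.05, (40, 40, 40))
img[15:25, 15:25, 15:25] += 4.0                      # "hot" lesion on background
vol, mask = rithm_like_volume(
    img, background=1.0, voxel_ml=0.05,
    threshold_of=lambda sbr, v: 0.4 + 0.2 / max(sbr, 1.0),
    recovery_coeff_of=lambda v: min(1.0, 0.5 + 0.05 * v),
)
print(f"estimated volume ≈ {vol:.1f} ml")
```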

  12. Automated tumor volumetry using computer-aided image segmentation.

    PubMed

    Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos

    2015-05-01

    Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
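
    As a rough illustration of seed-based segmentation with a geodesic distance transform, the sketch below grows a geodesic distance from user seeds over an intensity-based cost map (via scikit-image's MCP_Geometric) and thresholds it. The cost definition, seed handling, and threshold are illustrative choices under stated assumptions, not the method validated in the record.

```python
import numpy as np
from skimage.graph import MCP_Geometric

def geodesic_segmentation(image, seeds, beta=10.0, dist_thresh=25.0):
    """Segment by thresholding a geodesic distance grown from user seeds.

    The cost of stepping through a pixel grows with its intensity difference from
    the mean seed intensity, so the geodesic front stays inside homogeneous tissue.
    """
    seed_mean = np.mean([image[r, c] for r, c in seeds])
    cost = 1.0 + beta * np.abs(image - seed_mean)     # strictly positive step costs
    mcp = MCP_Geometric(cost)
    distances, _ = mcp.find_costs(seeds)
    return distances <= dist_thresh

# Toy image: bright circular "tumour" on a darker background, one seed at its centre.
yy, xx = np.mgrid[0:100, 0:100]
img = 0.2 + 0.8 * (((yy - 50) ** 2 + (xx - 50) ** 2) <= 20 ** 2)
img = img + np.random.default_rng(1).normal(0, 0.02, img.shape)
mask = geodesic_segmentation(img, seeds=[(50, 50)])
print(mask.sum(), "pixels segmented (true disc area ≈", int(np.pi * 20 ** 2), ")")
```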

  13. Automated volumetric lung segmentation of thoracic CT images using fully convolutional neural network

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Beymer, David; Syeda-Mahmood, Tanveer

    2018-02-01

    Deep learning models such as convolutional neural networks (CNNs) have achieved state-of-the-art performance in 2D medical image analysis. In clinical practice, however, most acquired and analyzed medical data take the form of 3D volumes. In this paper, we present a fast and efficient 3D lung segmentation method based on V-Net, a purely volumetric fully convolutional network. Our model is trained on chest CT images through volume-to-volume learning, which mitigates overfitting on the limited number of annotated training data. A pre-processing step and an objective function based on the Dice coefficient address the imbalance between the number of lung voxels and background voxels. We leverage the V-Net model by using batch normalization during training, which allows a higher learning rate and accelerates training of the model. To compensate for the limited training data and obtain better robustness, we augment the data with random linear and non-linear transformations. Experimental results on two challenging medical image datasets show that the proposed method achieves competitive results at a much faster speed.

  14. Improvable method for Halon 1301 concentration measurement based on infrared absorption

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Lu, Song; Guan, Yu

    2015-09-01

    Halon 1301 has attracted much interest because of its pervasive use as an effective fire suppressant in aircraft-related fires, and the measurement of suppressant agent concentration is of particular interest. In this work, a Halon 1301 concentration measurement method based on the Beer-Lambert law is developed. IR light is transmitted through the mixed gas, and the light intensity with and without the agent present is measured. The intensity ratio is a function of the volume percentage of Halon 1301, and the voltage output of the detector is proportional to the light intensity, so a relationship between volume percentage and voltage ratio can be established. The concentration measurement system shows a relative error of less than ±2.50% and a full-scale error within 1.20%. This work also discusses the effect of temperature and relative humidity (RH) on the calibration. The experimental voltage ratio versus Halon 1301 volume percentage curves show that the voltage ratio drops significantly as temperature rises from 25 to 100 °C and decreases as RH rises from 0% to 100%.
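
    The record calibrates the detector voltage ratio against Halon 1301 volume percentage empirically; the underlying Beer-Lambert relation can nevertheless be written down directly. In the sketch below, the absorptivity and path length are made-up constants, and the voltage ratio is treated as proportional to the intensity ratio as the record describes; none of the numbers are from the paper.

```python
import math

def halon_volume_fraction(v_ratio, absorptivity, path_length_cm):
    """Invert Beer-Lambert: I/I0 = exp(-a * c * L)  =>  c = -ln(I/I0) / (a * L).

    v_ratio stands in for the detector voltage ratio V/V0, assumed proportional
    to the transmitted/incident intensity ratio I/I0.
    """
    return -math.log(v_ratio) / (absorptivity * path_length_cm)

# Hypothetical calibration constants (not from the paper): a = 0.015 per (% · cm), L = 10 cm.
for v in (0.9, 0.7, 0.5):
    c = halon_volume_fraction(v, absorptivity=0.015, path_length_cm=10.0)
    print(f"V/V0 = {v:.2f}  ->  ≈ {c:.1f} vol% Halon 1301")
```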

  15. Age accounts for racial differences in ischemic stroke volume in a population-based study.

    PubMed

    Zakaria, Tarek; Lindsell, Christopher J; Kleindorfer, Dawn; Alwell, Kathleen; Moomaw, Charles J; Woo, Daniel; Szaflarski, Jerzy P; Khoury, Jane; Miller, Rosie; Broderick, Joseph P; Kissela, Brett

    2008-01-01

    The stroke volume among black ischemic stroke patients in phase I of the population-based Greater Cincinnati/Northern Kentucky Stroke Study (GCNKSS) was smaller than reported in acute stroke studies, with a median stroke volume of 2.5 cm³. However, it was not known whether stroke volume was similar between black and white patients within the same study population. Phase II of the GCNKSS identified all ischemic strokes between July 1993 and June 1994. Stroke volume was estimated by study physicians using the modified ellipsoid method. Analysis of stroke volume by race, sex and age was performed for strokes with a measurable lesion of ≥0.5 cm³. Among verified cases of ischemic stroke, 334 patients were eligible for this analysis: 191 whites (57%) and 143 blacks (43%). The mean age was 69.4 years. The median stroke volume for all patients was 8.8 cm³ (range 0.5-540), with a mean of 36.4 cm³. Stroke volume did not differ between men and women, and it tended to increase with age. Although stroke volume was significantly higher among whites, age was a confounding factor; subsequent analysis of stroke volume stratified by age showed no difference between blacks and whites in any age group. Our data show that most ischemic stroke lesions, regardless of race, are small, which may be an important reason for the low percentage of strokes currently treated with tissue-type plasminogen activator. The association of age with stroke volume requires further study. Copyright 2008 S. Karger AG, Basel.
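
    Assuming the "modified ellipsoid method" refers to the common ABC/2 approximation for lesion volume, the calculation is a one-liner; the example diameters below are illustrative and not taken from the study.

```python
def abc_over_2(a_cm, b_cm, c_cm):
    """Modified ellipsoid ("ABC/2") lesion volume in cm³ from three orthogonal diameters.

    The full ellipsoid volume is (pi/6)*A*B*C ≈ 0.52*A*B*C, commonly rounded to A*B*C/2.
    """
    return a_cm * b_cm * c_cm / 2.0

# Illustrative lesion measuring 3 x 2.5 x 2 cm on axial imaging.
print(f"≈ {abc_over_2(3.0, 2.5, 2.0):.1f} cm³")
```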

  16. 78 FR 69711 - Change in Postal Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    .... Domestic Extra Services. Premium Forwarding Service prices increase slightly, and a new pricing option is... volume threshold for cubic pricing and other Commercial Plus offerings are decreased to 50,000 packages... methods for which GXG Commercial Base and PMEI Commercial Base pricing is available; the establishment of...

  17. Recommendations for Improving Identification and Quantification in Non-Targeted, GC-MS-Based Metabolomic Profiling of Human Plasma

    PubMed Central

    Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.

    2017-01-01

    The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195

  18. Bone volume-to-total volume ratio measured in trabecular bone by single-sided NMR devices.

    PubMed

    Brizi, Leonardo; Barbieri, Marco; Baruffaldi, Fabio; Bortolotti, Villiam; Fersini, Chiara; Liu, Huabing; Nogueira d'Eurydice, Marcel; Obruchkov, Sergei; Zong, Fangrong; Galvosas, Petrik; Fantazzini, Paola

    2018-01-01

    Reduced bone strength is associated with a loss of bone mass, usually evaluated by dual-energy X-ray absorptiometry, although it is known that the bone microstructure also affects bone strength. Here, a method is proposed to measure, in the laboratory, the bone volume-to-total volume ratio by single-sided NMR scanners, a ratio related to the microstructure of trabecular bone. Three single-sided scanners were used on animal bone samples. These low-field, mobile, low-cost devices are able to detect the NMR signal regardless of sample size and without the use of ionizing radiation, with the further advantage of signal localization offered by their intrinsic magnetic field gradients. The performance of the different single-sided scanners is discussed. The results have been compared with bone volume-to-total volume ratios obtained by micro-CT and MRI, yielding consistent values. Our results demonstrate the feasibility of the method for laboratory analyses, which are useful for measurements such as porosity on bone specimens. This can be considered a first step toward an NMR method based on a mobile single-sided device for the diagnosis of osteoporosis, through acquisition of the signal from the appendicular skeleton, allowing low-cost, wide screening campaigns. Magn Reson Med 79:501-510, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    PubMed

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

    An automatic abalone grading algorithm that estimates abalone weights on the basis of computer vision using 2D images is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, regression formulas and R² values were investigated by performing regression analyses of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between actual volume and abalone weight was 0.999, showing a very high correlation. Consequently, to estimate the actual volumes of abalones easily from computer vision, volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula relating calculated and actual volumes was derived by linear regression analysis. The final automatic abalone grading algorithm combines this volume estimation formula with the regression formula between actual volume and abalone weight. For abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm indicates root mean square and worst-case prediction errors of 2.8 g and ±8 g, respectively. © 2015 Institute of Food Technologists®
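
    One plausible reading of the half-oblate-ellipsoid shape assumption gives V = πLWT/6 for total length L, body width W, and thickness T; combining that with a linear regression of weight against volume mirrors the structure of the grading algorithm described above. The measurement and weight values below are hypothetical.

```python
import numpy as np

def half_oblate_ellipsoid_volume(length_mm, width_mm, thickness_mm):
    """Half of an oblate ellipsoid with semi-axes L/2, W/2 and height T.

    V = (1/2) * (4/3) * pi * (L/2) * (W/2) * T = pi * L * W * T / 6
    (one plausible reading of the shape assumption in the record).
    """
    return np.pi * length_mm * width_mm * thickness_mm / 6.0

# Hypothetical measurements for a few abalones (mm) and their weights (g).
L = np.array([70.0, 85.0, 95.0, 110.0])
W = np.array([50.0, 60.0, 68.0, 80.0])
T = np.array([18.0, 22.0, 25.0, 30.0])
weight_g = np.array([35.0, 62.0, 88.0, 140.0])

vol_mm3 = half_oblate_ellipsoid_volume(L, W, T)
# Linear regression of weight against estimated volume, echoing the grading step above.
slope, intercept = np.polyfit(vol_mm3, weight_g, deg=1)
print(f"weight ≈ {slope:.5f} * volume_mm3 + {intercept:.1f} g")
```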

  20. SU-E-QI-12: Morphometry Based Measurements of the Structural Response to Whole Brain Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuentes, D; Castillo, R; Castillo, E

    2014-06-15

    Purpose: Although state-of-the-art radiation therapy techniques for treating intracranial malignancies have eliminated acute brain injury, cognitive impairment occurs in 50–90% of patients who survive >6 months post-irradiation. Quantitative characterization of therapy response is needed to facilitate therapeutic strategies that minimize radiation-induced cognitive impairment [1]. Deformation-based morphometry techniques [2, 3] are presented as a quantitative imaging biomarker of therapy response in patients receiving whole brain radiation for treating medulloblastoma. Methods: Post-irradiation magnetic resonance imaging (MRI) data sets were retrospectively analyzed in N=15 patients (>60 MR image datasets). As seen in Fig 1(a), volume changes at multiple time points post-irradiation were quantitatively measured in the cerebrum and ventricles with respect to pre-irradiation MRI. A high-resolution image template was registered to the pre-irradiation MRI of each patient to create a brain atlas for the cerebrum, cerebellum, and ventricles. Skull-stripped images for each patient were registered to the initial pre-treatment scan. Average volume changes in the labeled regions were measured using the determinant of the displacement field Jacobian. Results: Longitudinal measurements, Fig 1(b-c), show a negative correlation (p=.06) of cerebral volume change with the time interval from irradiation. A corresponding positive correlation (p=.01) between ventricular volume change and time interval from irradiation is seen. One-sample t-tests for the correlations were computed using a Spearman method. An average decrease in cerebral volume (p=.08) and an increase in ventricular volume (p<.001) were observed. The radiation dose was seen to be directly proportional to the induced volume changes in the cerebrum (r=−.44, p<.001, Fig 1(d)). Conclusion: Results indicate that morphometric monitoring of brain tissue volume changes may potentially be used to quantitatively assess toxicity and response to radiation and may provide insight in developing new therapeutic approaches and monitoring their efficacy.
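
    The volume-change measurement above rests on the determinant of the Jacobian of the registration transform phi(x) = x + u(x): det J > 1 marks local expansion and det J < 1 local shrinkage. A minimal NumPy sketch follows, assuming the displacement field is available as a (3, Z, Y, X) array in voxel units; the toy uniform-expansion field is for illustration only.

```python
import numpy as np

def jacobian_determinant(disp):
    """Determinant of the Jacobian of phi(x) = x + u(x) for a 3D displacement field.

    disp has shape (3, Z, Y, X) holding the (z, y, x) displacement components in voxels.
    det J > 1 indicates local expansion, det J < 1 local shrinkage.
    """
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)], axis=0)
    # grads[i, j] = d u_i / d x_j ; add the identity to obtain the full Jacobian per voxel.
    jac = np.moveaxis(grads, (0, 1), (-2, -1)) + np.eye(3)
    return np.linalg.det(jac)

# Toy field: uniform 1% expansion, whose Jacobian determinant should be ≈ 1.01^3.
zz, yy, xx = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
disp = 0.01 * np.stack([zz, yy, xx]).astype(float)
print(jacobian_determinant(disp).mean())  # ≈ 1.0303
```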
