Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.
Manjón, José V; Tohka, Jussi; Robles, Montserrat
2010-11-01
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Specifically, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.
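As an illustration of the non-local means idea applied to a partial volume coefficient map, the sketch below replaces each value by a patch-similarity-weighted average over a search window. It is a minimal, unoptimized 2D sketch with assumed parameter values (patch size, search window, smoothing parameter h); it is not the authors' filter, which operates on 3D MRI data.

```python
import numpy as np

def nlm_filter_2d(pvc, patch=3, search=7, h=0.1):
    """Naive non-local means on a 2D partial-volume-coefficient map.

    Each pixel is replaced by a weighted average of pixels in a search
    window, with weights driven by the similarity of surrounding patches.
    Illustrative only: practical implementations are vectorized and 3D.
    """
    p, s = patch // 2, search // 2
    padded = np.pad(pvc, p + s, mode="reflect")
    out = np.zeros_like(pvc, dtype=float)
    for i in range(pvc.shape[0]):
        for j in range(pvc.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

# Toy usage: a noisy gray matter partial volume map restricted to [0, 1].
noisy_pvc = np.clip(0.5 + 0.1 * np.random.randn(32, 32), 0.0, 1.0)
denoised_pvc = np.clip(nlm_filter_2d(noisy_pvc), 0.0, 1.0)
```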
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T1-weighted, quantitative T1, and B0 field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T1 datasets. Regional relaxation values and histograms for both gray and white matter tissue classes were then extracted and compared. Regional mean T1 values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T1 histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
Lin, Hsin-Hon; Peng, Shin-Lei; Wu, Jay; Shih, Tian-Yu; Chuang, Keh-Shih; Shih, Cheng-Ting
2017-05-01
Osteoporosis is a disease characterized by a degradation of bone structures. Various methods have been developed to diagnose osteoporosis by measuring the bone mineral density (BMD) of patients. However, the BMDs obtained from these methods are not equivalent and cannot be compared directly. In addition, the partial volume effect introduces errors when bone volume is estimated from computed tomography (CT) images using image segmentation. In this study, a two-compartment model (TCM) was proposed to calculate bone volume fraction (BV/TV) and BMD from CT images. The TCM considers bones to be composed of two sub-materials. Various equivalent BV/TV and BMD values can be calculated by applying corresponding sub-material pairs in the TCM. In contrast to image segmentation, the TCM avoids the influence of the partial volume effect by calculating the volume percentage of each sub-material in every image voxel. Validations of the TCM were performed using bone-equivalent uniform phantoms, a 3D-printed trabecular-structural phantom, a temporal bone flap, and abdominal CT images. By using the TCM, the calculated BV/TVs of the uniform phantoms were within percent errors of ±2%; the percent errors of the structural volumes with various CT slice thicknesses were below 9%; the volume of the temporal bone flap was close to that from micro-CT images, with a percent error of 4.1%. No significant difference (p > 0.01) was found between the areal BMD of lumbar vertebrae calculated using the TCM and that measured using dual-energy X-ray absorptiometry. In conclusion, the proposed TCM could be applied to diagnose osteoporosis, while providing a basis for comparing various measurement methods.
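To illustrate the general idea behind a two-compartment voxel model, the sketch below computes a per-voxel sub-material fraction by linear mixing of two reference CT numbers and averages it over a region of interest to obtain BV/TV. This is a minimal sketch under assumed reference values (hu_bone and hu_marrow are illustrative placeholders), not the calibration or sub-material pairs used in the paper.

```python
import numpy as np

def two_compartment_fraction(hu, hu_bone=1200.0, hu_marrow=0.0):
    """Per-voxel volume fraction of the 'bone' sub-material under a
    linear two-material mixing assumption (illustrative HU values)."""
    frac = (hu - hu_marrow) / (hu_bone - hu_marrow)
    return np.clip(frac, 0.0, 1.0)

def bv_tv(ct_roi, **kwargs):
    """Bone volume fraction of an ROI: the mean sub-material fraction.
    Averaging fractions sidesteps a hard segmentation threshold and hence
    partial volume misclassification at voxel boundaries."""
    return float(np.mean(two_compartment_fraction(ct_roi, **kwargs)))

# Toy usage on a synthetic ROI of CT numbers (HU).
roi = np.array([[1200.0, 600.0], [300.0, 0.0]])
print(bv_tv(roi))  # 0.4375 for these illustrative values
```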
Measurements of evaporated perfluorocarbon during partial liquid ventilation by a zeolite absorber.
Proquitté, Hans; Rüdiger, Mario; Wauer, Roland R; Schmalisch, Gerd
2004-01-01
During partial liquid ventilation (PLV), knowledge of the quantity of exhaled perfluorocarbon (PFC) allows continuous substitution of the PFC loss to maintain a constant PFC level in the lungs. The aim of our in vitro study was to determine the PFC loss in the mixed expired gas by an absorber and to investigate the effect of the evaporated PFC on ventilatory measurements. To simulate the PFC loss during PLV, a heated flask was rinsed with a constant airflow of 4 L min(-1) and PFC was infused at different speeds (5, 10, 20 mL h(-1)). An absorber filled with PFC-selective zeolites was connected to the flask to measure the PFC in the gas. The evaporated PFC volume and the PFC concentration were determined from the weight gain of the absorber measured by an electronic scale. The PFC-dependent volume error of the CO2SMO Plus neonatal pneumotachograph was measured by manual movements of a syringe with volumes of 10 and 28 mL at a rate of 30 min(-1). Under steady-state conditions there was a strong correlation (r2 = 0.999) between the infusion speed of PFC and the calculated PFC flow rate. The PFC flow rate was slightly underestimated, by 4.3% (p < 0.01); however, this bias was independent of the PFC infusion rate. The evaporated PFC volume was measured precisely, with errors < 1%. The volume error of the CO2SMO Plus pneumotachograph increased with increasing PFC content for both tidal volumes (p < 0.01). However, for PFC flow rates up to 20 mL h(-1) the error of the measured tidal volumes was < 5%. PFC-selective zeolites can be used to accurately quantify the evaporated PFC volume during PLV. With increasing PFC concentrations in the exhaled air, the measurement errors of ventilatory parameters have to be taken into account.
The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques
NASA Astrophysics Data System (ADS)
Tang, Chao
Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest for assessing the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experimental and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of signal from stationary tissues. A correction method, based on a small phase shift approximation, is proposed to compensate for the partial volume effect. After correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method was developed and implemented on a Sun workstation and applied to the simulation and experimental results; the correction significantly reduces the errors due to the partial volume effect. We also apply the correction method to data from in vivo studies. Because the true blood flow is not known, the corrected results are checked against physiological knowledge (such as cardiac output) and conservation of flow; for example, the volume of blood flowing to the brain should equal the volume of blood flowing from the brain. The corrected measurements are consistent with these checks.
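A minimal sketch of the flow computation described above, assuming the common phase-contrast convention that a phase of ±π maps to ±venc; the venc value, pixel size, and lumen mask below are illustrative placeholders, and no partial volume correction is applied.

```python
import numpy as np

def volume_flow_rate(phase, lumen_mask, venc_cm_s, pixel_area_cm2):
    """Volume flow rate (mL/s) from a phase-contrast image.

    Assumes the common convention v = venc * phase / pi, i.e. a phase of
    +/- pi maps to +/- venc. Velocities are summed over the lumen mask;
    partial volume at the vessel edge is NOT corrected here.
    """
    velocity = venc_cm_s * phase / np.pi                          # cm/s per pixel
    return float(np.sum(velocity[lumen_mask]) * pixel_area_cm2)   # cm^3/s = mL/s

# Toy usage: a 4-pixel lumen with uniform phase of pi/2 and venc = 100 cm/s.
phase = np.full((2, 2), np.pi / 2)
mask = np.ones_like(phase, dtype=bool)
print(volume_flow_rate(phase, mask, venc_cm_s=100.0, pixel_area_cm2=0.01))  # ~2.0 mL/s
```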
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are reviewed first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolation performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a clear advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, judged by the total error of image self-registration the symmetrical interpolations show a certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
What approach to brain partial volume correction is best for PET/MRI?
NASA Astrophysics Data System (ADS)
Hutton, B. F.; Thomas, B. A.; Erlandsson, K.; Bousse, A.; Reilhac-Laborde, A.; Kazantsev, D.; Pedemonte, S.; Vunckx, K.; Arridge, S. R.; Ourselin, S.
2013-02-01
Many partial volume correction approaches make use of anatomical information, which is readily available in PET/MRI systems, but it is not clear which approach is best. Seven novel approaches to partial volume correction were evaluated, including several post-reconstruction methods and several reconstruction methods that incorporate anatomical information. These were compared with an MRI-independent approach (reblurred van Cittert) and with uncorrected data. Monte Carlo PET data were generated for activity distributions representing both 18F-FDG and amyloid tracer uptake. Post-reconstruction methods provided the best recovery with ideal segmentation but were particularly sensitive to mis-registration. The alternative approaches performed better in maintaining the contrast of lesions unseen in MRI, with good noise control, and were also relatively insensitive to mis-registration errors. The choice of method will depend on the specific application and the reliability of the segmentation and registration algorithms.
Davies, M W; Dunster, K R
2000-08-01
During partial liquid ventilation, perfluorocarbon vapour is present in the exhaled gases, whose volumes are measured by pneumotachometers. Errors in measuring tidal volumes will give erroneous measurements of lung compliance during partial liquid ventilation. We aimed to compare measured tidal volumes with and without perfluorocarbon vapour using tidal volumes suitable for use in neonates. Tidal volumes were produced with a 100 ml calibration syringe from 20 to 100 ml and with a calibrated Harvard rodent ventilator from 2.5 to 20 ml. Control tidal volumes were drawn from a humidifier chamber containing water vapour, and the PFC tidal volumes were drawn from a humidifier chamber containing water and perfluorocarbon (FC-77) vapour. Tidal volumes were measured by a fixed-orifice, target, differential-pressure flowmeter (VenTrak) or a hot-wire anemometer (Bear Cub) placed between the calibration syringe or ventilator and the humidifier chamber. All tidal volumes measured with perfluorocarbon vapour were increased compared with control (ANOVA p < 0.001 and post hoc t-test p < 0.0001). Measured tidal volumes increased by 7 to 16% with the fixed-orifice flowmeter and by 35 to 41% with the hot-wire type. In conclusion, perfluorocarbon vapour flowing through pneumotachometers gives falsely high tidal volume measurements, and the calculation of lung compliance must take into account the effect of perfluorocarbon vapour on the measurement of tidal volume.
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume. Copyright © 2014 John Wiley & Sons, Ltd.
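The sketch below illustrates linear-regression partial volume correction in the spirit of the regression-based PVC mentioned in the abstract: within a local kernel, the perfusion-weighted signal is modeled as a weighted sum of GM and WM contributions and solved by least squares. The 2D layout, kernel size, and omission of CSF and regularization are simplifying assumptions, not the published method.

```python
import numpy as np

def pvc_linear_regression(perf, p_gm, p_wm, kernel=5):
    """Partial-volume-corrected GM/WM perfusion by local least squares.

    Within each kernel, the measured perfusion-weighted signal is modeled
    as perf = p_gm * f_gm + p_wm * f_wm and (f_gm, f_wm) are solved by
    least squares. Border voxels are left at zero in this schematic 2D version.
    """
    half = kernel // 2
    f_gm = np.zeros_like(perf, dtype=float)
    f_wm = np.zeros_like(perf, dtype=float)
    for i in range(half, perf.shape[0] - half):
        for j in range(half, perf.shape[1] - half):
            sl = (slice(i - half, i + half + 1), slice(j - half, j + half + 1))
            A = np.column_stack([p_gm[sl].ravel(), p_wm[sl].ravel()])
            y = perf[sl].ravel()
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            f_gm[i, j], f_wm[i, j] = coef
    return f_gm, f_wm

# Toy usage: GM perfusion 60, WM perfusion 20, with smoothly varying fractions.
p_gm = np.linspace(0.2, 0.8, 64).reshape(1, -1).repeat(64, axis=0)
p_wm = 1.0 - p_gm
perf = 60.0 * p_gm + 20.0 * p_wm
f_gm, f_wm = pvc_linear_regression(perf, p_gm, p_wm)
```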
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
Determination of partial molar volumes from free energy perturbation theory
Vilseck, Jonah Z.; Tirado-Rives, Julian
2016-01-01
Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood–Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm3 mol−1. The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute–solvent interactions. PMID:25589343
Determination of partial molar volumes from free energy perturbation theory.
Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L
2015-04-07
Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood-Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm(3) mol(-1). The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute-solvent interactions.
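A minimal sketch of the pressure-derivative route to V° described above: if FEP gives the free energy of solvation at two (or more) pressures, a finite difference of ΔG_solv with respect to pressure, converted from kcal mol⁻¹ atm⁻¹ to cm³ mol⁻¹, estimates the partial molar volume. The pressures and free-energy values below are placeholders, not results from the paper.

```python
# Conversion: 1 kcal mol^-1 atm^-1 = 4184 J / 101325 Pa per mol ~ 4.13e4 cm^3 mol^-1.
KCAL_PER_MOL_ATM_TO_CM3_PER_MOL = 4184.0 / 101325.0 * 1.0e6

def partial_molar_volume(dG_low_p, dG_high_p, p_low_atm, p_high_atm):
    """Finite-difference estimate of V0 = (d DeltaG_solv / dP)_T at infinite
    dilution, with DeltaG in kcal/mol and P in atm, returned in cm^3/mol."""
    slope = (dG_high_p - dG_low_p) / (p_high_atm - p_low_atm)  # kcal/(mol atm)
    return slope * KCAL_PER_MOL_ATM_TO_CM3_PER_MOL             # cm^3/mol

# Placeholder free energies chosen only to give a volume near 90 cm^3/mol.
print(partial_molar_volume(-0.873, -0.655, 1.0, 101.0))
```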
Li, Y; Zhong, R; Wang, X; Ai, P; Henderson, F; Chen, N; Luo, F
2017-04-01
To test whether active breath control during cone-beam computed tomography (CBCT) could reduce the planning target volume during accelerated partial breast radiotherapy for breast cancer. Patients older than 40 years who had undergone breast-conserving surgery and were planned for accelerated partial breast irradiation, with postoperative staging limited to T1-2 N0 M0, or a postoperative T2 lesion no larger than 3 cm with a negative surgical margin greater than 2 mm, were enrolled. Patients with lobular carcinoma or extensive ductal carcinoma in situ were excluded. CBCT images were obtained pre-correction, post-correction and post-treatment. Set-up errors were recorded in the left-right, anterior-posterior and superior-inferior directions. The differences between these CBCT images, as well as the calculated radiation doses, were compared between patients with active breath control and those with free breathing. Forty patients were enrolled, of whom 25 had active breath control. A total of 836 CBCT images were obtained for analysis. CBCT significantly reduced the planning target volume. However, active breath control did not show a significant benefit in decreasing the planning target volume margin or the organ-at-risk doses when compared to free breathing. CBCT, but not active breath control, could reduce the planning target volume during accelerated partial breast irradiation. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
The effect of monetary punishment on error evaluation in a Go/No-go task.
Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki
2017-10-01
Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activity. We compared high and low punishment conditions, where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment than in the control condition. Monetary punishment did not influence the overt-error ERN or the partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go trials. Therefore, the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors an early Pe was observed, presumably representing inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activities can be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image, some local segmentation methods such as Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error prone.
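As a rough illustration of computing a specific interfacial area directly on a segmented voxel image, the sketch below counts shared voxel faces between two phase labels and normalizes by the total volume. Plain face counting is a biased estimator for curved interfaces; the Minkowski-functional approach cited above uses weighted local voxel configurations, so this is only a schematic stand-in.

```python
import numpy as np

def specific_interfacial_area(labels, phase_a, phase_b, voxel_size):
    """Interfacial area between two phases per unit total volume, estimated
    by counting shared voxel faces in a labeled 3D image (6-connectivity).
    Face counting overestimates curved interfaces; configuration-weighted
    (Minkowski functional) estimators reduce that bias."""
    a = labels == phase_a
    b = labels == phase_b
    faces = 0
    for axis in range(3):
        sl_lo = [slice(None)] * 3
        sl_hi = [slice(None)] * 3
        sl_lo[axis] = slice(0, -1)
        sl_hi[axis] = slice(1, None)
        faces += np.count_nonzero(a[tuple(sl_lo)] & b[tuple(sl_hi)])
        faces += np.count_nonzero(a[tuple(sl_hi)] & b[tuple(sl_lo)])
    total_volume = labels.size * voxel_size ** 3
    return faces * voxel_size ** 2 / total_volume

# Toy usage: phase labels 0 = solid, 1 = wetting fluid, 2 = non-wetting fluid.
labels = np.zeros((20, 20, 20), dtype=int)
labels[:, :, :10] = 1
labels[:, :, 10:] = 2
print(specific_interfacial_area(labels, 1, 2, voxel_size=1.0))  # 0.05 per unit length
```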
NASA Astrophysics Data System (ADS)
Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael
2012-02-01
Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.
A theory for predicting composite laminate warpage resulting from fabrication
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1974-01-01
Linear laminate theory is used with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Composite micro- and macromechanics are used with laminate theory to assess the contribution of factors such as ply misorientation, fiber migration, and fiber and/or void volume ratio nonuniformity to the laminate warpage. Using these equations, it was found that a 1 deg error in the orientation angle of one ply was sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8 ply Mod-I/epoxy. Using a sensitivity analysis on the governing parameters, it was found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.
Tactically Extensible and Modular Communications - X-Band TEMCOM-X
NASA Technical Reports Server (NTRS)
Sims, William Herbert; Varnavas, Kosta A.; Casas, Joseph; Spehn, Stephen L.; Kendrick, Neal; Cross, Stephen; Sanderson, Paul; Booth, Janet C.
2015-01-01
This paper discusses a proposed CubeSat-sized (3U) telemetry system concept being developed at Marshall Space Flight Center (MSFC) in cooperation with the U.S. Department of the Army and Dynetics Corporation. The telemetry system achieves efficient, high-bandwidth communications through flight-ready, low-cost, Protoflight software defined radio (SDR) and Electronically Steerable Patch Array (ESPA) antenna subsystems for use on platforms as small as CubeSats and unmanned aircraft systems (UASs). The current telemetry system has a footprint slightly larger than required to fit within a 0.5U CubeSat volume. Extensible and modular communications for CubeSat technologies will partially mitigate current capability gaps between traditional strategic space platforms and lower-cost small satellite solutions. Higher bandwidth capacity will enable high-volume, low error-rate data transfer to and from tactical forces or sensors operating in austere locations (e.g., direct imagery download, unattended ground sensor data exfiltration, interlink communications), while also providing additional bandwidth and error-correction margin to accommodate more complex encryption algorithms and higher user volume.
NASA Technical Reports Server (NTRS)
Strangman, Gary; Culver, Joseph P.; Thompson, John H.; Boas, David A.; Sutton, J. P. (Principal Investigator)
2002-01-01
Near-infrared spectroscopy (NIRS) has been used to noninvasively monitor adult human brain function in a wide variety of tasks. While rough spatial correspondences with maps generated from functional magnetic resonance imaging (fMRI) have been found in such experiments, the amplitude correspondences between the two recording modalities have not been fully characterized. To do so, we simultaneously acquired NIRS and blood-oxygenation level-dependent (BOLD) fMRI data and compared Δ(1/BOLD) (approximately ΔR2*) to changes in oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentrations derived from the NIRS data from subjects performing a simple motor task. We expected the correlation with deoxyhemoglobin to be strongest, due to the causal relation between changes in deoxyhemoglobin concentration and the BOLD signal. Instead we found highly variable correlations, suggesting the need to account for individual subject differences in our NIRS calculations. We argue that the variability resulted from systematic errors associated with each of the signals, including: (1) partial volume errors due to focal concentration changes, (2) wavelength dependence of this partial volume effect, (3) tissue model errors, and (4) possible spatial incongruence between oxy- and deoxyhemoglobin concentration changes. After such effects were accounted for, strong correlations were found between fMRI changes and all optical measures, with oxyhemoglobin providing the strongest correlation. Importantly, this finding held even when including scalp, skull, and inactive brain tissue in the average BOLD signal. This may reflect, at least in part, the superior contrast-to-noise ratio for oxyhemoglobin relative to deoxyhemoglobin (from optical measurements), rather than physiology related to BOLD signal interpretation.
Impacts of motivational valence on the error-related negativity elicited by full and partial errors.
Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki
2016-02-01
Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Cortical thickness measurement from magnetic resonance images using partial volume estimation
NASA Astrophysics Data System (ADS)
Zuluaga, Maria A.; Acosta, Oscar; Bourgeat, Pierrick; Hernández Hoyos, Marcela; Salvado, Olivier; Ourselin, Sébastien
2008-03-01
Measurement of cortical thickness from 3D Magnetic Resonance Imaging (MRI) can aid diagnosis and longitudinal studies of a wide range of neurodegenerative diseases. We estimate the cortical thickness using a Laplacian approach whereby equipotentials analogous to layers of tissue are computed. The thickness is then obtained using an Eulerian approach in which partial differential equations (PDE) are solved, avoiding the explicit tracing of trajectories along the streamlines of the gradient. This method has the advantage of being relatively fast and ensures unique correspondence points between the inner and outer boundaries of the cortex. The original method is challenged when the thickness of the cortex is of the same order of magnitude as the image resolution, since the partial volume (PV) effect is not taken into account at the gray matter (GM) boundaries. We propose a novel way to take PV into account which substantially improves accuracy and robustness. We model PV by computing a mixture of pure Gaussian probability distributions and use this estimate to initialize the cortical thickness estimation. In experiments on synthetic phantoms, the errors were divided by three, while reproducibility was improved when the same patients were scanned three consecutive times.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed: a database is formed having one element for each spatial region corresponding to a finest selected level of detail, a multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected, and the strict error metric is converted with the view parameters to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation data set. First elements from the initial representation data set that are at least partially within the view volume are selected and placed in a split queue ordered by the value of the view-dependent error metric. It is then determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements or whether the largest error metric is less than or equal to a selected upper error metric bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting continues until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
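A schematic sketch of priority-queue-driven refinement in the spirit of the procedure above: the element with the largest view-dependent error is repeatedly force split until either the element budget is reached or the largest remaining error falls within the bound. The element representation, split rule, and error model are illustrative assumptions, not the patented data structures.

```python
import heapq
import itertools

def refine(elements, view_error, max_elements, error_bound, split):
    """Greedy refinement: repeatedly split the element with the largest
    view-dependent error until the element budget is reached or the largest
    remaining error is within the bound. `split(e)` must return child
    elements at the next finer level of detail."""
    counter = itertools.count()  # tie-breaker so the heap never compares elements
    heap = [(-view_error(e), next(counter), e) for e in elements]
    heapq.heapify(heap)
    while heap:
        neg_err, _, elem = heap[0]
        if len(heap) >= max_elements or -neg_err <= error_bound:
            break
        heapq.heappop(heap)
        for child in split(elem):
            heapq.heappush(heap, (-view_error(child), next(counter), child))
    return [e for _, _, e in heap]

# Toy usage: elements are (error, level) pairs; splitting halves the error.
elems = [(8.0, 0), (5.0, 0), (2.0, 0)]
result = refine(elems,
                view_error=lambda e: e[0],
                max_elements=6,
                error_bound=1.0,
                split=lambda e: [(e[0] / 2, e[1] + 1)] * 2)
```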
A theory for predicting composite laminate warpage resulting from fabrication
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1975-01-01
Linear laminate theory is used in conjunction with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Using these equations, it is found that a 1 deg error in the orientation angle of one ply is sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8-ply Mod-I/epoxy. From a sensitivity analysis on the governing parameters, it is found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.
Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J
2009-10-01
To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
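A minimal sketch of the kind of four-compartment cell-population bookkeeping described above, with tumor volume taken proportional to total cell number and exponential clearance of oxygenated lethally damaged cells. The per-fraction linear-quadratic survival, the oxygen-enhancement ratio for hypoxic cells, and all parameter values are my own illustrative assumptions, not the fitted radiobiologic parameters of the study.

```python
import math

def tumor_volume_course(v0, dose_per_fx, n_fx,
                        alpha=0.3, beta=0.03, oer=2.5,
                        hypoxic_fraction=0.2, clearance_half_life_days=5.0):
    """Daily tumor volume (same units as v0) under a simple four-compartment
    model: oxygenated/hypoxic x viable/lethally-damaged cells, with assumed
    LQ survival per fraction and exponential removal of oxygenated damaged
    cells. Parameter values are placeholders."""
    lam = math.log(2.0) / clearance_half_life_days
    ox_v = 1.0 - hypoxic_fraction      # cell numbers normalized so the total is 1
    hy_v = hypoxic_fraction
    ox_d = hy_d = 0.0
    volumes = [v0]
    for _ in range(n_fx):
        s_ox = math.exp(-(alpha * dose_per_fx + beta * dose_per_fx ** 2))
        d_eff = dose_per_fx / oer       # reduced effective dose for hypoxic cells
        s_hy = math.exp(-(alpha * d_eff + beta * d_eff ** 2))
        # Move lethally damaged cells into the damaged compartments.
        ox_d += ox_v * (1.0 - s_ox)
        hy_d += hy_v * (1.0 - s_hy)
        ox_v *= s_ox
        hy_v *= s_hy
        # One day of clearance of oxygenated lethally damaged cells only.
        ox_d *= math.exp(-lam)
        volumes.append(v0 * (ox_v + ox_d + hy_v + hy_d))
    return volumes

# Toy usage: 30 fractions of 2 Gy, starting from a 20 cm^3 tumor.
print(tumor_volume_course(20.0, 2.0, 30)[-1])
```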
Zaharchuk, Greg; Busse, Reed F; Rosenthal, Guy; Manley, Geoffery T; Glenn, Orit A; Dillon, William P
2006-08-01
The oxygen partial pressure (pO2) of human body fluids reflects the oxygenation status of surrounding tissues. All existing fluid pO2 measurements are invasive, requiring either microelectrode/optode placement or fluid removal. The purpose of this study is to develop a noninvasive magnetic resonance imaging method to measure the pO2 of human body fluids. We developed an imaging paradigm that exploits the paramagnetism of molecular oxygen to create quantitative images of fluid oxygenation. A single-shot fast spin echo pulse sequence was modified to minimize artifacts from motion, fluid flow, and partial volume. Longitudinal relaxation rate (R1 = 1/T1) was measured with a time-efficient nonequilibrium saturation recovery method and correlated with pO2 measured in phantoms. pO2 images of human and fetal cerebrospinal fluid, bladder urine, and vitreous humor are presented and quantitative oxygenation levels are compared with prior literature estimates, where available. Significant pO2 increases are shown in cerebrospinal fluid and vitreous following 100% oxygen inhalation. Potential errors due to temperature, fluid flow, and partial volume are discussed. Noninvasive measurements of human body fluid pO2 in vivo are presented, which yield reasonable values based on prior literature estimates. This rapid imaging-based measurement of fluid oxygenation may provide insight into normal physiology as well as changes due to disease or during treatment.
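For context, the physical relation that such R1-based oximetry relies on is the approximately linear dependence of a fluid's longitudinal relaxation rate on dissolved oxygen; the calibration constants are fluid-, temperature-, and field-strength-dependent and are not taken from the paper.

```latex
% Linear relaxivity model (illustrative form; R_{1,0} and r_1 must be
% calibrated for the specific fluid, temperature, and field strength):
\[
  R_1(\mathrm{pO_2}) \;=\; R_{1,0} + r_1\,\mathrm{pO_2}
  \qquad\Longrightarrow\qquad
  \mathrm{pO_2} \;=\; \frac{R_1 - R_{1,0}}{r_1}.
\]
```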
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
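To make the masking idea concrete, the sketch below applies a Wiener-type inverse filter only at Fourier entries flagged as reliable in the estimated kernel and keeps the observed spectrum elsewhere. This is a schematic illustration of restricting deconvolution to trusted frequencies, not the partial deconvolution model or E-M algorithm of the paper; the noise-to-signal ratio and reliability mask are assumed inputs.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the PSF to the image shape and circularly shift its center
    to the (0, 0) pixel before taking the FFT (standard psf2otf behavior)."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    otf = np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def masked_wiener_deconv(blurred, kernel, reliable_mask, nsr=1e-2):
    """Schematic 'partial' deconvolution: Wiener-type inversion only at
    Fourier entries marked reliable; the observed spectrum is kept elsewhere."""
    H = psf2otf(kernel, blurred.shape)
    Y = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)
    X = np.where(reliable_mask, wiener * Y, Y)
    return np.real(np.fft.ifft2(X))
```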
Fluid balance concepts in medicine: Principles and practice
Roumelioti, Maria-Eleni; Glew, Robert H; Khitan, Zeid J; Rondon-Berrios, Helbert; Argyropoulos, Christos P; Malhotra, Deepak; Raj, Dominic S; Agaba, Emmanuel I; Rohrscheib, Mark; Murata, Glen H; Shapiro, Joseph I; Tzamaloukas, Antonios H
2018-01-01
The regulation of body fluid balance is a key concern in health and disease and comprises three concepts. The first concept pertains to the relationship between total body water (TBW) and total effective solute and is expressed in terms of the tonicity of the body fluids. Disturbances in tonicity are the main factor responsible for changes in cell volume, which can critically affect brain cell function and survival. Solutes distributed almost exclusively in the extracellular compartment (mainly sodium salts) and in the intracellular compartment (mainly potassium salts) contribute to tonicity, while solutes distributed in TBW have no effect on tonicity. The second body fluid balance concept relates to the regulation and measurement of abnormalities of sodium salt balance and extracellular volume. Estimation of extracellular volume is more complex and error prone than measurement of TBW. A key function of extracellular volume, which is defined as the effective arterial blood volume (EABV), is to ensure adequate perfusion of cells and organs. Other factors, including cardiac output, total and regional capacity of both arteries and veins, Starling forces in the capillaries, and gravity also affect the EABV. Collectively, these factors interact closely with extracellular volume and some of them undergo substantial changes in certain acute and chronic severe illnesses. Their changes result not only in extracellular volume expansion, but in the need for a larger extracellular volume compared with that of healthy individuals. Assessing extracellular volume in severe illness is challenging because the estimates of this volume by commonly used methods are prone to large errors in many illnesses. In addition, the optimal extracellular volume may vary from illness to illness, is only partially based on volume measurements by traditional methods, and has not been determined for each illness. Further research is needed to determine optimal extracellular volume levels in several illnesses. For these reasons, extracellular volume in severe illness merits a separate third concept of body fluid balance. PMID:29359117
Pitfalls in 16-detector row CT of the coronary arteries.
Nakanishi, Tadashi; Kayashima, Yasuyo; Inoue, Rintaro; Sumii, Kotaro; Gomyo, Yukihiko
2005-01-01
Recently developed 16-detector row computed tomography (CT) has been introduced as a reliable noninvasive imaging modality for evaluating the coronary arteries. In most cases, with appropriate premedication that includes beta-blockers and nitroglycerin, ideal data sets can be acquired from which to obtain excellent-quality coronary CT angiograms, most often with multiplanar reformation, thin-slab maximum intensity projection, and volume rendering. However, various artifacts associated with data creation and reformation, postprocessing methods, and image interpretation can hamper accurate diagnosis. These artifacts can be related to pulsation (nonassessable segments, pseudostenosis) as well as rhythm disorders, respiratory issues, partial volume averaging effect, high-attenuation entities, inappropriate scan pitch, contrast material enhancement, and patient body habitus. Some artifacts have already been resolved with technical advances, whereas others represent partially inherent limitations of coronary CT angiography. Familiarity with the pitfalls of coronary angiography with 16-detector row CT, coupled with the knowledge of both the normal anatomy and anatomic variants of the coronary arteries, can almost always help radiologists avoid interpretive errors in the diagnosis of coronary artery stenosis. (c) RSNA, 2005.
Leonard, Charles E; Tallhamer, Michael; Johnson, Tim; Hunter, Kari; Howell, Kathryn; Kercher, Jane; Widener, Jodi; Kaske, Terese; Paul, Devchand; Sedlacek, Scot; Carter, Dennis L
2010-02-01
To explore the feasibility of fiducial markers for the use of image-guided radiotherapy (IGRT) in an accelerated partial breast intensity-modulated radiotherapy protocol. Nineteen patients consented to an institutional review board approved protocol of accelerated partial breast intensity-modulated radiotherapy with fiducial marker placement and treatment with IGRT. Patients (1 patient with bilateral breast cancer; 20 total breasts) underwent ultrasound-guided implantation of three 1.2- x 3-mm gold markers placed around the surgical cavity. For each patient, table shifts (inferior/superior, right/left lateral, and anterior/posterior) and the minimum, maximum, and mean error with standard deviation were recorded for each of the 10 BID treatments. The dose contribution of daily orthogonal films was also examined. All IGRT patients underwent successful marker placement. In all, 200 IGRT treatment sessions were performed. The average vector displacement was 4 mm (range, 2-7 mm). The average superior/inferior shift was 2 mm (range, 0-5 mm), the average lateral shift was 2 mm (range, 1-4 mm), and the average anterior/posterior shift was 3 mm (range, 1-5 mm). This study shows that IGRT can be successfully used in an accelerated partial breast intensity-modulated radiotherapy protocol. The authors believe that this technique has increased daily treatment accuracy and permitted a reduction in the margin added to the clinical target volume to form the planning target volume. Copyright 2010 Elsevier Inc. All rights reserved.
Gholipour, Ali; Afacan, Onur; Aganj, Iman; Scherrer, Benoit; Prabhu, Sanjay P; Sahin, Mustafa; Warfield, Simon K
2015-12-01
To compare and evaluate the use of super-resolution reconstruction (SRR), in frequency, image, and wavelet domains, to reduce through-plane partial voluming effects in magnetic resonance imaging. The reconstruction of an isotropic high-resolution image from multiple thick-slice scans has been investigated through techniques in frequency, image, and wavelet domains. Experiments were carried out with a thick-slice T2-weighted fast spin echo sequence on the American College of Radiology (ACR) MRI phantom, where the reconstructed images were compared to a reference high-resolution scan using peak signal-to-noise ratio (PSNR), structural similarity image metric (SSIM), mutual information (MI), and the mean absolute error (MAE) of image intensity profiles. The application of super-resolution reconstruction was then examined in retrospective processing of clinical neuroimages of ten pediatric patients with tuberous sclerosis complex (TSC) to reduce through-plane partial voluming for improved 3D delineation and visualization of thin radial bands of white matter abnormalities. Quantitative evaluation results show improvements in all evaluation metrics through super-resolution reconstruction in the frequency, image, and wavelet domains, with the highest values obtained from SRR in the image domain. The metric values for image-domain SRR versus the original axial, coronal, and sagittal images were PSNR = 32.26 vs 32.22, 32.16, 30.65; SSIM = 0.931 vs 0.922, 0.924, 0.918; MI = 0.871 vs 0.842, 0.844, 0.831; and MAE = 5.38 vs 7.34, 7.06, 6.19. All similarity metrics showed high correlations with expert ranking of image resolution with MI showing the highest correlation at 0.943. Qualitative assessment of the neuroimages of ten TSC patients through in-plane and out-of-plane visualization of structures showed the extent of partial voluming effect in a real clinical scenario and its reduction using SRR. Blinded expert evaluation of image resolution in resampled out-of-plane views consistently showed the superiority of SRR compared to original axial and coronal image acquisitions. Thick-slice 2D T2-weighted MRI scans are part of many routine clinical protocols due to their high signal-to-noise ratio, but are often severely affected by through-plane partial voluming effects. This study shows that while radiologic assessment is performed in 2D on thick-slice scans, super-resolution MRI reconstruction techniques can be used to fuse those scans to generate a high-resolution image with reduced partial voluming for improved postacquisition processing. Qualitative and quantitative evaluation showed the efficacy of all SRR techniques with the best results obtained from SRR in the image domain. The limitations of SRR techniques are uncertainties in modeling the slice profile, density compensation, quantization in resampling, and uncompensated motion between scans.
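A minimal image-domain fusion sketch in the spirit of the SRR evaluated above: each thick-slice stack, already resampled into a common orientation and registered, is interpolated to an isotropic grid and the stacks are averaged. Slice-profile modeling, density compensation, and motion correction, which the actual SRR methods address, are deliberately omitted; the spacings and isotropic target resolution are assumed inputs.

```python
import numpy as np
from scipy.ndimage import zoom

def fuse_orthogonal_stacks(axial, coronal, sagittal, spacings, iso=1.0):
    """Naive image-domain fusion: resample each thick-slice stack (already
    aligned in a common orientation, array axes = (x, y, z)) onto an
    isotropic grid and average them."""
    vols = []
    for vol, sp in zip((axial, coronal, sagittal), spacings):
        factors = tuple(s / iso for s in sp)   # per-axis voxel spacing in mm
        vols.append(zoom(vol.astype(float), factors, order=1))
    shape = np.min([v.shape for v in vols], axis=0)   # crop away rounding mismatch
    cropped = [v[:shape[0], :shape[1], :shape[2]] for v in vols]
    return np.mean(cropped, axis=0)

# Toy usage: 1 x 1 x 4 mm, 1 x 4 x 1 mm, and 4 x 1 x 1 mm stacks to 1 mm isotropic.
# fused = fuse_orthogonal_stacks(ax, co, sa, [(1, 1, 4), (1, 4, 1), (4, 1, 1)])
```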
NASA Astrophysics Data System (ADS)
Rucker, D. Caleb; Wu, Yifei; Ondrake, Janet E.; Pheiffer, Thomas S.; Simpson, Amber L.; Miga, Michael I.
2013-03-01
In the context of open abdominal image-guided liver surgery, the efficacy of an image-guidance system relies on its ability to (1) accurately depict tool locations with respect to the anatomy, and (2) maintain the work flow of the surgical team. Laser-range scanned (LRS) partial surface measurements can be taken intraoperatively with relatively little impact on the surgical work flow, as opposed to other intraoperative imaging modalities. Previous research has demonstrated that this kind of partial surface data may be (1) used to drive a rigid registration of the preoperative CT image volume to intraoperative patient space, and (2) extrapolated and combined with a tissue-mechanics-based organ model to drive a non-rigid registration, thus compensating for organ deformations. In this paper we present a novel approach for intraoperative nonrigid liver registration which iteratively reconstructs a displacement field on the posterior side of the organ in order to minimize the error between the deformed model and the intraoperative surface data. Experimental results with a phantom liver undergoing large deformations demonstrate that this method achieves target registration errors (TRE) with a mean of 4.0 mm in the prediction of a set of 58 locations inside the phantom, which represents a 50% improvement over rigid registration alone, and a 44% improvement over the prior non-iterative single-solve method of extrapolating boundary conditions via a surface Laplacian.
Häckel, M; Hinz, H J; Hedwig, G R
1999-11-15
The partial molar volumes of tripeptides of sequence glycyl-X-glycine, where X is one of the amino acids alanine, leucine, threonine, glutamine, phenylalanine, histidine, cysteine, proline, glutamic acid, and arginine, have been determined in aqueous solution over the temperature range 10-90 degrees C using differential scanning densitometry. These data, together with those reported previously, have been used to derive the partial molar volumes of the side-chains of all 20 amino acids. The side-chain volumes are critically compared with literature values derived using partial molar volumes for alternative model compounds. The new amino acid side-chain volumes, along with that for the backbone glycyl group, were used to calculate the partial specific volumes of several proteins in aqueous solution. The results obtained are compared with those observed experimentally. The new side-chain volumes have also been used to re-determine residue volume changes upon protein folding.
Converting international ¼ inch tree volume to Doyle
Aaron Holley; John R. Brooks; Stuart A. Moss
2014-01-01
An equation for converting Mesavage and Girard's International ¼ inch tree volumes to the Doyle log rule is presented as a function of tree diameter. Trees having fewer than four logs exhibited volume prediction errors within a range of ±10 board feet. In addition, volume prediction error as a percent of actual Doyle tree volume...
An approach to get thermodynamic properties from speed of sound
NASA Astrophysics Data System (ADS)
Núñez, M. A.; Medina, L. A.
2017-01-01
An approach for estimating thermodynamic properties of gases from the speed of sound u is proposed. The square of the speed of sound, u², the compression factor Z and the molar heat capacity at constant volume C_V are connected by two coupled nonlinear partial differential equations. Previous approaches to solving this system differ in the conditions used on the range of temperature values [Tmin, Tmax]. In this work we propose the use of Dirichlet boundary conditions at Tmin and Tmax. The virial series of the compression factor, Z = 1 + Bρ + Cρ² + …, and of the other properties reduces the problem to the solution of a recursive set of linear ordinary differential equations for the virial coefficients B, C, and so on. Analytic solutions of the B equation for argon are used to study the stability of our approach and of previous ones under perturbation errors of the input data. The results show that the approach yields B with a relative error bounded essentially by that of the boundary values, whereas the error of other approaches can be some orders of magnitude larger.
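For orientation, the standard link between the acoustic virial expansion and the density virial coefficient B takes the form sketched below; the notation assumes an ideal-gas heat-capacity ratio γ⁰ and primes for temperature derivatives, and the authors' exact boundary-value formulation may differ.

```latex
% Acoustic virial expansion and its usual relation to B (standard textbook form;
% not the authors' specific ODE system).
u^2 = \frac{\gamma^{0} R T}{M}\left[1 + \frac{\beta_a}{RT}\,p + \cdots\right],
\qquad
\beta_a = 2B + 2\left(\gamma^{0}-1\right) T\,B' + \frac{\left(\gamma^{0}-1\right)^2}{\gamma^{0}}\,T^2 B''
```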
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this work could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
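A minimal sketch of how prediction error could be tracked across two of the modeling parameters mentioned (number of latent variables, with and without a simple pretreatment) is given below; the SNV pretreatment, cross-validation setup, and data arrays are illustrative assumptions, not the authors' protocol.

```python
# Sketch: RMSE of PLS models across modeling parameters (latent variables,
# raw vs SNV-pretreated spectra). X (spectra) and y (reference values) are
# hypothetical NumPy arrays.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def snv(X):
    """Standard normal variate pretreatment applied row-wise."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def rmse_table(X, y, max_lv=10):
    results = {}
    for name, Xp in (("raw", X), ("snv", snv(X))):
        for lv in range(1, max_lv + 1):
            y_hat = cross_val_predict(PLSRegression(n_components=lv), Xp, y, cv=10)
            results[(name, lv)] = float(np.sqrt(np.mean((y - y_hat.ravel()) ** 2)))
    return results
```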
Error propagation of partial least squares for parameters optimization in NIR modeling
NASA Astrophysics Data System (ADS)
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-01
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this work could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.
Woolley, Josh D; Strobl, Eric V; Sturm, Virginia E; Shany-Ur, Tal; Poorzand, Pardis; Grossman, Scott; Nguyen, Lauren; Eckart, Janet A; Levenson, Robert W; Seeley, William W; Miller, Bruce L; Rankin, Katherine P
2015-10-01
The ventroanterior insula is implicated in the experience, expression, and recognition of disgust; however, whether this brain region is required for recognizing disgust or regulating disgusting behaviors remains unknown. We examined the brain correlates of the presence of disgusting behavior and impaired recognition of disgust using voxel-based morphometry in a sample of 305 patients with heterogeneous patterns of neurodegeneration. Permutation-based analyses were used to determine regions of decreased gray matter volume at a significance level p <= .05 corrected for family-wise error across the whole brain and within the insula. Patients with behavioral variant frontotemporal dementia and semantic variant primary progressive aphasia were most likely to exhibit disgusting behaviors and were, on average, the most impaired at recognizing disgust in others. Imaging analysis revealed that patients who exhibited disgusting behaviors had significantly less gray matter volume bilaterally in the ventral anterior insula. A region of interest analysis restricted to behavioral variant frontotemporal dementia and semantic variant primary progressive aphasia patients alone confirmed this result. Moreover, impaired recognition of disgust was associated with decreased gray matter volume in the bilateral ventroanterior and ventral middle regions of the insula. There was an area of overlap in the bilateral anterior insula where decreased gray matter volume was associated with both the presence of disgusting behavior and impairments in recognizing disgust. These findings suggest that regulating disgusting behaviors and recognizing disgust in others involve two partially overlapping neural systems within the insula. Moreover, the ventral anterior insula is required for both processes. Published by Elsevier Inc.
Woolley, Joshua; Strobl, Eric V; Sturm, Virginia E; Shany-Ur, Tal; Poorzand, Pardis; Grossman, Scott; Nguyen, Lauren; Eckart, Janet A; Levenson, Robert W; Seeley, William W; Miller, Bruce L; Rankin, Katherine P
2015-01-01
Background The ventroanterior insula is implicated in the experience, expression, and recognition of disgust; however, whether this brain region is required for recognizing disgust or regulating disgusting behaviors remains unknown. Methods We examined the brain correlates of the presence of disgusting behavior and impaired recognition of disgust using voxel-based morphometry in a sample of 305 patients with heterogeneous patterns of neurodegeneration. Permutation-based analyses were used to determine regions of decreased grey matter volume at a significance level p<0.05 corrected for family-wise error across the whole brain and within the insula. Results Patients with behavioral variant frontotemporal dementia (bvFTD) and semantic variant primary progressive aphasia (svPPA) were most likely to exhibit disgusting behaviors and were, on average, the most impaired at recognizing disgust in others. Imaging analysis revealed that patients who exhibited disgusting behaviors had significantly less grey matter volume bilaterally in the ventral anterior insula. A region of interest analysis restricted to bvFTD and svPPA patients alone confirmed this result. Moreover, impaired recognition of disgust was associated with decreased grey matter volume in the bilateral ventroanterior and ventral middle regions of the insula. There was an area of overlap in the bilateral anterior insula where decreased grey matter volume was associated with both the presence of disgusting behavior and impairments in recognizing disgust. Conclusion These findings suggest that regulating disgusting behaviors and recognizing disgust in others involve two partially overlapping neural systems within the insula. Moreover, the ventral anterior insula is required for both processes. PMID:25890642
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
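A toy Monte Carlo probability-of-collision sketch in a Cartesian frame is shown below to illustrate the general idea; the miss vector, covariance, and combined hard-body radius are made-up inputs, and the equinoctial-element sampling used in the study is not reproduced.

```python
# Sketch: toy Monte Carlo estimate of probability of collision (Pc) at TCA.
# Samples the relative position from a combined Cartesian covariance and counts
# samples falling inside the combined hard-body radius. Inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
miss_vector = np.array([120.0, -40.0, 15.0])       # relative position at TCA [m]
combined_cov = np.diag([80.0, 200.0, 30.0]) ** 2   # combined position covariance [m^2]
hard_body_radius = 20.0                            # combined hard-body radius [m]

samples = rng.multivariate_normal(miss_vector, combined_cov, size=1_000_000)
pc = np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)
print(f"Monte Carlo Pc ~ {pc:.2e}")
```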
Fernández-Novales, Juan; López, María-Isabel; González-Caballero, Virginia; Ramírez, Pilar; Sánchez, María-Teresa
2011-06-01
Volumic mass, a key component of must quality control tests during alcoholic fermentation, is of great interest to the winemaking industry. Transmittance near-infrared (NIR) spectra of 124 must samples over the range of 200-1,100 nm were obtained using a miniature spectrometer. The performance of this instrument for predicting volumic mass was evaluated using partial least squares (PLS) regression and multiple linear regression (MLR). The validation statistics, the coefficient of determination (r(2)) and the standard error of prediction (SEP), were r(2) = 0.98 and SEP = 5.85 g/dm(3) for the PLS equation and r(2) = 0.96 and SEP = 7.49 g/dm(3) for the MLR equation (n = 31 in both cases) developed to fit the reference volumic mass data to the spectral data. Comparison of results from MLR and PLS demonstrates that an MLR model with six significant wavelengths (P < 0.05) fit volumic mass to transmittance (1/T) data slightly worse than a more sophisticated PLS model using the full scanning range. The results suggest that NIR spectroscopy is a suitable technique for predicting volumic mass during alcoholic fermentation, and that a low-cost NIR instrument can be used for this purpose.
NASA Astrophysics Data System (ADS)
Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.
2008-02-01
Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real time due to the presence of speckle, shadow artifacts, partial volume effect and patient motion during image scans, which are all inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, obtained from training data, to delineate the middle slices of the volumetric prostate images. A "DDC" algorithm is then applied to the remaining images using the initial contour obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired on a Philips ultrasound machine with a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with the semi-automated approach. The mean volumes using the semi-automated and fully automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6%, respectively, compared to the volume obtained from the human-estimated boundary (ideal boundary). The overall system, which was developed using Microsoft Visual C++, is real-time and accurate.
Context-dependent sequential effects of target selection for action.
Moher, Jeff; Song, Joo-Hyun
2013-07-11
Humans exhibit variation in behavior from moment to moment even when performing a simple, repetitive task. Errors are typically followed by cautious responses, minimizing subsequent distractor interference. However, less is known about how variation in the execution of an ultimately correct response affects subsequent behavior. We asked participants to reach toward a uniquely colored target presented among distractors and created two categories to describe participants' responses in correct trials based on analyses of movement trajectories; partial errors referred to trials in which observers initially selected a nontarget for action before redirecting the movement and accurately pointing to the target, and direct movements referred to trials in which the target was directly selected for action. We found that latency to initiate a hand movement was shorter in trials following partial errors compared to trials following direct movements. Furthermore, when the target and distractor colors were repeated, movement time and reach movement curvature toward distractors were greater following partial errors compared to direct movements. Finally, when the colors were repeated, partial errors were more frequent than direct movements following partial-error trials, and direct movements were more frequent following direct-movement trials. The dependence of these latter effects on repeated-task context indicates the involvement of higher-level cognitive mechanisms in an integrated attention-action system in which execution of a partial-error or direct-movement response affects memory representations that bias performance in subsequent trials. Altogether, these results demonstrate that whether a nontarget is selected for action or not has a measurable impact on subsequent behavior.
Registration of 2D to 3D joint images using phase-based mutual information
NASA Astrophysics Data System (ADS)
Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul
2007-03-01
Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.
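As a pointer to how the intensity-based MI baseline in such a comparison could be computed, the following is a minimal joint-histogram sketch in NumPy; the input arrays and bin count are placeholders, and substituting local phase images for the raw intensities would only loosely mimic the phase-based variant described above.

```python
# Sketch: mutual information between two images from their joint intensity
# histogram, the quantity underlying intensity-based MI registration metrics.
# `fixed` and `moving` are hypothetical arrays of matching shape.
import numpy as np

def mutual_information(fixed: np.ndarray, moving: np.ndarray, bins: int = 64) -> float:
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over moving-image bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over fixed-image bins
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```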
Effects of the liver volume and donor steatosis on errors in the estimated standard liver volume.
Siriwardana, Rohan Chaminda; Chan, See Ching; Chok, Kenneth Siu Ho; Lo, Chung Mau; Fan, Sheung Tat
2011-12-01
An accurate assessment of donor and recipient liver volumes is essential in living donor liver transplantation. Many liver donors are affected by mild to moderate steatosis, and steatotic livers are known to have larger volumes. This study analyzes errors in liver volume estimation by commonly used formulas and the effects of donor steatosis on these errors. Three hundred twenty-five Asian donors who underwent right lobe donor hepatectomy were the subjects of this study. The percentage differences between the liver volumes from computed tomography (CT) and the liver volumes estimated with each formula (ie, the error percentages) were calculated. Five popular formulas were tested. The degrees of steatosis were categorized as follows: no steatosis [n = 178 (54.8%)], ≤ 10% steatosis [n = 128 (39.4%)], and >10% to 20% steatosis [n = 19 (5.8%)]. The median errors ranged from 0.6% (7 mL) to 24.6% (360 mL). The lowest was seen with the locally derived formula. All the formulas showed a significant association between the error percentage and the CT liver volume (P < 0.001). Overestimation was seen with smaller liver volumes, whereas underestimation was seen with larger volumes. The locally derived formula was most accurate when the liver volume was 1001 to 1250 mL. A multivariate analysis showed that the estimation error was dependent on the liver volume (P = 0.001) and the anthropometric measurement that was used in the calculation (P < 0.001) rather than steatosis (P ≥ 0.07). In conclusion, all the formulas have a similar pattern of error that is possibly related to the anthropometric measurement. Clinicians should be aware of this pattern of error and the liver volume with which their formula is most accurate. Copyright © 2011 American Association for the Study of Liver Diseases.
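To make the error metric concrete, a minimal sketch of the percent error between a formula-based standard liver volume and the CT-measured volume is given below; the Urata-style coefficients quoted in the comment are an illustrative published formula chosen by us, not necessarily one of the five formulas tested in the study, and the example inputs are hypothetical.

```python
# Sketch: percent error of a formula-based standard liver volume estimate
# versus the CT-measured volume. The coefficients below follow the commonly
# quoted Urata-type formula SLV[mL] ~= 706.2 * BSA[m^2] + 2.4 (illustrative;
# not necessarily one of the formulas evaluated in the study).
def estimated_standard_liver_volume(bsa_m2: float) -> float:
    return 706.2 * bsa_m2 + 2.4

def percent_error(ct_volume_ml: float, bsa_m2: float) -> float:
    est = estimated_standard_liver_volume(bsa_m2)
    return 100.0 * (est - ct_volume_ml) / ct_volume_ml

print(f"{percent_error(ct_volume_ml=1100.0, bsa_m2=1.7):.1f}%")  # hypothetical donor
```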
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett
Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.
Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...
2017-01-01
Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.
Foitzik, B; Schmalisch, G; Wauer, R R
1994-04-01
The measurement of ventilation in neonates has a number of specific characteristics; in contrast to lung function testing in adults, the inspiratory gas for neonates is often conditioned. In pneumotachographs (PNT) based on Hagen-Poiseuille's law, changes in the physical characteristics of the respiratory gas (temperature, humidity, pressure and oxygen fraction [FiO2]) produce a volume change as calculated with the ideal gas equation p*V/T = const; in addition, the viscosity of the gas is also changed, thus leading to measurement errors. In clinical practice, the effect of viscosity on volume measurement is often ignored. The accuracy of these empirical laws was investigated in a size 0 Fleisch PNT using a flow-through technique and variously processed respiratory gas. Spontaneous breathing was simulated with the aid of a calibration syringe (20 ml) and a rate of 30 min-1. The largest change in viscosity (11.6% at 22 degrees C and dry gas) is found with an increase in FiO2 (21...100%). A rise in temperature from 24 to 35 degrees C (dry air) produced an increase in viscosity of 5.2%. An increase in humidity (0...90%, 35 degrees C) decreased the viscosity by 3%. A partial compensation of these viscosity errors is thus possible. A pressure change (0...50 mbar, under ambient conditions) caused no measurable viscosity error. With the exception of temperature, the measurements showed good agreement between the measured volume errors and those calculated from the viscosity changes. If the respiratory gas differs from ambient air (e.g. elevated FiO2) or if the PNT is calibrated under BTPS conditions, changes in viscosity must not be neglected when performing accurate ventilation measurements. On the basis of the well-known physical laws of Dalton, Thiesen and Sutherland, a numerical correction of adequate accuracy is possible.
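The two corrections discussed above can be separated as in the sketch below: for a Hagen-Poiseuille-type device the indicated flow scales inversely with viscosity, and the gas-law (BTPS) conversion follows from p*V/T = const. The Sutherland coefficients for dry air and the 37 degrees C water vapor pressure are textbook values used for illustration; humidity and FiO2 effects on viscosity, which the study addresses, are not modeled here.

```python
# Sketch: viscosity and gas-law corrections for volumes measured with a
# Hagen-Poiseuille-type pneumotachograph. Illustrative assumptions only.
def sutherland_air_viscosity(t_kelvin: float) -> float:
    mu_ref, t_ref, s = 1.716e-5, 273.15, 110.4          # Pa*s, K, K (dry air)
    return mu_ref * (t_kelvin / t_ref) ** 1.5 * (t_ref + s) / (t_kelvin + s)

def viscosity_corrected_volume(v_indicated, t_calibration_k, t_measurement_k):
    # Indicated flow (and hence volume) was computed assuming the calibration
    # viscosity, so true volume = indicated * eta_cal / eta_measurement.
    return v_indicated * sutherland_air_viscosity(t_calibration_k) / \
           sutherland_air_viscosity(t_measurement_k)

def to_btps(v_ambient, t_ambient_k, p_ambient_hpa, ph2o_ambient_hpa, ph2o_37_hpa=62.7):
    # Ideal-gas conversion of an ambient-condition volume to BTPS (310.15 K,
    # saturated at body temperature).
    return v_ambient * (310.15 / t_ambient_k) * \
           (p_ambient_hpa - ph2o_ambient_hpa) / (p_ambient_hpa - ph2o_37_hpa)
```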
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gholipour, Ali, E-mail: ali.gholipour@childrens.harvard.edu; Afacan, Onur; Scherrer, Benoit
Purpose: To compare and evaluate the use of super-resolution reconstruction (SRR), in frequency, image, and wavelet domains, to reduce through-plane partial voluming effects in magnetic resonance imaging. Methods: The reconstruction of an isotropic high-resolution image from multiple thick-slice scans has been investigated through techniques in frequency, image, and wavelet domains. Experiments were carried out with a thick-slice T2-weighted fast spin echo sequence on the American College of Radiology (ACR) MRI phantom, where the reconstructed images were compared to a reference high-resolution scan using peak signal-to-noise ratio (PSNR), structural similarity image metric (SSIM), mutual information (MI), and the mean absolute error (MAE) of image intensity profiles. The application of super-resolution reconstruction was then examined in retrospective processing of clinical neuroimages of ten pediatric patients with tuberous sclerosis complex (TSC) to reduce through-plane partial voluming for improved 3D delineation and visualization of thin radial bands of white matter abnormalities. Results: Quantitative evaluation results show improvements in all evaluation metrics through super-resolution reconstruction in the frequency, image, and wavelet domains, with the highest values obtained from SRR in the image domain. The metric values for image-domain SRR versus the original axial, coronal, and sagittal images were PSNR = 32.26 vs 32.22, 32.16, 30.65; SSIM = 0.931 vs 0.922, 0.924, 0.918; MI = 0.871 vs 0.842, 0.844, 0.831; and MAE = 5.38 vs 7.34, 7.06, 6.19. All similarity metrics showed high correlations with expert ranking of image resolution with MI showing the highest correlation at 0.943. Qualitative assessment of the neuroimages of ten TSC patients through in-plane and out-of-plane visualization of structures showed the extent of partial voluming effect in a real clinical scenario and its reduction using SRR. Blinded expert evaluation of image resolution in resampled out-of-plane views consistently showed the superiority of SRR compared to original axial and coronal image acquisitions. Conclusions: Thick-slice 2D T2-weighted MRI scans are part of many routine clinical protocols due to their high signal-to-noise ratio, but are often severely affected by through-plane partial voluming effects. This study shows that while radiologic assessment is performed in 2D on thick-slice scans, super-resolution MRI reconstruction techniques can be used to fuse those scans to generate a high-resolution image with reduced partial voluming for improved postacquisition processing. Qualitative and quantitative evaluation showed the efficacy of all SRR techniques with the best results obtained from SRR in the image domain. The limitations of SRR techniques are uncertainties in modeling the slice profile, density compensation, quantization in resampling, and uncompensated motion between scans.
Gholipour, Ali; Afacan, Onur; Aganj, Iman; Scherrer, Benoit; Prabhu, Sanjay P.; Sahin, Mustafa; Warfield, Simon K.
2015-01-01
Purpose: To compare and evaluate the use of super-resolution reconstruction (SRR), in frequency, image, and wavelet domains, to reduce through-plane partial voluming effects in magnetic resonance imaging. Methods: The reconstruction of an isotropic high-resolution image from multiple thick-slice scans has been investigated through techniques in frequency, image, and wavelet domains. Experiments were carried out with a thick-slice T2-weighted fast spin echo sequence on the American College of Radiology (ACR) MRI phantom, where the reconstructed images were compared to a reference high-resolution scan using peak signal-to-noise ratio (PSNR), structural similarity image metric (SSIM), mutual information (MI), and the mean absolute error (MAE) of image intensity profiles. The application of super-resolution reconstruction was then examined in retrospective processing of clinical neuroimages of ten pediatric patients with tuberous sclerosis complex (TSC) to reduce through-plane partial voluming for improved 3D delineation and visualization of thin radial bands of white matter abnormalities. Results: Quantitative evaluation results show improvements in all evaluation metrics through super-resolution reconstruction in the frequency, image, and wavelet domains, with the highest values obtained from SRR in the image domain. The metric values for image-domain SRR versus the original axial, coronal, and sagittal images were PSNR = 32.26 vs 32.22, 32.16, 30.65; SSIM = 0.931 vs 0.922, 0.924, 0.918; MI = 0.871 vs 0.842, 0.844, 0.831; and MAE = 5.38 vs 7.34, 7.06, 6.19. All similarity metrics showed high correlations with expert ranking of image resolution with MI showing the highest correlation at 0.943. Qualitative assessment of the neuroimages of ten TSC patients through in-plane and out-of-plane visualization of structures showed the extent of partial voluming effect in a real clinical scenario and its reduction using SRR. Blinded expert evaluation of image resolution in resampled out-of-plane views consistently showed the superiority of SRR compared to original axial and coronal image acquisitions. Conclusions: Thick-slice 2D T2-weighted MRI scans are part of many routine clinical protocols due to their high signal-to-noise ratio, but are often severely affected by through-plane partial voluming effects. This study shows that while radiologic assessment is performed in 2D on thick-slice scans, super-resolution MRI reconstruction techniques can be used to fuse those scans to generate a high-resolution image with reduced partial voluming for improved postacquisition processing. Qualitative and quantitative evaluation showed the efficacy of all SRR techniques with the best results obtained from SRR in the image domain. The limitations of SRR techniques are uncertainties in modeling the slice profile, density compensation, quantization in resampling, and uncompensated motion between scans. PMID:26632048
Jeffrey, P D; Nichol, L W; Smith, G D
1975-01-25
A method is presented by which an experimental record of total concentration as a function of radial distance, obtained in a sedimentation equilibrium experiment conducted with a noninteracting mixture in the absence of a density gradient, may be analyzed to obtain the unimodal distributions of molecular weight and of partial molar volume when these vary concomitantly and continuously. Particular attention is given to the characterization of classes of lipoproteins exhibiting Gaussian distributions of these quantities, although the analysis is applicable to other types of unimodal distribution. Equations are also formulated permitting the definition of the corresponding distributions of partial specific volume and of density. The analysis procedure is based on a method (employing Laplace transforms) developed previously, but differs from it in that it avoids the necessity of differentiating experimental results, which introduces error. The method offers certain advantages over other procedures used to characterize and compare lipoprotein samples (exhibiting unimodal distributions) with regard to the duration of the experiment, economy of the sample, and, particularly, the ability to define in principle all of the relevant distributions from one sedimentation equilibrium experiment and an external measurement of the weight-average partial specific volume. These points and the steps in the analysis procedure are illustrated with experimental results obtained in the sedimentation equilibrium of a sample of human serum low density lipoprotein. The experimental parameters (such as solution density, column height, and angular velocity) used in the conduct of these experiments were selected on the basis of computer-simulated examples, which are also presented. These provide a guide for other workers interested in characterizing lipoproteins of this class.
Del Galdo, Sara; Amadei, Andrea
2016-10-12
In this paper we apply the computational analysis recently proposed by our group to characterize the solvation properties of a native protein in aqueous solution and of four model aqueous solutions of globular proteins in their unfolded states, thus characterizing the unfolded-state hydration shell and quantitatively evaluating the unfolded-state partial molar volumes. Moreover, by using both the native and unfolded protein partial molar volumes, we obtain the corresponding variations (unfolding partial molar volumes), which are compared with the available experimental estimates. We also reconstruct the temperature and pressure dependence of the unfolding partial molar volume of myoglobin, dissecting the structural and hydration effects involved in the process.
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the ultrasound dilution technique, which is considered the gold standard for volume flow estimation in dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
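To make the geometric part of the error concrete, the sketch below compares volume flow computed with an elliptical cross-section against a circular assumption; all numbers are hypothetical, and equating the circle's radius with the semi-minor axis is our simplification, not the paper's definition.

```python
# Sketch: volume flow with elliptical versus circular cross-section assumption.
# Hypothetical values; the circular assumption here uses the semi-minor axis.
import math

mean_velocity_m_s = 0.35                 # spatial mean through-plane velocity
semi_minor_mm = 4.7                      # fitted ellipse semi-axes
semi_major_mm = semi_minor_mm * 1.086    # major axis ~8.6% larger, as reported

area_ellipse_m2 = math.pi * (semi_major_mm * 1e-3) * (semi_minor_mm * 1e-3)
area_circle_m2 = math.pi * (semi_minor_mm * 1e-3) ** 2

flow_ellipse_ml_min = mean_velocity_m_s * area_ellipse_m2 * 1e6 * 60
flow_circle_ml_min = mean_velocity_m_s * area_circle_m2 * 1e6 * 60
print(f"relative underestimation with circular assumption: "
      f"{100 * (1 - flow_circle_ml_min / flow_ellipse_ml_min):.1f}%")
```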
Guinea Pig Ciliary Muscle Development
Pucker, Andrew D.; Carpenter, Ashley R.; McHugh, Kirk M.; Mutti, Donald O.
2014-01-01
Purpose: The purpose of this study was to develop a method for quantifying guinea pig ciliary muscle volume (CMV) and to determine its relationship to age and ocular biometric measurements. Methods: Six albino guinea pig eyes were collected at each of five ages (n = 30 eyes). Retinoscopy and photography were used to document refractive error, eye size, and eye shape. Serial sections through the excised eyes were made and then labeled with an α-smooth muscle actin antibody. The CM was then visualized with an Olympus BX51 microscope, reconstructed with Stereo Investigator (MBF Bioscience) and analyzed using Neurolucida Explorer (MBF Bioscience). Full (using all sections) and partial (using a subset of sections) reconstruction methods were used to determine CMV. Results: There was no significant difference between the full and partial volume determination methods (P = 0.86). The mean CMV of the 1-, 10-, 20-, 30-, and 90-day-old eyes was 0.40 ± 0.16 mm3, 0.48 ± 0.13 mm3, 0.67 ± 0.15 mm3, 0.86 ± 0.35 mm3, and 1.09 ± 0.63 mm3, respectively. CMV was significantly correlated with log age (P = 0.001), ocular length (P = 0.003), limbal circumference (P = 0.01), and equatorial diameter (P = 0.003). It was not correlated with refractive error (P = 0.73) or eye shape (P = 0.60). Multivariate regression determined that biometric variables were not significantly associated with CMV after adjustment for age. Conclusions: Three-dimensional reconstruction was an effective means of determining CMV. These data provide evidence that CM growth occurs with age in tandem with eye size in normal albino guinea pigs. Additional work is needed to determine the relationship between CMV and abnormal ocular growth. PMID:24901488
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamezawa, H; Fujimoto General Hospital, Miyakonojo, Miyazaki; Arimura, H
Purpose: To investigate the possibility of exposure dose reduction of the cone-beam computed tomography (CBCT) in an image guided patient positioning system by using 6 noise suppression filters. Methods: First, reference-dose (RD) and low-dose (LD) CBCT (X-ray volume imaging system, Elekta Co.) images were acquired with a reference dose of 86.2 mGy (weighted CT dose index: CTDIw) and various low doses of 1.4 to 43.1 mGy, respectively. Second, an automated rigid registration for three axes was performed for estimating setup errors between a planning CT image and the LD-CBCT images, which were processed by 6 noise suppression filters, i.e., averaging filter (AF), median filter (MF), Gaussian filter (GF), bilateral filter (BF), edge preserving smoothing filter (EPF) and adaptive partial median filter (AMF). Third, residual errors representing the patient positioning accuracy were calculated as the Euclidean distance between the setup error vectors estimated using the LD-CBCT image and the RD-CBCT image. Finally, the relationships between the residual error and CTDIw were obtained for the 6 noise suppression filters, and then the CTDIw for LD-CBCT images processed by the noise suppression filters was measured at the same residual error as obtained with the RD-CBCT. This approach was applied to an anthropomorphic pelvic phantom and two cancer patients. Results: For the phantom, the exposure dose could be reduced from 61% (GF) to 78% (AMF) by applying the noise suppression filters to the CBCT images. The exposure dose in a prostate cancer case could be reduced from 8% (AF) to 61% (AMF), and the exposure dose in a lung cancer case could be reduced from 9% (AF) to 37% (AMF). Conclusion: Using noise suppression filters, particularly an adaptive partial median filter, could make it feasible to decrease the additional exposure dose to patients in image guided patient positioning systems.
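A sketch of how some of the named filters could be applied to a low-dose CBCT slice before registration is given below; the filter parameters are arbitrary placeholders, and the adaptive partial median filter used in the study is not reproduced.

```python
# Sketch: applying simple noise-suppression filters to one LD-CBCT slice
# prior to rigid registration. Parameters are illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter
from skimage.restoration import denoise_bilateral

def denoise_ld_cbct_slice(slice_hu: np.ndarray) -> dict:
    s = slice_hu.astype(np.float32)
    s = (s - s.min()) / (np.ptp(s) + 1e-6)   # rescale to [0, 1] for the bilateral filter
    return {
        "averaging (AF)": uniform_filter(s, size=3),
        "median (MF)": median_filter(s, size=3),
        "gaussian (GF)": gaussian_filter(s, sigma=1.0),
        "bilateral (BF)": denoise_bilateral(s, sigma_color=0.05, sigma_spatial=1.5),
    }
```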
Nonlinear grid error effects on numerical solution of partial differential equations
NASA Technical Reports Server (NTRS)
Dey, S. K.
1980-01-01
Finite difference solutions of nonlinear partial differential equations require discretizations and consequently grid errors are generated. These errors strongly affect stability and convergence properties of difference models. Previously such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to nonlinear Burgers equations, and verified computationally. A preliminary test shows that Navier-Stokes equations may be treated similarly.
NASA Astrophysics Data System (ADS)
Li, Yinlin; Kundu, Bijoy K.
2018-03-01
The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors are lower than those of previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic algorithm. The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and Ki, as well as the convergence speed.
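For readers unfamiliar with the underlying kinetics, a generic form of the FDG two-tissue (three-compartment) model and the influx constant is sketched below; the lumped spillover/partial-volume coefficient β is an illustrative parametrization of our own, not the exact SP/PV formulation used in the study.

```latex
% Standard FDG two-tissue compartment kinetics (C_p: plasma input,
% C_1: free tracer, C_2: phosphorylated tracer) and the influx constant.
\frac{dC_1(t)}{dt} = K_1\,C_p(t) - (k_2 + k_3)\,C_1(t), \qquad
\frac{dC_2(t)}{dt} = k_3\,C_1(t), \qquad
K_i = \frac{K_1 k_3}{k_2 + k_3}
% Illustrative measured myocardial TAC with a lumped spillover/partial-volume
% coefficient \beta (assumption; the study's SP/PV terms differ in detail):
\quad
C_{\mathrm{myo}}^{\mathrm{PET}}(t) \approx (1-\beta)\,[\,C_1(t)+C_2(t)\,] + \beta\,C_b(t)
```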
Changes to Hospital Inpatient Volume After Newspaper Reporting of Medical Errors.
Fukuda, Haruhisa
2017-06-30
The aim of this study was to investigate the influence of medical error case reporting by national newspapers on inpatient volume at acute care hospitals. A case-control study was conducted using the article databases of 3 major Japanese newspapers with nationwide circulation between fiscal years 2012 and 2013. Data on inpatient volume at acute care hospitals were obtained from a Japanese government survey between fiscal years 2011 and 2014. Panel data were constructed and analyzed using a difference-in-differences design. The setting was acute care hospitals in Japan. Hospitals named in articles that included the terms "medical error" and "hospital" were designated case hospitals, which were matched with control hospitals using corresponding locations, nurse-to-patient ratios, and bed numbers. The exposure was medical error case reporting in newspapers, and the outcome was the change in hospital inpatient volume after the error reports. The sample comprised 40 case hospitals and 40 control hospitals. Difference-in-differences analyses indicated that newspaper reporting of medical errors was not significantly associated (P = 0.122) with overall inpatient volume. Medical error case reporting by newspapers showed no influence on inpatient volume. Hospitals therefore have little incentive to respond adequately and proactively to medical errors. There may be a need for government intervention to improve the post-error response and encourage better health care safety.
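A minimal sketch of a difference-in-differences specification of this kind, estimated by OLS on hospital-by-period panel data, is shown below; the column names and clustering choice are hypothetical and do not reproduce the study's exact model.

```python
# Sketch: difference-in-differences estimate of the reporting effect on
# inpatient volume. df columns (hypothetical): hospital_id, inpatient_volume,
# case (1 = named in a medical-error article), post (1 = period after report).
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame):
    model = smf.ols("inpatient_volume ~ case + post + case:post", data=df)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital_id"]})
    # The interaction term is the difference-in-differences estimator.
    return result.params["case:post"], result.pvalues["case:post"]
```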
Parsing partial molar volumes of small molecules: a molecular dynamics study.
Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V
2011-04-28
We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with the pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.
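For context, the Kirkwood-Buff route to the infinite-dilution partial molar volume takes the standard form sketched below; the notation (s = solute, w = water) is assumed here and the simulation-specific finite-size corrections are omitted.

```latex
% Kirkwood-Buff expression for the partial molar volume at infinite dilution:
% \kappa_T^0 is the pure-solvent isothermal compressibility and g_{sw}(r) the
% solute-water radial distribution function.
\bar{V}_{s}^{\infty} = k_{B} T\,\kappa_{T}^{0} - G_{sw},
\qquad
G_{sw} = 4\pi \int_{0}^{\infty} \bigl[g_{sw}(r) - 1\bigr]\, r^{2}\, dr
```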
Graziano, Giuseppe
2006-04-07
The partial molar volume of n-alcohols at infinite dilution in water is smaller than the molar volume in the neat liquid phase. It is shown that the formula for the partial molar volume at infinite dilution obtained from the scaled particle theory equation of state for binary hard sphere mixtures is able to reproduce in a satisfactory manner the experimental data over a large temperature range. This finding implies that the packing effects play the fundamental role in determining the partial molar volume at infinite dilution in water also for solutes, such as n-alcohols, forming H bonds with water molecules. Since the packing effects in water are largely related to the small size of its molecules, the latter feature is the ultimate cause of the decrease in partial molar volume associated with the hydrophobic effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.
Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quanti@ aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. find that precision and accuracy can be balanced to greatly improve the technique and We present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity tomore » error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transpoti in single fractures.« less
Low-energy pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Ai, Li; Kaufmann, W. B.
1998-02-01
An analysis of low-energy charged pion-nucleon data from recent π±p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f² = 0.0756 ± 0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided.
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
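For orientation, a common way to write the contextual-effect model is sketched below in notation assumed here; using the observed group mean as the L2 predictor corresponds to the uncorrected approach, while the correction approaches replace it with a latent, error-adjusted group mean.

```latex
% Contextual-effect specification: x_{ij} individual predictor,
% \bar{x}_{\cdot j} observed group mean, u_j group-level residual.
y_{ij} = \beta_0 + \beta_w\,\bigl(x_{ij} - \bar{x}_{\cdot j}\bigr)
       + \beta_b\,\bar{x}_{\cdot j} + u_j + e_{ij},
\qquad
\beta_c = \beta_b - \beta_w
% \beta_c is the contextual effect; measurement and sampling error in
% \bar{x}_{\cdot j} bias its estimate unless corrected.
```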
Schwarz, Daniel A.; Arman, Krikor G.; Kakwan, Mehreen S.; Jamali, Ameen M.; Elmeligy, Ayman A.; Buchman, Steven R.
2015-01-01
Background The authors’ goal was to ascertain regenerate bone-healing metrics using quantitative histomorphometry at a single consolidation period. Methods Rats underwent either mandibular distraction osteogenesis (n=7) or partially reduced fractures (n=7); their contralateral mandibles were used as controls (n=11). External fixators were secured and unilateral osteotomies performed, followed by either mandibular distraction osteogenesis (4 days’ latency, then 0.3 mm every 12 hours for 8 days; 5.1 mm) or partially reduced fractures (fixed immediately postoperatively; 2.1 mm); both groups underwent 4 weeks of consolidation. After tissue processing, bone volume/tissue volume ratio, osteoid volume/tissue volume ratio, and osteocyte count per high-power field were analyzed by means of quantitative histomorphometry. Results Contralateral mandibles had statistically greater bone volume/tissue volume ratio and osteocyte count per high-power field compared with both mandibular distraction osteogenesis and partially reduced fractures by almost 50 percent, whereas osteoid volume/tissue volume ratio was statistically greater in both mandibular distraction osteogenesis specimens and partially reduced fractures compared with contralateral mandibles. No statistical difference in bone volume/tissue volume ratio, osteoid volume/tissue volume ratio, or osteocyte count per high-power field was found between mandibular distraction osteogenesis specimens and partially reduced fractures. Conclusions The authors’ findings demonstrate significantly decreased bone quantity and maturity in mandibular distraction osteogenesis specimens and partially reduced fractures compared with contralateral mandibles using the clinically analogous protocols. If these results are extrapolated clinically, treatment strategies may require modification to ensure reliable, predictable, and improved outcomes. PMID:20463629
Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies
NASA Astrophysics Data System (ADS)
Yang, Jun
2000-12-01
Partial volume effect is an artifact mainly due to the limited resolution of the imaging sensor. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially for Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large amount of partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, and (4) MR-based 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at-risk and disease patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
Tachibana, Hidekazu; Takagi, Toshio; Kondo, Tsunenori; Ishida, Hideki; Tanabe, Kazunari
2018-04-01
To compare surgical outcomes, including renal function and the preserved renal parenchymal volume, between robot-assisted laparoscopic partial nephrectomy and laparoscopic partial nephrectomy using propensity score-matched analyses. In total, 253 patients with a normal contralateral kidney who underwent laparoscopic partial nephrectomy (n = 131) or robot-assisted laparoscopic partial nephrectomy (n = 122) with renal arterial clamping between 2010 and 2015 were included. Patients' background and tumor factors were adjusted by propensity score matching. Surgical outcomes, including postoperative renal function, complications, warm ischemia time and preserved renal parenchymal volume, evaluated by volumetric analysis, were compared between the surgical procedures. After matching, 64 patients were assigned to each group. The mean age was 56-57 years, and the mean tumor size was 22 mm. Approximately 50% of patients had low complexity tumors (RENAL nephrometry score 4-7). The incidence rate of acute kidney failure was significantly lower in the robot-assisted laparoscopic partial nephrectomy group (11%) than in the laparoscopic partial nephrectomy group (23%) (P = 0.049), and the warm ischemia time was shorter for robot-assisted laparoscopic partial nephrectomy (17 min) than for laparoscopic partial nephrectomy (25 min) (P < 0.0001). The preservation rate of renal function, measured by the estimated glomerular filtration rate, at 6 months post-surgery was 96% for robot-assisted laparoscopic partial nephrectomy and 90% for laparoscopic partial nephrectomy (P < 0.0001). The preserved renal parenchymal volume was higher for robot-assisted laparoscopic partial nephrectomy (89%) than for laparoscopic partial nephrectomy (77%; P < 0.0001). The rates of perioperative complications, surgical margin status and length of hospital stay were equivalent for both techniques. Robot-assisted laparoscopic partial nephrectomy allows better preservation of renal function and parenchymal volume than laparoscopic partial nephrectomy. © 2018 The Japanese Urological Association.
NASA Astrophysics Data System (ADS)
Soret, Marine; Alaoui, Jawad; Koulibaly, Pierre M.; Darcourt, Jacques; Buvat, Irène
2007-02-01
Objectives: Partial volume effect (PVE) is a major source of bias in brain SPECT imaging of the dopamine transporter. Various PVE corrections (PVC) making use of anatomical data have been developed and yield encouraging results. However, their accuracy in clinical data is difficult to demonstrate because the gold standard (GS) is usually unknown. The objective of this study was to assess the accuracy of PVC. Method: Twenty-three patients underwent MRI and 123I-FP-CIT SPECT. The binding potential (BP) values were measured in the striata segmented on the MR images after coregistration to the SPECT images. These values were calculated without and with an original PVC. In addition, for each patient, a Monte Carlo simulation of the SPECT scan was performed. For these simulations, where the true simulated BP values were known, percent biases in the BP estimates were calculated. For the real data, an evaluation method that simultaneously estimates the GS and a quadratic relationship between the observed and the GS values was used. It yields a surrogate mean square error (sMSE) between the estimated values and the estimated GS values. Results: The average percent difference between BP measured for real and for simulated patients was 0.7±9.7% without PVC and -8.5±14.5% with PVC, suggesting that the simulated data reproduced the real data well enough. For the simulated patients, BP was underestimated by 66.6±9.3% on average without PVC and overestimated by 11.3±9.5% with PVC, demonstrating the greater accuracy of BP estimates with PVC. For the simulated data, sMSE was 27.3 without PVC and 0.90 with PVC, confirming that our sMSE index properly captured the greater accuracy of BP estimates with PVC. For the real patient data, sMSE was 50.8 without PVC and 3.5 with PVC. These results were consistent with those obtained on the simulated data, suggesting that for clinical data, and despite probable segmentation and registration errors, BP was more accurately estimated with PVC than without. Conclusion: PVC was very efficient at reducing the error in BP estimates in clinical imaging of the dopamine transporter.
Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier
2017-01-01
Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in the perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting, where the first data point (b = 0 s/mm2) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with perfusion fraction values smaller than biexponential fitting without partial volume correction. The results of the current study indicate that the NNLS analysis in combination with biexponential curve fitting allows correction for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017. © 2016 Wiley Periodicals, Inc.
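A minimal sketch of the voxel-level biexponential fit, with the partial-volume correction implemented by dropping the b = 0 sample in flagged voxels, is shown below; the b-values, bounds, and starting values are illustrative assumptions, and the NNLS flagging step is represented only by a boolean input.

```python
# Sketch: IVIM biexponential fit of one voxel's diffusion signal decay,
# dropping b = 0 when an NNLS analysis has flagged a very fast component
# (partial volume with flowing blood). All parameters are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

def fit_voxel(b_values, signal, fast_decay_flagged: bool):
    b = np.asarray(b_values, dtype=float)
    s = np.asarray(signal, dtype=float)
    if fast_decay_flagged:
        b, s = b[1:], s[1:]                 # discard the b = 0 s/mm2 sample
    p0 = (s[0], 0.1, 0.05, 1e-3)            # S0, f, D*, D starting values
    bounds = ([0, 0, 1e-3, 1e-5], [np.inf, 1, 1, 1e-2])
    popt, _ = curve_fit(ivim, b, s, p0=p0, bounds=bounds)
    return dict(zip(("S0", "f", "Dstar", "D"), popt))
```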
Pfützner, Andreas; Schipper, Christina; Ramljak, Sanja; Flacke, Frank; Sieber, Jochen; Forst, Thomas; Musholt, Petra B
2013-11-01
Accuracy of blood glucose readings is (among other things) dependent on the test strip being completely filled with sufficient sample volume. The devices are supposed to display an error message in case of incomplete filling. This laboratory study was performed to test the performance of 31 commercially available devices in case of incomplete strip filling. Samples with two different glucose levels (60-90 and 300-350 mg/dl) were used to generate three different sample volumes: 0.20 µl (too low volume for any device), 0.32 µl (borderline volume), and 1.20 µl (low but supposedly sufficient volume for all devices). After a point-of-care capillary reference measurement (StatStrip, NovaBiomedical), the meter strip was filled (6x) with the respective volume, and the response of the meters (two devices) was documented (72 determinations/meter type). Correct response was defined as either an error message indicating incomplete filling or a correct reading (±20% compared with reference reading). Only five meters showed 100% correct responses [BGStar and iBGStar (both Sanofi), ACCU-CHEK Compact+ and ACCU-CHEK Mobile (both Roche Diagnostics), OneTouch Verio (LifeScan)]. The majority of the meters (17) had up to 10% incorrect reactions [predominantly incorrect readings with sufficient volume; Precision Xceed and Xtra, FreeStyle Lite, and Freedom Lite (all Abbott); GlucoCard+ and GlucoMen GM (both Menarini); Contour, Contour USB, and Breeze2 (all Bayer); OneTouch Ultra Easy, Ultra 2, and Ultra Smart (all LifeScan); Wellion Dialog and Premium (both MedTrust); FineTouch (Terumo); ACCU-CHEK Aviva (Roche); and GlucoTalk (Axis-Shield)]. Ten percent to 20% incorrect reactions were seen with OneTouch Vita (LifeScan), ACCU-CHEK Aviva Nano (Roche), OmniTest+ (BBraun), and AlphaChek+ (Berger Med). More than 20% incorrect reactions were obtained with Pura (Ypsomed), GlucoCard Meter and GlucoMen LX (both Menarini), Elite (Bayer), and MediTouch (Medisana). In summary, partial or incomplete filling of glucose meter strips is often associated with inaccurate readings. These findings underline the importance of appropriate patient education on this aspect of blood glucose self-monitoring. © 2013 Diabetes Technology Society.
Partial volume correction of magnetic resonance spectroscopic imaging
NASA Astrophysics Data System (ADS)
Lu, Yao; Wu, Dee; Magnotta, Vincent A.
2007-03-01
The ability to study the biochemical composition of the brain is becoming important to better understand neurodegenerative and neurodevelopmental disorders. Magnetic Resonance Spectroscopy (MRS) can non-invasively provide quantification of brain metabolites in localized regions. The reliability of MRS is limited in part due to partial volume artifacts. This results from the relatively large voxels that are required to acquire sufficient signal-to-noise ratios for the studies. Partial volume artifacts result when a MRS voxel contains a mixture of tissue types. Concentrations of metabolites vary from tissue to tissue. When a voxel contains a heterogeneous tissue composition, the spectroscopic signal acquired from this voxel will consist of the signal from different tissues making reliable measurements difficult. We have developed a novel tool for the estimation of partial volume tissue composition within MRS voxels thus allowing for the correction of partial volume artifacts. In addition, the tool can localize MR spectra to anatomical regions of interest. The tool uses tissue classification information acquired as part of a structural MR scan for the same subject. The tissue classification information is co-registered with the spectroscopic data. The user can quantify the partial volume composition of each voxel and use this information as covariates for metabolite concentrations.
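The core bookkeeping step, computing the fraction of each tissue class inside one spectroscopic voxel from a co-registered tissue classification, can be sketched as below; the label conventions and the toy voxel footprint are hypothetical and stand in for the tool's actual registration and resampling pipeline.

```python
import numpy as np

def voxel_tissue_fractions(label_map, voxel_mask, labels=(1, 2, 3)):
    """Fraction of each tissue class inside one MRS voxel.

    label_map  : 3-D integer array of tissue classes (here 1=GM, 2=WM, 3=CSF),
                 already co-registered and resampled to match the MRS grid.
    voxel_mask : boolean array of the same shape, True inside the MRS voxel.
    """
    inside = label_map[voxel_mask]
    return {lab: np.count_nonzero(inside == lab) / inside.size for lab in labels}

# Toy example: an 8 mm cubic "MRS voxel" over a 1 mm structural label map
rng = np.random.default_rng(0)
labels = rng.integers(1, 4, size=(32, 32, 32))
mask = np.zeros(labels.shape, dtype=bool)
mask[12:20, 12:20, 12:20] = True
print(voxel_tissue_fractions(labels, mask))   # fractions sum to 1 for these labels
```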
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is evaluated by a simulation study and a numerical example.
Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer
2014-01-01
National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
Single-Isocenter Multiple-Target Stereotactic Radiosurgery: Risk of Compromised Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, Justin, E-mail: justin.roper@emory.edu; Department of Biostatistics and Bioinformatics, Winship Cancer Institute of Emory University, Atlanta, Georgia; Chanyavanich, Vorakarn
2015-11-01
Purpose: To determine the dosimetric effects of rotational errors on target coverage using volumetric modulated arc therapy (VMAT) for multitarget stereotactic radiosurgery (SRS). Methods and Materials: This retrospective study included 50 SRS cases, each with 2 intracranial planning target volumes (PTVs). Both PTVs were planned for simultaneous treatment to 21 Gy using a single-isocenter, noncoplanar VMAT SRS technique. Rotational errors of 0.5°, 1.0°, and 2.0° were simulated about all axes. The dose to 95% of the PTV (D95) and the volume covered by 95% of the prescribed dose (V95) were evaluated using multivariate analysis to determine how PTV coverage was related to PTV volume, PTV separation, and rotational error. Results: At 0.5° rotational error, D95 values and V95 coverage rates were ≥95% in all cases. For rotational errors of 1.0°, 7% of targets had D95 and V95 values <95%. Coverage worsened substantially when the rotational error increased to 2.0°: D95 and V95 values were >95% for only 63% of the targets. Multivariate analysis showed that PTV volume and distance to isocenter were strong predictors of target coverage. Conclusions: The effects of rotational errors on target coverage were studied across a broad range of SRS cases. In general, the risk of compromised coverage increased with decreasing target volume, increasing rotational error and increasing distance between targets. Multivariate regression models from this study may be used to quantify the dosimetric effects of rotational errors on target coverage given patient-specific input parameters of PTV volume and distance to isocenter.
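The coverage metrics used above are straightforward to compute from a dose grid and a PTV mask; the sketch below assumes the usual definitions (D95 as the minimum dose received by 95% of the PTV, V95 as the fraction of the PTV receiving at least 95% of the 21 Gy prescription) and uses a synthetic dose distribution purely for illustration.

```python
import numpy as np

def coverage_metrics(dose, ptv_mask, rx_dose):
    """D95: minimum dose received by 95% of PTV voxels (Gy).
       V95: fraction of the PTV receiving at least 95% of the prescription."""
    d = dose[ptv_mask]
    d95 = np.percentile(d, 5)          # 95% of PTV voxels receive at least this
    v95 = np.count_nonzero(d >= 0.95 * rx_dose) / d.size
    return d95, v95

# Synthetic dose grid around a 21 Gy prescription, with a cubic "PTV"
rng = np.random.default_rng(1)
dose = rng.normal(21.5, 0.6, size=(50, 50, 50))
ptv = np.zeros_like(dose, dtype=bool)
ptv[20:30, 20:30, 20:30] = True

d95, v95 = coverage_metrics(dose, ptv, rx_dose=21.0)
print(f"D95 = {d95:.2f} Gy, V95 = {100 * v95:.1f}%")
```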
Ono, Tomohiro; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ono, Yuka; Ishigaki, Takashi; Hiraoka, Masahiro
2017-09-01
To estimate the lung tumor position from multiple anatomical features on four-dimensional computed tomography (4D-CT) data sets using single regression analysis (SRA) and multiple regression analysis (MRA) approaches, and to evaluate the impact of the approach on internal target volume (ITV) for stereotactic body radiotherapy (SBRT) of the lung. Eleven consecutive lung cancer patients (12 cases) underwent 4D-CT scanning. The three-dimensional (3D) lung tumor motion exceeded 5 mm. The 3D tumor position and anatomical features, including lung volume, diaphragm, abdominal wall, and chest wall positions, were measured on 4D-CT images. The tumor position was estimated by SRA using each anatomical feature and by MRA using all anatomical features. The difference between the actual and estimated tumor positions was defined as the root-mean-square error (RMSE). A standard partial regression coefficient for the MRA was evaluated. The 3D lung tumor position showed a high correlation with the lung volume (R = 0.92 ± 0.10). Additionally, ITVs derived from the SRA and MRA approaches were compared with the ITV derived from contouring gross tumor volumes on all 10 phases of the 4D-CT (conventional ITV). The RMSE of the SRA was within 3.7 mm in all directions. Also, the RMSE of the MRA was within 1.6 mm in all directions. The standard partial regression coefficient for the lung volume was the largest and had the most influence on the estimated tumor position. Compared with the conventional ITV, the average percentage decrease in ITV was 31.9% and 38.3% using the SRA and MRA approaches, respectively. The estimation accuracy of lung tumor position was improved by the MRA approach, which provided a smaller ITV than the conventional ITV. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
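A rough illustration of the single versus multiple regression comparison and the standardised partial regression coefficients, assuming ordinary least squares on hypothetical feature values rather than the study's actual 4D-CT measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical anatomical features over the 10 phases of a 4D-CT:
# lung volume (L), diaphragm, abdominal wall and chest wall positions (mm)
X = np.column_stack([
    rng.uniform(2.0, 3.5, 10),      # lung volume
    rng.uniform(-10.0, 10.0, 10),   # diaphragm position
    rng.uniform(-5.0, 5.0, 10),     # abdominal wall position
    rng.uniform(-3.0, 3.0, 10),     # chest wall position
])
# Simulated superior-inferior tumour position, driven mostly by lung volume
y = 8.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.5, 10)

sra = LinearRegression().fit(X[:, [0]], y)   # single regression (lung volume only)
mra = LinearRegression().fit(X, y)           # multiple regression (all features)

rmse = lambda pred: float(np.sqrt(np.mean((y - pred) ** 2)))
print("RMSE, single regression  :", round(rmse(sra.predict(X[:, [0]])), 2))
print("RMSE, multiple regression:", round(rmse(mra.predict(X)), 2))

# Standardised partial regression coefficients for the multiple regression
betas = mra.coef_ * X.std(axis=0) / y.std()
print("standardised coefficients:", np.round(betas, 2))
```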
Effects of Regularisation Priors and Anatomical Partial Volume Correction on Dynamic PET Data
NASA Astrophysics Data System (ADS)
Caldeira, Liliana L.; Silva, Nuno da; Scheins, Jürgen J.; Gaens, Michaela E.; Shah, N. Jon
2015-08-01
Dynamic PET provides temporal information about the tracer uptake. However, each PET frame usually has low statistics, resulting in noisy images. Furthermore, PET images suffer from partial volume effects. The goal of this study is to understand the effects of prior regularisation on dynamic PET data and subsequent anatomical partial volume correction. The Median Root Prior (MRP) regularisation method was used in this work during reconstruction. The quantification and noise in the image domain and the time domain (time-activity curves), as well as the impact on parametric images, are assessed and compared with Ordinary Poisson Ordered Subset Expectation Maximisation (OP-OSEM) reconstruction with and without a Gaussian filter. This study shows the improvement in PET images and time-activity curves (TAC) in terms of noise, as well as in the parametric images, when using prior regularisation on dynamic PET data. Anatomical partial volume correction improves the TAC and, consequently, the parametric images. Therefore, the use of MRP with anatomical partial volume correction is of interest for dynamic PET studies.
Snider, James W; Mutaf, Yildirim; Nichols, Elizabeth; Hall, Andrea; Vadnais, Patrick; Regine, William F; Feigenberg, Steven J
2017-01-01
Accelerated partial breast irradiation has caused higher than expected rates of poor cosmesis. At our institution, a novel breast stereotactic radiotherapy device has demonstrated dosimetric distributions similar to those in brachytherapy. This study analyzed comparative dose distributions achieved with the device and intensity-modulated radiation therapy accelerated partial breast irradiation. Nine patients underwent computed tomography simulation in the prone position using device-specific immobilization on an institutional review board-approved protocol. Accelerated partial breast irradiation target volumes (planning target volume_10mm) were created per the National Surgical Adjuvant Breast and Bowel Project B-39 protocol. Additional breast stereotactic radiotherapy volumes using smaller margins (planning target volume_3mm) were created based on improved immobilization. Intensity-modulated radiation therapy and breast stereotactic radiotherapy accelerated partial breast irradiation plans were separately generated for appropriate volumes. Plans were evaluated based on established dosimetric surrogates of poor cosmetic outcomes. Wilcoxon rank sum tests were utilized to contrast volumes of critical structures receiving a percentage of total dose (Vx). The breast stereotactic radiotherapy device consistently reduced dose to all normal structures with equivalent target coverage. The ipsilateral breast V20-100 was significantly reduced (P < .05) using planning target volume_10mm, with substantial further reductions when targeting planning target volume_3mm. Doses to the chest wall, ipsilateral lung, and breast skin were also significantly lessened. The breast stereotactic radiotherapy device's uniform dosimetric improvements over intensity-modulated accelerated partial breast irradiation in this series indicate a potential to improve outcomes. Clinical trials investigating this benefit have begun accrual.
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-03-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples - providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
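A schematic version of the calibration step, pairing synthetic "spectra" with reference OC values and reporting R2, bias and error on a held-out test set; it uses scikit-learn's PLSRegression and is not the authors' exact preprocessing or site/date splitting scheme.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 "spectra" of 400 wavenumber points, with the
# reference OC depending on a few absorption bands plus measurement noise
n_samples, n_points = 200, 400
spectra = rng.normal(size=(n_samples, n_points))
coefs = np.zeros(n_points)
coefs[[50, 120, 300]] = [0.8, 0.5, 0.3]          # pretend absorption bands
oc = spectra @ coefs + rng.normal(0.0, 0.1, n_samples)

X_cal, X_test, y_cal, y_test = train_test_split(spectra, oc, test_size=0.3,
                                                random_state=0)
pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
pred = pls.predict(X_test).ravel()

bias = float(np.mean(pred - y_test))
error = float(np.sqrt(np.mean((pred - y_test) ** 2)))   # RMSE on the test set
r2 = float(np.corrcoef(pred, y_test)[0, 1] ** 2)
print(f"R^2 = {r2:.3f}, bias = {bias:.3f}, error = {error:.3f}")
```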
Delwiche, Stephen R; Reeves, James B
2010-01-01
In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocessing functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocessing functions and various types of spectroscopy data.
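The preprocessing-plus-PLS pipeline under discussion can be sketched as follows, using SciPy's Savitzky-Golay filter (quadratic polynomial, as in the study) and a deliberately weak synthetic analyte; window size, component count and the data are illustrative. The point of the toy loop is that calibration R2 alone stays flattering regardless of derivative order, which is exactly the pitfall the authors warn about.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def pretreat(spectra, window=11, deriv=1):
    """Savitzky-Golay smoothing/derivative (quadratic polynomial), applied
    along the wavelength axis before PLS calibration."""
    return savgol_filter(spectra, window_length=window, polyorder=2,
                         deriv=deriv, axis=1)

rng = np.random.default_rng(1)
X = np.cumsum(rng.normal(size=(40, 300)), axis=1)      # smooth-ish fake NIR spectra
y = 0.05 * X[:, 150] + rng.normal(0.0, 0.2, 40)        # weak "analyte" response

for deriv in (0, 1, 2):
    Xp = pretreat(X, deriv=deriv)
    pls = PLSRegression(n_components=8).fit(Xp, y)
    print(f"derivative order {deriv}: calibration R^2 = {pls.score(Xp, y):.3f}")
# A flattering calibration R^2 with few samples and a weak analyte says little
# about how the model behaves on samples outside the calibration set.
```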
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result, there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
A novel scatter separation method for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-06-01
X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor >10 was observed for most inspected volumes-of-interest, when comparing the corrected and uncorrected total volumes.
Prediction and error of baldcypress stem volume from stump diameter
Bernard R. Parresol
1998-01-01
The need to estimate the volume of removals occurs for many reasons, such as in trespass cases, severance tax reports, and post-harvest assessments. A logarithmic model is presented for prediction of baldcypress total stem cubic foot volume using stump diameter as the independent variable. Because the error of prediction is as important as the volume estimate, the...
Ver Elst, K; Vermeiren, S; Schouwers, S; Callebaut, V; Thomson, W; Weekx, S
2013-12-01
CLSI recommends a minimal citrate tube fill volume of 90%. A validation protocol with clinical and analytical components was set up to determine the tube fill threshold for international normalized ratio of prothrombin time (PT-INR), activated partial thromboplastin time (aPTT) and fibrinogen. Citrated coagulation samples from 16 healthy donors and eight patients receiving vitamin K antagonists (VKA) were evaluated. Eighty-nine tubes were filled to varying volumes of >50%. Coagulation tests were performed on an ACL TOP 500 CTS®. Receiver Operating Characteristic (ROC) plots, with total error (TE) and critical difference (CD) as possible acceptance criteria, were used to determine the fill threshold. ROC analysis was most accurate with CD for PT-INR and TE for aPTT, resulting in thresholds of 63% for PT and 80% for aPTT. By adapted ROC analysis, based on setting the threshold at the point of 100% sensitivity with maximum specificity, CD was best for PT and TE for aPTT, resulting in thresholds of 73% for PT and 90% for aPTT. For fibrinogen, the method was only valid with the TE criterion, at a 63% fill volume. In our study, we validated minimal citrate tube fill volumes of 73%, 90% and 63% for PT-INR, aPTT and fibrinogen, respectively. © 2013 John Wiley & Sons Ltd.
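The "adapted ROC" rule (threshold set at 100% sensitivity with the best achievable specificity) can be expressed compactly with scikit-learn's roc_curve; the fill volumes and pass/fail flags below are invented for illustration and do not reproduce the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: fill volume (%) per tube and whether the coagulation
# result exceeded the acceptance criterion (1 = unacceptable, 0 = acceptable)
fill = np.array([95, 92, 90, 88, 85, 82, 80, 78, 75, 72, 70, 65, 62, 60, 55])
bad  = np.array([ 0,  0,  0,  0,  0,  0,  0,  0,  0,  1,  0,  1,  1,  1,  1])

# Score = -fill, so that lower fill volumes count as "more positive"
fpr, tpr, thresholds = roc_curve(bad, -fill)

# Adapted ROC rule: smallest fill volume that still flags every unacceptable
# tube (100% sensitivity) at the best available specificity
ok = tpr == 1.0
threshold_fill = -thresholds[ok][np.argmin(fpr[ok])]
print(f"minimal acceptable fill volume ~ {threshold_fill:.0f}%")
```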
NASA Astrophysics Data System (ADS)
Lemieux, Louis
2001-07-01
A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was specifically developed in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM) and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%, intra-cranial (ICV): 0.1%, CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
Automatic segmentation and reconstruction of the cortex from neonatal MRI.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Joseph V
2007-11-15
Segmentation and reconstruction of cortical surfaces from magnetic resonance (MR) images are more challenging for developing neonates than adults. This is mainly due to the dynamic changes in the contrast between gray matter (GM) and white matter (WM) in both T1- and T2-weighted images (T1w and T2w) during brain maturation. In particular in neonatal T2w images WM typically has higher signal intensity than GM. This causes mislabeled voxels during cortical segmentation, especially in the cortical regions of the brain and in particular at the interface between GM and cerebrospinal fluid (CSF). We propose an automatic segmentation algorithm detecting these mislabeled voxels and correcting errors caused by partial volume effects. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic expectation maximization (EM) scheme. Quantitative validation against manual segmentation demonstrates good performance (the mean Dice value: 0.758±0.037 for GM and 0.794±0.078 for WM). The inner, central and outer cortical surfaces are then reconstructed using implicit surface evolution. A landmark study is performed to verify the accuracy of the reconstructed cortex (the mean surface reconstruction error: 0.73 mm for the inner surface and 0.63 mm for the outer). Both segmentation and reconstruction have been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. This preliminary analysis confirms previous findings that cortical surface area and curvature increase with age, and that surface area scales to cerebral volume according to a power law, while cortical thickness is not related to age or brain growth.
Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui
2018-06-01
Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs), but its adoption in clinical practice has been missing, partially due to a lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations, CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver, and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver accuracy is increased by increasing the degree of polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from the best FV and dGFE approximations are used as the baseline for error quantification. On average, velocity errors for second-best approximations are approximately 1 cm/s for a [0, 125] cm/s velocity magnitude field. Results show that high-order dGFE provides better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.
Imai, Takashi; Kovalenko, Andriy; Hirata, Fumio
2005-04-14
The three-dimensional reference interaction site model (3D-RISM) theory is applied to the analysis of hydration effects on the partial molar volume of proteins. For the native structure of some proteins, the partial molar volume is decomposed into geometric and hydration contributions using the 3D-RISM theory combined with the geometric volume calculation. The hydration contributions are correlated with the surface properties of the protein. The thermal volume, which is the volume of voids around the protein induced by the thermal fluctuation of water molecules, is directly proportional to the accessible surface area of the protein. The interaction volume, which is the contribution of electrostatic interactions between the protein and water molecules, is apparently governed by the charged atomic groups on the protein surface. The polar atomic groups do not make any contribution to the interaction volume. The volume differences between low- and high-pressure structures of lysozyme are also analyzed by the present method.
ERIC Educational Resources Information Center
Kearsley, Greg P.
This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…
NASA Astrophysics Data System (ADS)
Paul, M. Danish John; Shruthi, N.; Anantharaj, R.
2018-04-01
The derived thermodynamic properties, namely excess molar volume, partial molar volume, excess partial molar volume and apparent volume, of binary mixtures of acetic acid + n-butanol and acetic acid + water have been investigated using measured densities of the mixtures at temperatures from 293.15 K to 343.15 K.
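For reference, the excess molar volume in such studies is conventionally obtained from the measured mixture density and the pure-component densities; a small sketch with illustrative (not measured) property values for acetic acid + water is given below.

```python
# Excess molar volume of a binary mixture from measured densities:
#   V_m = (x1*M1 + x2*M2) / rho_mix,  V_i = M_i / rho_i,
#   V_E = V_m - (x1*V1 + x2*V2)
def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """Molar masses in g/mol, densities in g/cm^3; returns V_E in cm^3/mol."""
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix
    v_ideal = x1 * M1 / rho1 + x2 * M2 / rho2
    return v_mix - v_ideal

# Illustrative (not measured) values for acetic acid (1) + water (2) near 298 K
print(round(excess_molar_volume(x1=0.3, M1=60.05, rho1=1.049,
                                M2=18.02, rho2=0.997, rho_mix=1.045), 3))
```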
Medical image segmentation based on SLIC superpixels model
NASA Astrophysics Data System (ADS)
Chen, Xiang-ting; Zhang, Fan; Zhang, Ruo-ya
2017-01-01
Medical imaging has been widely used in clinical practice and is an important basis for medical experts to diagnose disease. However, medical images are affected by many unstable factors: the imaging mechanism is complex, target displacement can cause reconstruction defects, the partial volume effect introduces errors, and equipment wear degrades image quality, all of which greatly increase the complexity of subsequent image processing. A segmentation algorithm based on SLIC (Simple Linear Iterative Clustering) superpixels is used in the preprocessing stage to reduce the influence of reconstruction defects and noise by exploiting feature similarity. At the same time, the good clustering behaviour greatly reduces the complexity of the algorithm, providing an effective basis for rapid diagnosis by experts.
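A minimal example of the SLIC superpixel step, using scikit-image (the slic call with channel_axis=None assumes a recent scikit-image release) and a generic greyscale test image as a stand-in for a medical slice; averaging intensities within each superpixel mirrors the preprocessing role described above.

```python
import numpy as np
from skimage import data, segmentation

# A generic greyscale test image stands in for a CT/MR slice
image = data.camera().astype(float) / 255.0

# SLIC superpixels; channel_axis=None tells scikit-image the input is greyscale
labels = segmentation.slic(image, n_segments=400, compactness=0.1,
                           channel_axis=None, start_label=1)

# Replace every superpixel by its mean intensity, a common preprocessing step
# that suppresses noise while preserving region boundaries
mean_image = np.zeros_like(image)
for lab in np.unique(labels):
    mask = labels == lab
    mean_image[mask] = image[mask].mean()

print(labels.max(), "superpixels generated; output shape:", mean_image.shape)
```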
Low-energy pion-nucleon scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbs, W.R.; Ai, L.; Kaufmann, W.B.
An analysis of low-energy charged pion-nucleon data from recent π±p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f^2 = 0.0756 ± 0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided. © 1998 The American Physical Society.
NASA Astrophysics Data System (ADS)
Kinnard, Lisa M.; Gavrielides, Marios A.; Myers, Kyle J.; Zeng, Rongping; Peregoy, Jennifer; Pritchard, William; Karanian, John W.; Petrick, Nicholas
2008-03-01
With high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that it is impacted by characteristics of the patient, the software tool and the CT system. The overall goal of this research is to quantify the various sources of measurement error and, when possible, minimize their effects. In the current study, we estimated nodule volume from ten repeat scans of an anthropomorphic phantom containing two synthetic spherical lung nodules (diameters: 5 and 10 mm; density: -630 HU), using a 16-slice Philips CT with 20, 50, 100 and 200 mAs exposures and 0.8 and 3.0 mm slice thicknesses. True volume was estimated from an average of diameter measurements made using digital calipers. We report variance and bias results for volume measurements as a function of slice thickness, nodule diameter, and X-ray exposure.
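The reference-volume and error definitions used in such repeat-scan studies are simple to reproduce: the sketch below derives the true volume from averaged caliper diameters (sphere assumption) and reports bias and variability of hypothetical repeat estimates; all numbers are illustrative.

```python
import numpy as np

def sphere_volume_from_diameters(diameters_mm):
    """Reference volume from the mean of repeated caliper diameter readings."""
    d = float(np.mean(diameters_mm))
    return np.pi * d ** 3 / 6.0                     # mm^3

def percent_error(measured, reference):
    return 100.0 * (np.asarray(measured, float) - reference) / reference

# Hypothetical numbers for a nominal 10 mm nodule and ten repeat CT estimates
reference = sphere_volume_from_diameters([9.98, 10.02, 10.01])
repeats = [540.1, 529.7, 512.3, 548.9, 535.0,
           520.8, 541.6, 530.2, 526.4, 537.9]       # mm^3
err = percent_error(repeats, reference)
print(f"bias = {err.mean():+.1f}%, spread (SD) = {err.std(ddof=1):.1f}%")
```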
NASA Astrophysics Data System (ADS)
Dingwell, Donald B.; Brearley, Mark
1988-12-01
The densities of 10 melts in the CaO-FeO-Fe2O3-SiO2 system were determined in equilibrium with air, in the temperature range of 1200 to 1550°C, using the double-bob Archimedean technique. Melt compositions range from 6 to 58 wt% SiO2, 14 to 76 wt% Fe2O3 and 10 to 46 wt% CaO. The ferric-ferrous ratios of glasses drop-quenched from loop fusion equilibration experiments were determined by 57Fe Mössbauer spectroscopy. Melt densities range from 2.689 to 3.618 g/cm3 with a mean standard deviation from replicate experiments of 0.15%. Least-squares regressions of molar volume versus molar composition have been performed and the root mean squared deviation shows that a linear combination of partial molar volumes for the oxide components (CaO, FeO, Fe2O3 and SiO2) cannot describe the data set within experimental error. Instead, the inclusion of excess terms in CaFe3+ and CaSi (product terms using the oxides) is required to yield a fit that describes the experimental data within error. The nonlinear compositional dependence of the molar volumes of melts in this system can be explained by structural considerations of the roles of Ca and Fe3+. The volume behavior of melts in this system is significantly different from that in the Na2O-FeO-Fe2O3-SiO2 system, consistent with the proposal that a proportion of Fe3+ in melts in the CaO-FeO-Fe2O3-SiO2 system is not tetrahedrally coordinated by oxygen, which is supported by differences in 57Fe Mössbauer spectra of the glasses. Specifically, this study confirms that the 57Fe Mössbauer spectra exhibit an area asymmetry and higher values of isomer shift of the ferric doublet that vary systematically with composition and temperature (this study; Dingwell and Virgo, 1987, 1988). These observations are consistent with a number of other lines of evidence (e.g., homogeneous redox equilibria, Dickenson and Hess, 1986; viscosity, Dingwell and Virgo, 1987, 1988). Two species of ferric iron, varying in proportions with temperature, composition and redox state, are sufficient to describe the above observations. The presence of more than one coordination geometry for Fe3+ in low pressure silicate melts has several implications for igneous petrogenesis. The possible effects on compressibility, the pressure dependence of the redox ratio, and redox enthalpy are briefly noted.
Partial Molar Volumes of Aqua Ions from First Principles.
Wiktor, Julia; Bruneval, Fabien; Pasquarello, Alfredo
2017-08-08
Partial molar volumes of ions in water solution are calculated through pressures obtained from ab initio molecular dynamics simulations. The correct definition of pressure in charged systems subject to periodic boundary conditions requires access to the variation of the electrostatic potential upon a change of volume. We develop a scheme for calculating such a variation in liquid systems by setting up an interface between regions of different density. This also allows us to determine the absolute deformation potentials for the band edges of liquid water. With the properly defined pressures, we obtain partial molar volumes of a series of aqua ions in very good agreement with experimental values.
Residual volume on land and when immersed in water: effect on percent body fat.
Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu
2006-08-01
There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. It has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volume measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853; P < 0.05). The limits of agreement for residual volumes in both conditions using Bland-Altman plots were -0.430 to 0.508 litres. This range was larger than the trial-to-trial error of residual volume on land (-0.260 to 0.304 litres). Moreover, the relationship between percent body fat computed using residual volume measured in both conditions was very good for both sexes (males: r = 0.902; females: r = 0.869, P < 0.0001), and the errors were approximately -6 to 4% (limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.
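The limits-of-agreement calculation referred to above follows the usual Bland-Altman recipe (mean difference ± 1.96 SD); the residual volume values in the sketch are invented for illustration.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement (mean difference ± 1.96 SD)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative residual volumes (litres) measured on land vs. in water
rv_land  = [1.21, 1.35, 1.10, 1.48, 1.02, 1.30, 1.25, 1.40]
rv_water = [1.18, 1.42, 1.05, 1.55, 1.10, 1.27, 1.33, 1.36]
bias, (lo, hi) = limits_of_agreement(rv_land, rv_water)
print(f"bias = {bias:+.3f} L, limits of agreement = ({lo:+.3f}, {hi:+.3f}) L")
```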
Leithner, Christoph; Füchtemeier, Martina; Jorks, Devi; Mueller, Susanne; Dirnagl, Ulrich; Royl, Georg
2015-11-01
Despite standardization of experimental stroke models, final infarct sizes after middle cerebral artery occlusion (MCAO) vary considerably. This introduces uncertainties in the evaluation of drug effects on stroke. Magnetic resonance imaging may detect variability of surgically induced ischemia before treatment and thus improve treatment effect evaluation. MCAO of 45 and 90 minutes induced brain infarcts in 83 mice. During, and 3 and 6 hours after MCAO, we performed multiparametric magnetic resonance imaging. We evaluated time courses of cerebral blood flow, apparent diffusion coefficient (ADC), T1, T2, accuracy of infarct prediction strategies, and impact on statistical evaluation of experimental stroke studies. ADC decreased during MCAO but recovered completely on reperfusion after 45 and partially after 90-minute MCAO, followed by a secondary decline. ADC lesion volumes during MCAO or at 6 hours after MCAO largely determined final infarct volumes for 90 but not for 45 minutes MCAO. The majority of chance findings of final infarct volume differences in random group allocations of animals were associated with significant differences in early ADC lesion volumes for 90, but not for 45-minute MCAO. The prediction accuracy of early magnetic resonance imaging for infarct volumes depends on timing of magnetic resonance imaging and MCAO duration. Variability of the posterior communicating artery in C57Bl6 mice contributes to differences in prediction accuracy between short and long MCAO. Early ADC imaging may be used to reduce errors in the interpretation of post MCAO treatment effects on stroke volumes. © 2015 American Heart Association, Inc.
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibrations procedures are described that can detect and/or correct other errors.
Xu, Z N; Wang, S Y
2015-02-01
To improve the accuracy in the calculation of dynamic contact angle for drops on the inclined surface, a significant number of numerical drop profiles on the inclined surface with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle/drop volume/contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error in the inclined plane method to less than a certain value even for different types of liquids.
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimates, the contribution of each tissue type within the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including errors from the registration algorithm and from the imaging itself in the ASL and structural acquisitions. Therefore, estimation of the mixture percentages directly from ASL data is greatly needed. Under the assumptions that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to its initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K
2017-05-01
In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
NASA Astrophysics Data System (ADS)
Lerch, P.; Seifert, R.; Malfait, W. J.; Sanchez-Valle, C.
2012-12-01
Carbon dioxide is the second most abundant volatile in magmatic systems and plays an important role in many magmatic processes, e.g. partial melting, volatile saturation, outgassing. Despite this relevance, the volumetric properties of carbon-bearing silicates at relevant pressure and temperature conditions remain largely unknown because of considerable experimental difficulties associated with in situ measurements. Density and elasticity measurements on quenched glasses can provide an alternative source of information. For dissolved water, such measurements indicate that the partial molar volume is independent of compositions at ambient pressure [1], but the partial molar compressibility is not [2, 3]. Thus the partial molar volume of water may depend on melt composition at elevated pressure. For dissolved CO2, no such data is available. In order to constrain the effect of magma composition on the partial molar volume and compressibility of dissolved carbon, we determined the density and elasticity for three series of carbon-bearing basalt, phonolite and rhyolite glasses, quenched from 3.5 GPa and relaxed at ambient pressure. The CO2 content varies between 0 and 3.90 wt% depending on the glass composition. Glass densities were determined using the sink/float method in a diiodomethane (CH2I2) - acetone mixture. Brillouin measurements were conducted on relaxed and unrelaxed silicate glasses in platelet geometry to determine the compressional (VP) and shear (VS) wave velocities and elastic moduli. The partial molar volume of CO2 in rhyolite, phonolite and basalt glasses is 25.4 ± 0.9, 22.1 ± 0.6 and 26.6 ± 1.8 cm3/mol, respectively. Thus, unlike for dissolved water, the partial molar volume of CO2 displays a resolvable compositional effect. Although the composition and CO2/carbonate speciation of the phonolite glasses is intermediate between that of the rhyolite and basalt glasses, the molar volume is not. Similar to dissolved water, the partial molar bulk modulus of CO2 displays a strong compositional effect. If these compositional dependencies persist in the analogue melts, the partial molar volume of dissolved CO2 will depend on melt composition, both at low and elevated pressure. Thus, for CO2-bearing melts, a full quantitative understanding of density dependent magmatic processes, such as crystal fractionation, magma mixing and melt extraction will require in situ measurements for a range of melt compositions. [1] Richet, P. et al., 2000, Contrib Mineral Petrol, 138, 337-347. [2] Malfait et al. 2011, Am. Mineral. 96, 1402-1409. [3] Whittington et al., 2012, Am. Mineral. 97, 455-467.
Wallon, G; Bonnet, A; Guérin, C
2013-06-01
Tidal volume (V(T)) must be accurately delivered by anaesthesia ventilators in the volume-controlled ventilation mode in order for lung protective ventilation to be effective. However, the impact of fresh gas flow (FGF) and lung mechanics on delivery of V(T) by the newest anaesthesia ventilators has not been reported. We measured delivered V(T) (V(TI)) from four anaesthesia ventilators (Aisys™, Flow-i™, Primus™, and Zeus™) on a pneumatic test lung set with three combinations of lung compliance (C, ml cm H2O(-1)) and resistance (R, cm H2O litre(-1) s(-2)): C60R5, C30R5, C60R20. For each CR, three FGF rates (0.5, 3, 10 litre min(-1)) were investigated at three set V(T)s (300, 500, 800 ml) and two values of PEEP (0 and 10 cm H2O). The volume error = [(V(TI) - V(Tset))/V(Tset)] ×100 was computed in body temperature and pressure-saturated conditions and compared using analysis of variance. For each CR and each set V(T), the absolute value of the volume error significantly declined from Aisys™ to Flow-i™, Zeus™, and Primus™. For C60R5, these values were 12.5% for Aisys™, 5% for Flow-i™ and Zeus™, and 0% for Primus™. With an increase in FGF, absolute values of the volume error increased only for Aisys™ and Zeus™. However, in C30R5, the volume error was minimal at mid-FGF for Aisys™. The results were similar at PEEP 10 cm H2O. Under experimental conditions, the volume error differed significantly between the four new anaesthesia ventilators tested and was influenced by FGF, although this effect may not be clinically relevant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing
2012-07-01
Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans where true zero tumor volume change is unequivocally known and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69%, 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with increase in tumor volume and decreased with the increase in the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y; Fullerton, G; Goins, B
Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
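The ellipsoid volume formula and the regression comparison described above can be sketched as follows; the arrays are illustrative placeholders, not measurements from the study:

```python
import numpy as np

# Sketch of the volume formula and regression comparison used above.
# The example arrays are made up for illustration only.
def ellipsoid_volume(a_mm, b_mm, c_mm):
    """V = (pi/6) * a * b * c for three perpendicular maximum diameters."""
    return (np.pi / 6.0) * a_mm * b_mm * c_mm

reference = np.array([50.0, 180.0, 420.0, 950.0])   # e.g., in-air micro-CT volumes (mm^3)
measured  = np.array([52.0, 195.0, 410.0, 1030.0])  # e.g., in vivo image-based volumes (mm^3)

# Least-squares slope of measured vs. reference volume.
slope, intercept = np.polyfit(reference, measured, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.1f} mm^3")
```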
Li, Wei; Zhang, Xuan; Zheng, Kaiyi; Du, Yiping; Cap, Peng; Sui, Tao; Geng, Jinpei
2015-01-01
A fluidized bed enrichment technique was developed to improve the sensitivity of near infrared (NIR) spectroscopy, offering rapid analysis of large solution volumes. D301 resin was used as an adsorption material to preconcentrate β-naphthalenesulfonic acid from solutions in a concentration range of 2.0-100.0 μg/mL, and NIR spectra were measured directly from the β-naphthalenesulfonic acid adsorbed on the material. An improved partial least squares (PLS) model was attained with the aid of multiplicative scatter correction pretreatment and a stability competitive adaptive reweighted sampling wavenumber selection method. The root mean square error of cross validation was 1.87 μg/mL with 7 PLS factors. An independent test set was used to assess the model, with the relative error (RE) in an acceptable range of 0.46 to 10.03% and a mean RE of 3.72%. This study confirmed the viability of the proposed method for the measurement of low concentrations of β-naphthalenesulfonic acid in water.
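A generic sketch of a PLS calibration with cross-validation in the spirit of the method above; it uses synthetic spectra rather than the NIR data and omits the scatter-correction and wavenumber-selection steps (all names and values are assumptions for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic "spectra" whose intensity scales with concentration, plus noise.
rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 300
concentration = rng.uniform(2.0, 100.0, n_samples)            # micrograms per mL
spectra = np.outer(concentration, rng.normal(1.0, 0.05, n_wavenumbers))
spectra += rng.normal(0.0, 0.5, spectra.shape)                 # measurement noise

pls = PLSRegression(n_components=7)                            # 7 PLS factors, as in the abstract
predicted = cross_val_predict(pls, spectra, concentration, cv=10).ravel()
rmsecv = np.sqrt(np.mean((predicted - concentration) ** 2))    # root mean square error of CV
print(f"RMSECV = {rmsecv:.2f} ug/mL")
```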
The Kaon B-parameter in mixed action chiral perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubin, C.; /Columbia U.; Laiho, Jack
2006-09-01
We calculate the kaon B-parameter, B_K, in chiral perturbation theory for a partially quenched, mixed action theory with Ginsparg-Wilson valence quarks and staggered sea quarks. We find that the resulting expression is similar to that in the continuum, and in fact has only two additional unknown parameters. At one-loop order, taste-symmetry violations in the staggered sea sector only contribute to flavor-disconnected diagrams by generating an O(a^2) shift to the masses of taste-singlet sea-sea mesons. Lattice discretization errors also give rise to an analytic term which shifts the tree-level value of B_K by an amount of O(a^2). This term, however, is not strictly due to taste-breaking, and is therefore also present in the expression for B_K for pure Ginsparg-Wilson lattice fermions. We also present a numerical study of the mixed B_K expression in order to demonstrate that both discretization errors and finite volume effects are small and under control on the MILC improved staggered lattices.
Kaon B-parameter in mixed action chiral perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubin, C.; Laiho, Jack; Water, Ruth S. van de
2007-02-01
We calculate the kaon B-parameter, B_K, in chiral perturbation theory for a partially quenched, mixed-action theory with Ginsparg-Wilson valence quarks and staggered sea quarks. We find that the resulting expression is similar to that in the continuum, and in fact has only two additional unknown parameters. At 1-loop order, taste-symmetry violations in the staggered sea sector only contribute to flavor-disconnected diagrams by generating an O(a^2) shift to the masses of taste-singlet sea-sea mesons. Lattice discretization errors also give rise to an analytic term which shifts the tree-level value of B_K by an amount of O(a^2). This term, however, is not strictly due to taste breaking, and is therefore also present in the expression for B_K for pure Ginsparg-Wilson lattice fermions. We also present a numerical study of the mixed B_K expression in order to demonstrate that both discretization errors and finite volume effects are small and under control on the MILC improved staggered lattices.
Möhlhenrich, Stephan Christian; Heussen, Nicole; Peters, Florian; Steiner, Timm; Hölzle, Frank; Modabber, Ali
2015-11-01
The morphometric analysis of the maxillary sinus was recently presented as a helpful instrument for sex determination. The aim of the present study was to examine the volume and surface of the fully dentate, partially edentulous, and completely edentulous maxillary sinus depending on the sex. Computed tomography data from 276 patients were imported in DICOM format via special virtual planning software, and surfaces (mm2) and volumes (mm3) of the maxillary sinuses were measured. In sex-specific comparisons (women vs men), statistically significant differences for the mean maxillary sinus volume and surface were found between fully dentate (volume, 13,267.77 mm3 vs 16,623.17 mm3, P < 0.0001; surface, 3480.05 mm2 vs 4100.83 mm2, P < 0.0001) and partially edentulous (volume, 10,577.35 mm3 vs 14,608.10 mm3, P = 0.0002; surface, 2980.11 mm2 vs 3797.42 mm2, P < 0.0001) or completely edentulous sinuses (volume, 11,200.99 mm3 vs 15,382.29 mm3, P < 0.0001; surface, 3118.32 mm2 vs 3877.25 mm2, P < 0.0001). For males, statistically significant differences in mean values were found between fully dentate and partially edentulous (volume, P = 0.0022; surface, P = 0.0048) maxillary sinuses. Between the sexes, no differences were found only for female and male partially dentate and fully edentulous sinuses (2 teeth missing) and between partially edentulous sinuses in women and men (1 tooth vs 2 teeth missing). With a corresponding software program, it is possible to analyze the maxillary sinus precisely. The dentition influences the volume and surface of the pneumatized maxillary sinus. Therefore, sex determination is possible by analysis of the maxillary sinus even with the increase in pneumatization.
Partial harvesting of hardwood sawtimber in Kentucky and Tennessee, 2002–2014
Thomas J. Brandeis
2017-01-01
Partial harvesting is the predominant but not exclusive cutting treatment applied to the hardwood forests of Kentucky and Tennessee. Hardwood harvest in Kentucky showed a slight downward trend from 2006 to 2014, with most of the volume harvested in partial logging operations. Tennessee did not show this same downward trend, and the amount of hardwood volume harvested...
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
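A compact sketch of the classical FAST first-order calculation summarized above: parameters are sampled along a search curve, and partial variances are taken from the Fourier coefficients at each parameter's frequency and its harmonics. The frequency set, sample size, and toy model are illustrative assumptions, not those used in the paper:

```python
import numpy as np

def fast_first_order(model, omegas, n_samples=2049, n_harmonics=4):
    # Evenly spaced points of the search-curve variable s in (-pi, pi).
    s = np.pi * (2.0 * np.arange(1, n_samples + 1) - n_samples - 1) / n_samples
    # Search curve mapping each parameter into [0, 1] with its own frequency.
    x = 0.5 + (1.0 / np.pi) * np.arcsin(np.sin(np.outer(s, omegas)))
    y = model(x)

    def spectral_power(freq):
        # Squared Fourier coefficients of the output at a given frequency.
        a = np.mean(y * np.cos(freq * s))
        b = np.mean(y * np.sin(freq * s))
        return a * a + b * b

    total_variance = np.var(y)
    partial = [2.0 * sum(spectral_power(p * w) for p in range(1, n_harmonics + 1))
               for w in omegas]
    return np.array(partial) / total_variance   # first-order sensitivity indices

# Toy additive model y = x1 + 2*x2 + 0.5*x3 on [0, 1]^3; the frequency set is illustrative.
indices = fast_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2],
                           omegas=np.array([11, 21, 29]))
print(indices)  # the second parameter carries the largest first-order partial variance
```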
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
Dörnberger, V; Dörnberger, G
1987-01-01
Comparative volumetry was performed on 99 testes from corpses (age at death between 26 and 86 years). With the surrounding capsules left in place (without scrotal skin and tunica dartos), the testes were measured via real-time sonography in a waterbath (7.5 MHz linear scan); afterwards, length, breadth and height were measured with a sliding calibre, the largest diameter (the length) of the testis was determined with Schirren's circle, and finally the size of the testis was measured with Prader's orchidometer. All testes were then surgically exposed and their volume was determined by fluid displacement according to Archimedes' principle. Whereas a random mean error of 7% must be accepted for Archimedes' principle, sonographic determination of the volume showed a random mean error of 15%. Although the accuracy of measurement increases with increasing volume, both methods should be used with caution for volumes below 4 ml, since the possibility of error is rather great. With Prader's orchidometer the measured volumes were on average higher (+27%), with a random mean error of 19.5%. With Schirren's circle the obtained mean value was even higher (+52%) in comparison to the "real" volume from Archimedes' principle, with a random mean error of 19%. The sliding-calibre measurements of the testes within their retained capsules can be optimized by applying a correction factor f(sliding calibre) = 0.39 when calculating the testis volume as an ellipsoid. This yields the same mean value as Archimedes' principle, with a standard mean error of only 9%. If the correction factor of real-time sonography of the testis, f(sono) = 0.65, is applied instead, the mean value of the sliding-calibre measurements would be 68.8% too high, with a standard mean error of 20.3%. For sliding-calibre measurements, the testis volume should therefore be calculated as an ellipsoid with the smaller factor f(sliding calibre) = 0.39, because in this way the retained capsules of the testis and the epididymis are taken into account.
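A small sketch of the recommended ellipsoid calculation with the f = 0.39 sliding-calibre correction factor reported above; the caliper readings are invented for illustration:

```python
# Sketch of the ellipsoid volume calculation with the correction factor
# reported above; the caliper readings are invented for illustration.
def testis_volume_ml(length_cm, breadth_cm, height_cm, f=0.39):
    """Volume estimate V = f * L * B * H (f = 0.39 for sliding-calibre data
    on testes measured within their retained capsules)."""
    return f * length_cm * breadth_cm * height_cm

print(testis_volume_ml(4.5, 3.0, 2.5))  # ~13.2 ml with the 0.39 factor
```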
Acea Nebril, B; Gómez Freijoso, C
1997-03-01
To determine the accuracy of bibliographic citation in Revista Española de Enfermedades Digestivas (REED) and compare it with other Spanish and international journals. We reviewed all 1995 volumes of the REED and randomly selected 100 references from these volumes. Nine citations of non-journal articles were excluded and the remaining 91 citations were carefully scrutinized. Each original article was compared for author's name, title of article, name of journal, volume number, year of publication and pages. Some type of error was detected in 61.6% of the references, and in 3 cases (3.3%) the errors prevented location of the original article. Errors were found in authors (37.3%), article title (16.4%), pages (6.6%), journal title (4.4%), volume (2.2%) and year (1%). A single error was found in 42 citations, 2 errors in 13, and 3 errors in 1. REED's rate of error in references is comparable to the rates of other Spanish and international journals. Authors should exercise more care in preparing bibliographies and should invest more effort in verifying quoted references.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1979-01-01
The partial coherence analysis method for noise source/path determination is summarized and the application to a two input, single output system with coherence between the inputs is illustrated. The augmentation of the calculations on a digital computer interfaced with a two channel, real time analyzer is also discussed. The results indicate possible sources of error in the computations and suggest procedures for avoiding these errors.
ERIC Educational Resources Information Center
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
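The correction itself is a one-line formula; a minimal sketch with illustrative numbers:

```python
import math

# Minimal sketch of Spearman's correction for attenuation: the observed
# correlation is divided by the square root of the product of the two
# reliabilities. The numbers are illustrative.
def disattenuated_correlation(r_xy, reliability_x, reliability_y):
    return r_xy / math.sqrt(reliability_x * reliability_y)

print(disattenuated_correlation(0.42, 0.80, 0.70))  # ~0.56
```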
New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content
Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan
2012-01-01
Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293
Measuring Parameters of Massive Black Hole Binaries with Partially-Aligned Spins
NASA Technical Reports Server (NTRS)
Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.
2010-01-01
It is important to understand how well the gravitational-wave observatory LISA can measure parameters of massive black hole binaries. It has been shown that including spin precession in the waveform breaks degeneracies and produces smaller expected parameter errors than a simpler, precession-free analysis. However, recent work has shown that gas in binaries can partially align the spins with the orbital angular momentum, thus reducing the precession effect. We show how this degrades the earlier results, producing more pessimistic errors in gaseous mergers. However, we then add higher harmonics to the signal model; these also break degeneracies, but they are not affected by the presence of gas. The harmonics often restore the errors in partially-aligned binaries to the same as, or better than, those that are obtained for fully precessing binaries with no harmonics. Finally, we investigate what LISA measurements of spin alignment can tell us about the nature of gas around a binary.
Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B
2017-10-01
In the perioperative period, anesthesiologists and postanesthesia care unit (PACU) nurses routinely prepare and administer small-volume IV injections, yet the accuracy of delivered medication volumes in this setting has not been described. In this ex vivo study, we sought to characterize the degree to which small-volume injections (≤0.5 mL) deviated from the intended injection volumes among a group of pediatric anesthesiologists and pediatric postanesthesia care unit (PACU) nurses. We hypothesized that as the intended injection volumes decreased, the deviation from those intended injection volumes would increase. Ten attending pediatric anesthesiologists and 10 pediatric PACU nurses each performed a series of 10 injections into a simulated patient IV setup. Practitioners used separate 1-mL tuberculin syringes with removable 18-gauge needles (Becton-Dickinson & Company, Franklin Lakes, NJ) to aspirate 5 different volumes (0.025, 0.05, 0.1, 0.25, and 0.5 mL) of 0.25 mM Lucifer Yellow (LY) fluorescent dye constituted in saline (Sigma Aldrich, St. Louis, MO) from a rubber-stoppered vial. Each participant then injected the specified volume of LY fluorescent dye via a 3-way stopcock into IV tubing with free-flowing 0.9% sodium chloride (10 mL/min). The injected volume of LY fluorescent dye and 0.9% sodium chloride then drained into a collection vial for laboratory analysis. Microplate fluorescence wavelength detection (Infinite M1000; Tecan, Mannedorf, Switzerland) was used to measure the fluorescence of the collected fluid. Administered injection volumes were calculated based on the fluorescence of the collected fluid using a calibration curve of known LY volumes and associated fluorescence. To determine whether deviation of the administered volumes from the intended injection volumes increased at lower injection volumes, we compared the proportional injection volume error (log_e[administered volume/intended volume]) for each of the 5 injection volumes using a linear regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P < .0018). The exploratory analysis revealed no significant difference in the standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
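A sketch of the two computations described above, a linear calibration curve from fluorescence to volume and the log proportional error, with invented numbers standing in for the study's measurements:

```python
import numpy as np

# Sketch of the two calculations described above: (1) converting fluorescence
# to administered volume with a linear calibration curve, and (2) the log
# proportional error. All numbers are invented for illustration.
known_volumes = np.array([0.025, 0.05, 0.1, 0.25, 0.5])         # mL of LY dye
fluorescence  = np.array([1200., 2450., 4900., 12300., 24600.])  # arbitrary units

slope, intercept = np.polyfit(fluorescence, known_volumes, 1)    # calibration curve

def administered_volume(sample_fluorescence):
    return slope * sample_fluorescence + intercept

def log_proportional_error(administered_ml, intended_ml):
    """log_e(administered / intended); 0 means a perfect injection."""
    return np.log(administered_ml / intended_ml)

vol = administered_volume(2300.0)
print(vol, log_proportional_error(vol, 0.05))
```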
Error-related brain activity and error awareness in an error classification paradigm.
Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E
2016-10-01
Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.
Improvement in mood and ideation associated with increase in right caudate volume.
Starkman, Monica N; Giordani, Bruno; Gebarski, Stephen S; Schteingart, David E
2007-08-01
The basal ganglia, particularly caudate, are hypothesized to play a role in affective and obsessive-compulsive disorders. The depressive syndrome is a feature of untreated Cushing's disease. The objective of this study was to test the hypothesis that after treatment of Cushing's disease reduces elevated cortisol, improvement in mood and related ideations are associated with increase in caudate volume. In this longitudinal, interventional study of 23 patients with Cushing's disease, 24-hour urinary free cortisol, structural magnetic resonance imaging and behavioral measures were obtained prior to treatment and approximately one year after pituitary microadenomectomy. Five SCL-90-R subscales measuring change in mood, related ideations and physical symptoms were utilized. Partial correlations (adjusted for age and time since surgery) showed change in caudate, but not hippocampal, volume was significantly associated with change in behavioral SCL-90-R subscales, indicating selectivity for structure. Right but not left caudate showed associations, suggesting selectivity for lateralization. Right caudate volume increase was significantly associated with decreases in Depression, Anxiety, Obsessive-Compulsive, and Paranoid scores, but not with Somatization (physical symptoms), indicating specificity for behavioral but not physical variables. A limitation is that relatively low-resolution scans were utilized. Although most likely not diminishing the significant findings, less sensitive methodology could lead to an increased probability of a type 2 error. These findings support the concept that caudate, and likely right caudate, participates in human brain circuitry regulating mood.
Underestimation of Low-Dose Radiation in Treatment Planning of Intensity-Modulated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Si Young; Liu, H. Helen; Mohan, Radhe
2008-08-01
Purpose: To investigate potential dose calculation errors in the low-dose regions and identify causes of such errors for intensity-modulated radiotherapy (IMRT). Methods and Materials: The IMRT treatment plans of 23 patients with lung cancer and mesothelioma were reviewed. Of these patients, 15 had severe pulmonary complications after radiotherapy. Two commercial treatment-planning systems (TPSs) and a Monte Carlo system were used to calculate and compare dose distributions and dose-volume parameters of the target volumes and critical structures. The effect of tissue heterogeneity, multileaf collimator (MLC) modeling, beam modeling, and other factors that could contribute to the differences in IMRT dose calculations were analyzed. Results: In the commercial TPS-generated IMRT plans, dose calculation errors primarily occurred in the low-dose regions of IMRT plans (<50% of the radiation dose prescribed for the tumor). Although errors in the dose-volume histograms of the normal lung were small (<5%) above 10 Gy, underestimation of dose <10 Gy was found to be up to 25% in patients with mesothelioma or large target volumes. These errors were found to be caused by inadequate modeling of MLC transmission and leaf scatter in commercial TPSs. The degree of low-dose errors depends on the target volumes and the degree of intensity modulation. Conclusions: Secondary radiation from MLCs contributes a significant portion of low dose in IMRT plans. Dose underestimation could occur in conventional IMRT dose calculations if such low-dose radiation is not properly accounted for.
Misawa, M; Inamura, Y; Hosaka, D; Yamamuro, O
2006-08-21
Quasielastic neutron scattering measurements have been made for 1-propanol-water mixtures in a range of alcohol concentration from 0.0 to 0.167 in mole fraction at 25 degrees C. Fraction alpha of water molecules hydrated to fractal surface of alcohol clusters in 1-propanol-water mixture was obtained as a function of alcohol concentration. Average hydration number N(ws) of 1-propanol molecule is derived from the value of alpha as a function of alcohol concentration. By extrapolating N(ws) to infinite dilution, we obtain values of 12-13 as hydration number of isolated 1-propanol molecule. A simple interpretation of structural origin of anomalous excess partial molar volume of water is proposed and as a result a simple equation for the excess partial molar volume is deduced in terms of alpha. Calculated values of the excess partial molar volumes of water and 1-propanol and the excess molar volume of the mixture are in good agreement with experimental values.
NASA Technical Reports Server (NTRS)
Neumann, Maxim; Hensley, Scott; Lavalle, Marco; Ahmed, Razi
2013-01-01
This paper concerns forest remote sensing using JPL's multi-baseline polarimetric interferometric UAVSAR data. It presents exemplary results and analyzes the possibilities and limitations of using SAR Tomography and Polarimetric SAR Interferometry (PolInSAR) techniques for the estimation of forest structure. Performance and error indicators for the applicability and reliability of the used multi-baseline (MB) multi-temporal (MT) PolInSAR random volume over ground (RVoG) model are discussed. Experimental results are presented based on JPL's L-band repeat-pass polarimetric interferometric UAVSAR data over temperate and tropical forest biomes in the Harvard Forest, Massachusetts, and in the La Amistad Park, Panama and Costa Rica. The results are partially compared with ground field measurements and with air-borne LVIS lidar data.
NASA Astrophysics Data System (ADS)
Deosarkar, S. D.; Tawde, P. D.; Zinjade, A. B.; Shaikh, A. I.
2015-09-01
Density (ρ) and viscosity (η) of aqueous hippuric acid (HA) solutions containing LiCl and MnCl2 · 4H2O have been studied at 303.15 K in order to understand the volumetric and viscometric behavior of these systems. Apparent molar volumes (φ_v) of the salts were calculated from the density data and fitted to Masson's relation, and the partial molar volumes at infinite dilution (φ_v^0) were determined. Relative viscosity data were used to determine the viscosity A and B coefficients using the Jones-Dole relation. The partial molar volumes and viscosity coefficients have been discussed in terms of ion-solvent interactions and overall structural fitting in solution.
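A sketch of the standard working equations behind this kind of analysis (textbook forms of the apparent molar volume, Masson, and Jones-Dole relations); the data points and fitted values are illustrative, not those of the study:

```python
import numpy as np

# Standard apparent molar volume from density data:
#   phi_v = 1000*(rho0 - rho)/(m*rho*rho0) + M/rho
# with densities in g/cm^3, molality m in mol/kg, molar mass M in g/mol -> cm^3/mol.
def apparent_molar_volume(density, density0, molality, molar_mass):
    return 1000.0 * (density0 - density) / (molality * density * density0) + molar_mass / density

# Example call with invented densities (e.g., LiCl, M = 42.39 g/mol, in water).
print(apparent_molar_volume(0.9983, 0.9971, 0.05, 42.39))

# Masson relation: phi_v = phi_v0 + S_v * sqrt(c); fit by linear regression in sqrt(c).
c = np.array([0.05, 0.10, 0.20, 0.40])            # mol/L (illustrative)
phi_v = np.array([24.8, 24.5, 24.1, 23.6])        # cm^3/mol (illustrative)
S_v, phi_v0 = np.polyfit(np.sqrt(c), phi_v, 1)

# Jones-Dole: (eta/eta0 - 1)/sqrt(c) = A + B*sqrt(c); fitted the same way.
eta_rel = np.array([1.010, 1.016, 1.025, 1.040])
B, A = np.polyfit(np.sqrt(c), (eta_rel - 1.0) / np.sqrt(c), 1)
print(phi_v0, S_v, A, B)
```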
NASA Astrophysics Data System (ADS)
Bijnens, Johan; Relefors, Johan
2017-12-01
We calculate vector-vector correlation functions at two loops using partially quenched chiral perturbation theory including finite volume effects and twisted boundary conditions. We present expressions for the flavor neutral cases and the flavor charged case with equal masses. Using these expressions we give an estimate for the ratio of disconnected to connected contributions for the strange part of the electromagnetic current. We give numerical examples for the effects of partial quenching, finite volume and twisting and suggest the use of different twists to check the size of finite volume effects. The main use of this work is expected to be for lattice QCD calculations of the hadronic vacuum polarization contribution to the muon anomalous magnetic moment.
Mikkelsen, Mark; Singh, Krish D; Brealy, Jennifer A; Linden, David E J; Evans, C John
2016-11-01
The quantification of γ-aminobutyric acid (GABA) concentration using localised MRS suffers from partial volume effects related to differences in the intrinsic concentration of GABA in grey (GM) and white (WM) matter. These differences can be represented as a ratio between intrinsic GABA in GM and WM: r_M. Individual differences in GM tissue volume can therefore potentially drive apparent concentration differences. Here, a quantification method that corrects for these effects is formulated and empirically validated. Quantification using tissue water as an internal concentration reference has been described previously. Partial volume effects attributed to r_M can be accounted for by incorporating into this established method an additional multiplicative correction factor based on measured or literature values of r_M weighted by the proportion of GM and WM within tissue-segmented MRS volumes. Simulations were performed to test the sensitivity of this correction using different assumptions of r_M taken from previous studies. The tissue correction method was then validated by applying it to an independent dataset of in vivo GABA measurements using an empirically measured value of r_M. It was shown that incorrect assumptions of r_M can lead to overcorrection and inflation of GABA concentration measurements quantified in volumes composed predominantly of WM. For the independent dataset, GABA concentration was linearly related to GM tissue volume when only the water signal was corrected for partial volume effects. Performing a full correction that additionally accounts for partial volume effects ascribed to r_M successfully removed this dependence. With an appropriate assumption of the ratio of intrinsic GABA concentration in GM and WM, GABA measurements can be corrected for partial volume effects, potentially leading to a reduction in between-participant variance, increased power in statistical tests and better discriminability of true effects. Copyright © 2016 John Wiley & Sons, Ltd.
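One plausible form of such a composition-based correction, written only to illustrate the idea of weighting the voxel's GM/WM fractions by an assumed intrinsic ratio r_M; the exact expression derived in the paper is not reproduced here:

```python
# Illustrative sketch of a partial-volume correction of the kind described
# above: the voxel's GM/WM composition and an assumed intrinsic GM/WM
# concentration ratio r_M are combined into a multiplicative correction
# factor. This is a generic form for illustration, not the paper's formula.
def tissue_corrected_gaba(gaba_water_scaled, f_gm, f_wm, r_m,
                          group_mean_f_gm=0.5, group_mean_f_wm=0.5):
    """Scale a water-referenced GABA estimate so that voxels with different
    GM/WM fractions are expressed relative to an average tissue composition."""
    voxel_weight = f_gm * r_m + f_wm            # relative GABA expected in this voxel
    reference_weight = group_mean_f_gm * r_m + group_mean_f_wm
    return gaba_water_scaled * reference_weight / voxel_weight

# Example: a WM-heavy voxel (30% GM, 70% WM) with r_M = 2 is scaled up.
print(tissue_corrected_gaba(1.8, f_gm=0.3, f_wm=0.7, r_m=2.0))
```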
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method to quantify the uncertainty of VCs via a confidence interval based on the truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis-related theories of geographic information science.
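A minimal sketch of a grid-DEM volume calculation with the trapezoidal double rule, using a Gauss-type synthetic surface as a stand-in for the simulated DEMs (grid spacing and surface parameters are arbitrary assumptions):

```python
import numpy as np

# Trapezoidal double rule: integrate along rows (x), then along columns (y).
def dem_volume_trapezoidal(z, dx, dy):
    """Volume between the gridded surface z(x, y) and the z = 0 plane."""
    col_integrals = dx * (z[:, :-1] + z[:, 1:]).sum(axis=1) / 2.0
    return dy * (col_integrals[:-1] + col_integrals[1:]).sum() / 2.0

# Gauss-type synthetic surface on a regular grid (illustrative stand-in for a DEM).
x = np.linspace(-3.0, 3.0, 121)
y = np.linspace(-3.0, 3.0, 121)
xx, yy = np.meshgrid(x, y)
z = np.exp(-(xx**2 + yy**2) / 2.0)

dx = x[1] - x[0]
volume = dem_volume_trapezoidal(z, dx, dx)
print(volume)  # close to the full-plane integral 2*pi ~ 6.283; the tails outside the grid are tiny
```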
The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.
NASA Technical Reports Server (NTRS)
Russell, J. M., III; Drayson, S. R.
1972-01-01
Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.
Prediction of stream volatilization coefficients
Rathbun, Ronald E.
1990-01-01
Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.
Note: Nonpolar solute partial molar volume response to attractive interactions with water.
Williams, Steven M; Ashbaugh, Henry S
2014-01-07
The impact of attractive interactions on the partial molar volumes of methane-like solutes in water is characterized using molecular simulations. Attractions account for a significant 20% volume drop between a repulsive Weeks-Chandler-Andersen and full Lennard-Jones description of methane interactions. The response of the volume to interaction perturbations is characterized by linear fits to our simulations and a rigorous statistical thermodynamic expression for the derivative of the volume to increasing attractions. While a weak non-linear response is observed, an average effective slope accurately captures the volume decrease. This response, however, is anticipated to become more non-linear with increasing solute size.
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
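A sketch of one common way to compute the quoted performance metrics (R2, bias, error, normalized error) for predicted versus reference OC; the exact definitions used in the paper may differ, and the numbers below are invented:

```python
import numpy as np

def evaluation_metrics(predicted, reference):
    residual = predicted - reference
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    bias = residual.mean()                            # mean signed difference
    error = np.mean(np.abs(residual))                 # mean absolute difference
    normalized_error = error / reference.mean()       # relative to the mean reference OC
    return r2, bias, error, normalized_error

pred = np.array([0.45, 1.10, 2.30, 0.80, 3.05])   # predicted OC, ug/m^3 (illustrative)
ref  = np.array([0.50, 1.00, 2.40, 0.75, 3.00])   # TOR reference OC, ug/m^3 (illustrative)
print(evaluation_metrics(pred, ref))
```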
NASA Astrophysics Data System (ADS)
Ali, Anwar; Ansari, Sana; Uzair, Sahar; Tasneem, Shadma; Nabi, Firdosa
2015-11-01
Densities ρ and ultrasonic speeds u for pure diethylene glycol, 1-butanol, 2-butanol, and 1,4-butanediol and for their binary mixtures over the entire composition range were measured at 298.15 K, 303.15 K, 308.15 K, and 313.15 K. Using these data, the excess molar volumes, V_m^E, deviations in isentropic compressibilities, Δk_s, apparent molar volumes, V_{φ,i}, partial molar volumes, V̄_{m,i}, and excess partial molar volumes, V̄_{m,i}^E, have been calculated over the entire composition range, and the excess partial molar volumes of the components at infinite dilution, V̄_{m,i}^{E,∞}, have also been calculated. The excess functions have been correlated using the Redlich-Kister equation at different temperatures. The variations of these derived parameters with composition and temperature are presented graphically.
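A sketch of the standard Redlich-Kister correlation used to smooth excess properties such as V_m^E; the data points are invented and the polynomial order is an arbitrary choice:

```python
import numpy as np

# Standard Redlich-Kister form for an excess property of a binary mixture:
#   V_E = x1 * x2 * sum_k A_k * (x1 - x2)**k
def redlich_kister_fit(x1, excess, order=3):
    x2 = 1.0 - x1
    basis = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, excess, rcond=None)
    return coeffs

x1 = np.array([0.1, 0.3, 0.5, 0.7, 0.9])                    # mole fraction of component 1
v_excess = np.array([-0.12, -0.28, -0.33, -0.25, -0.10])    # cm^3/mol (illustrative)
A = redlich_kister_fit(x1, v_excess)
print(A)   # A_0 ... A_3 of the Redlich-Kister expansion
```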
Imai, Takashi; Ohyama, Shusaku; Kovalenko, Andriy; Hirata, Fumio
2007-01-01
The partial molar volume (PMV) change associated with the pressure-induced structural transition of ubiquitin is analyzed by the three-dimensional reference interaction site model (3D-RISM) theory of molecular solvation. The theory predicts that the PMV decreases upon the structural transition, which is consistent with the experimental observation. The volume decomposition analysis demonstrates that the PMV reduction is primarily caused by the decrease in the volume of structural voids in the protein, which is partially canceled by the volume expansion due to the hydration effects. It is found from further analysis that the PMV reduction is ascribed substantially to the penetration of water molecules into a specific part of the protein. Based on the thermodynamic relation, this result implies that the water penetration causes the pressure-induced structural transition. It supports the water penetration model of pressure denaturation of proteins proposed earlier. PMID:17660257
SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
2014-06-01
Purpose: To evaluate uncertainties of cell surviving fractions reconstructed from tumor-volume variation curves during radiation therapy using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) have been calculated using a two-level cell population model which is based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions have been computed for different values of the cell surviving fraction from the practically important interval S_2 = 0.1-0.7 using the two-level cell population model. Results: The sensitivity function of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S_2 = 0.7 and then approached zero as S_2 approached zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S_2 is less than 20% in the range S_2 = 0.4-0.7. This result is important because the large values of S_2, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S_2 < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for evaluation of the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
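A small sketch of the direct-perturbation sensitivity calculation defined above, using a deliberately simplified exponential tumor-volume model in place of the two-level cell population model (which is not reproduced here):

```python
import numpy as np

# Sensitivity S(t) = [dV(t)/V(t)] / [dx/x] estimated by a direct perturbation
# of the parameter, here the surviving fraction of a toy volume model.
def tumor_volume(t_days, surviving_fraction, v0=1.0, doses=30):
    """Toy model: volume proportional to the surviving cell population
    after one fraction per day, up to a fixed number of fractions."""
    fractions_delivered = np.minimum(t_days, doses)
    return v0 * surviving_fraction ** fractions_delivered

def sensitivity(t_days, s2, rel_perturbation=0.01):
    v = tumor_volume(t_days, s2)
    v_pert = tumor_volume(t_days, s2 * (1.0 + rel_perturbation))
    return ((v_pert - v) / v) / rel_perturbation

t = np.arange(0, 40)
print(sensitivity(t, s2=0.7)[:5])   # sensitivity of V(t) to the surviving fraction S_2
```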
A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus
NASA Astrophysics Data System (ADS)
Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.
2017-11-01
The Selar crumenophthalmus, known in English as the big-eyed scad fish and locally as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with the forecasting of catch volumes of big-eyed scad fish for commercial consumption. The data used are quarterly catch volumes of big-eyed scad fish from 2002 to the first quarter of 2017. These actual data are available from the OpenSTAT database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze, and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model, and a hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad fish. Statistical errors such as the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of the big-eyed scad fish for the next few quarters.
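A sketch of a Zhang-style ARIMA-ANN hybrid of the kind described above: ARIMA captures the linear structure and a small neural network models the ARIMA residuals from their own lags. The series is simulated and the orders/architecture are illustrative, not the reported (2,1,2)(6:3:1) configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
catch = 100 + np.cumsum(rng.normal(0, 5, 60))          # synthetic quarterly catch volumes

# Stage 1: ARIMA for the linear component.
arima_fit = ARIMA(catch, order=(2, 1, 2)).fit()
residuals = arima_fit.resid

# Stage 2: small ANN trained to predict the next residual from its lags.
lags = 4
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
ann = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0).fit(X, y)

# One-step-ahead hybrid forecast = ARIMA forecast + ANN estimate of the next residual.
arima_forecast = arima_fit.forecast(steps=1)[0]
residual_forecast = ann.predict(residuals[-lags:].reshape(1, -1))[0]
print(arima_forecast + residual_forecast)

# Error metrics used to compare candidate models.
def mae(actual, predicted):
    return np.mean(np.abs(actual - predicted))

def rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))
```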
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, occurring anywhere in the process from test request to admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better characterize the reasons for sample rejection and their rates in certain test groups in our laboratory. This preliminary study was designed around the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), including an insufficient-specimen-volume error rate of 1.38%. Rejection rates for hemolysis, clotted specimens and insufficient sample volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, accounted for 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples from inpatients, and blood-drawing errors, especially insufficient specimen volume, in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D_98), clinical target volume (CTV) D_90, nodes D_90, cord D_2, and parotid D_50 and parotid mean dose were evaluated with respect to the plan used for treatment, for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic setup errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
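A minimal sketch of the random-setup-error simulation described above, convolving a synthetic 2-D fluence map with a Gaussian whose width equals the random setup error (pixel size and field shape are assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic 2-D fluence map of a single beam (idealized open field).
pixel_mm = 2.5
fluence = np.zeros((80, 80))
fluence[20:60, 25:55] = 1.0

# Convolve with the random setup-error probability density (Gaussian, sigma in mm).
sigma_mm = 3.0
blurred = gaussian_filter(fluence, sigma=sigma_mm / pixel_mm)

# The blurred fluence would then be fed back into the dose calculation to
# estimate the dose delivered under random setup uncertainty.
print(fluence.sum(), blurred.sum())   # the convolution preserves total fluence
```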
Density contrast sedimentation velocity for the determination of protein partial-specific volumes.
Brown, Patrick H; Balbo, Andrea; Zhao, Huaying; Ebel, Christine; Schuck, Peter
2011-01-01
The partial-specific volume of proteins is an important thermodynamic parameter required for the interpretation of data in several biophysical disciplines. Building on recent advances in the use of density variation sedimentation velocity analytical ultracentrifugation for the determination of macromolecular partial-specific volumes, we have explored a direct global modeling approach describing the sedimentation boundaries in different solvents with a joint differential sedimentation coefficient distribution. This takes full advantage of the influence of different macromolecular buoyancy on both the spread and the velocity of the sedimentation boundary. It should lend itself well to the study of interacting macromolecules and/or heterogeneous samples in microgram quantities. Model applications to three protein samples studied in either H(2)O, or isotopically enriched H(2) (18)O mixtures, indicate that partial-specific volumes can be determined with a statistical precision of better than 0.5%, provided signal/noise ratios of 50-100 can be achieved in the measurement of the macromolecular sedimentation velocity profiles. The approach is implemented in the global modeling software SEDPHAT.
Partial compensation interferometry measurement system for parameter errors of conicoid surface
NASA Astrophysics Data System (ADS)
Hao, Qun; Li, Tengfei; Hu, Yao; Wang, Shaopu; Ning, Yan; Chen, Zhuo
2018-06-01
Surface parameters, such as vertex radius of curvature and conic constant, are used to describe the shape of an aspheric surface. Surface parameter errors (SPEs) are deviations affecting the optical characteristics of an aspheric surface. Precise measurement of SPEs is critical in the evaluation of optical surfaces. In this paper, a partial compensation interferometry measurement system for SPE of a conicoid surface is proposed based on the theory of slope asphericity and the best compensation distance. The system is developed to measure the SPE-caused best compensation distance change and SPE-caused surface shape change and then calculate the SPEs with the iteration algorithm for accuracy improvement. Experimental results indicate that the average relative measurement accuracy of the proposed system could be better than 0.02% for the vertex radius of curvature error and 2% for the conic constant error.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
Efficient automatic OCR word validation using word partial format derivation and language model
NASA Astrophysics Data System (ADS)
Chen, Siyuan; Misra, Dharitri; Thoma, George R.
2010-01-01
In this paper we present an OCR validation module, implemented for the System for Preservation of Electronic Resources (SPER) developed at the U.S. National Library of Medicine. The module detects and corrects suspicious words in the OCR output of scanned textual documents through a procedure of deriving partial formats for each suspicious word, retrieving candidate words by partial-match search from lexicons, and comparing the joint probabilities of N-gram and OCR edit transformation corresponding to the candidates. The partial format derivation, based on OCR error analysis, efficiently and accurately generates candidate words from lexicons represented by ternary search trees. In our test case comprising a historic medico-legal document collection, this OCR validation module yielded the correct words with 87% accuracy and reduced the overall OCR word errors by around 60%.
Hori, Masatoshi; Suzuki, Kenji; Epstein, Mark L.; Baron, Richard L.
2011-01-01
The purpose was to evaluate the relationship between slice thickness and calculated volume in CT liver volumetry by comparing the results for images with various slice thicknesses, including three-dimensional images. Twenty adult potential liver donors (12 men, 8 women; mean age, 39 years; range, 24–64) underwent CT with a 64-section multi-detector row CT scanner after intravenous injection of contrast material. Four image sets with slice thicknesses of 0.625 mm, 2.5 mm, 5 mm, and 10 mm were used. First, a program developed in our laboratory for automated liver extraction was applied to the CT images, and the liver boundary was obtained automatically. Then, an abdominal radiologist reviewed all images on which the automatically extracted boundaries were superimposed, and edited the boundary on each slice to enhance the accuracy. Liver volumes were determined by counting the voxels within the liver boundary. Mean whole liver volumes estimated with CT were 1322.5 cm3 on 0.625-mm, 1313.3 cm3 on 2.5-mm, 1310.3 cm3 on 5-mm, and 1268.2 cm3 on 10-mm images. Volumes calculated for three-dimensional (0.625-mm-thick) images were significantly larger than those for thicker images (P<.0001). Partial liver volumes of the right lobe, left lobe, and lateral segment were also evaluated in a similar manner. The estimated maximum difference in calculated volumes of the lateral segment was −10.9 cm3 (−4.6%) between 0.625-mm and 5-mm images. In conclusion, liver volumes calculated on 2.5-mm or thicker images were significantly smaller than volumes calculated on three-dimensional images. If a maximum error of 5% in the calculated graft volume is within the range of having an insignificant clinical impact, 5-mm-thick images are acceptable for CT volumetry. If not, three-dimensional images could be essential. PMID:21850689
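The volumetry step described above amounts to counting voxels inside the edited liver boundary and scaling by the voxel volume; a minimal sketch with assumed in-plane spacing and illustrative voxel counts (not the study's data):

```python
# Assumed in-plane pixel spacing (mm) and illustrative liver voxel counts per slice thickness.
pixel_mm = 0.7
voxel_counts = {0.625: 4_100_000, 2.5: 1_020_000, 5.0: 508_000, 10.0: 246_000}  # thickness (mm) -> voxels

for thickness_mm, n_voxels in voxel_counts.items():
    voxel_cm3 = (pixel_mm * pixel_mm * thickness_mm) / 1000.0  # mm^3 per voxel -> cm^3
    print(f"{thickness_mm} mm slices: {n_voxels * voxel_cm3:.1f} cm^3")
```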
NASA Astrophysics Data System (ADS)
Dikkar, A. B.; Pethe, G. B.; Aswar, A. S.
2016-02-01
The speed of sound (u), density (ρ), and viscosity (η) of solutions of 2,4-dihydroxyacetophenone isonicotinoylhydrazone (DHAIH) have been measured in N,N-dimethylformamide and dimethyl sulfoxide at equidistant temperatures of 298.15, 303.15, 308.15, and 313.15 K. These data were used to calculate important ultrasonic and thermodynamic parameters such as the apparent molar volume (Vϕ) and apparent molar compressibility (Kϕ); the partial molar volume (Vϕ0) and partial molar compressibility (Kϕ0) were estimated from the values of Vϕ and Kϕ at infinite dilution. The partial molar expansion at infinite dilution (ϕE0) has also been calculated from the temperature dependence of the partial molar volume Vϕ0. The viscosity data have been analyzed using the Jones-Dole equation, and the viscosity B coefficients are calculated. The activation free energy has been calculated from the B coefficients and partial molar volume data. The results have been discussed in terms of the solute-solvent interactions occurring in solution, and it was found that DHAIH acts as a structure maker in the present systems.
SU-E-T-619: Comparison of CyberKnife Versus HDR (SAVI) for Partial Breast Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooij, R; Ding, X; Nagda, S
2014-06-15
Purpose: Compare SAVI plans and CyberKnife (CK) plans for the same accelerated course. Methods and Materials: Three SAVI patients were selected. Pre-SAVI CTs were used for CK planning. All prescriptions are 3400 cGy in 10 fractions BID. Max dose to skin and chest wall is 425 cGy. For SAVI, the PTV is a 1-cm expansion of the cavity minus the cavity. For CK, the CTV is a 1-cm expansion of the seroma, with a 2-mm margin. CK plans are normalized to SAVI, so that in both cases the 323-cGy isodose line covers the same percentage of the PTV. For CK, Fiducial/Synchrony tracking is used. Results: In the following, all doses are per fraction and results are averaged. The PTVs for the CK plans are 2.4 times larger than the corresponding SAVI PTVs. Nonetheless the CK plans meet all constraints and are superior to SAVI plans in several respects. Max skin dose for SAVI vs CK is 332 cGy vs 337 cGy. Max dose to chest wall is 252 cGy vs 286 cGy. The volume of lung over 125 cGy is 6.4 cc for SAVI and 2.5 cc for CK. Max heart dose is 60 cGy for SAVI and 83 cGy for CK. The volume of PTV receiving over 425 cGy is 49 cc for SAVI and 1.3 cc for CK. Max dose to the contralateral breast is 16 cGy for SAVI and 4.5 cGy for CK. Conclusion: CK PTVs are directly derived from the seroma. Corresponding SAVI PTVs tend to be much smaller. Dosimetrically, CK plans are equivalent or superior to SAVI plans despite the larger PTVs. Interestingly, the dose delivered to the lung is higher with SAVI than with CK. Fiducial/Synchrony tracking employed by CK might reduce errors in delivery compared to errors associated with shifts of the SAVI implant. In conclusion, when CK is an option for partial breast irradiation it may be preferable to SAVI.
An extension of the receiver operating characteristic curve and AUC-optimal classification.
Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto
2012-10-01
While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate at a fixed low false-positive rate is preferable, and thus the partial AUC corresponding to low false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
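A minimal sketch of evaluating the partial AUC over a low false-positive-rate range, which is the quantity the extended method targets; the synthetic labels, scores, and FPR cutoff are assumptions, and this is not the boosting algorithm itself:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, auc

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)                 # synthetic binary labels
scores = y + rng.normal(0.0, 1.2, size=500)      # synthetic classifier scores

# Standardized partial AUC over FPR in [0, 0.1], the screening-relevant range.
pauc_std = roc_auc_score(y, scores, max_fpr=0.1)

# Unstandardized partial area from the empirical ROC curve.
fpr, tpr, _ = roc_curve(y, scores)
mask = fpr <= 0.1
pauc_raw = auc(fpr[mask], tpr[mask])
print(pauc_std, pauc_raw)
```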
Measurement variability error for estimates of volume change
James A. Westfall; Paul L. Patterson
2007-01-01
Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...
NASA Astrophysics Data System (ADS)
Zhang, Xiaolin; Mao, Mao; Yin, Yan; Wang, Bin
2018-01-01
This study numerically evaluates the effects of aerosol microphysics, including coated volume fraction of black carbon (BC), shell/core ratio, and size distribution, on the absorption enhancement (
Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors.
Cohen, Michael X; van Gaal, Simon
2014-02-01
We investigated the neural systems underlying conflict detection and error monitoring during rapid online error correction/monitoring mechanisms. We combined data from four separate cognitive tasks and 64 subjects in which EEG and EMG (muscle activity from the thumb used to respond) were recorded. In typical neuroscience experiments, behavioral responses are classified as "error" or "correct"; however, closer inspection of our data revealed that correct responses were often accompanied by "partial errors" - a muscle twitch of the incorrect hand ("mixed correct trials," ~13% of the trials). We found that these muscle twitches dissociated conflicts from errors in time-frequency domain analyses of EEG data. In particular, both mixed-correct trials and full error trials were associated with enhanced theta-band power (4-9Hz) compared to correct trials. However, full errors were additionally associated with power and frontal-parietal synchrony in the delta band. Single-trial robust multiple regression analyses revealed a significant modulation of theta power as a function of partial error correction time, thus linking trial-to-trial fluctuations in power to conflict. Furthermore, single-trial correlation analyses revealed a qualitative dissociation between conflict and error processing, such that mixed correct trials were associated with positive theta-RT correlations whereas full error trials were associated with negative delta-RT correlations. These findings shed new light on the local and global network mechanisms of conflict monitoring and error detection, and their relationship to online action adjustment. © 2013.
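A minimal sketch of one common way to obtain single-trial theta-band (4-9 Hz) power from an EEG channel, via band-pass filtering and the Hilbert envelope; the synthetic signal and sampling rate are assumptions, and the study's own analyses used full time-frequency decompositions and robust regression:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 512.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)  # synthetic single trial

# Band-pass 4-9 Hz (theta), zero-phase.
b, a = butter(4, [4 / (fs / 2), 9 / (fs / 2)], btype="band")
theta = filtfilt(b, a, eeg)

# Instantaneous theta power from the analytic signal.
theta_power = np.abs(hilbert(theta)) ** 2
print(theta_power.mean())
```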
Factors that influence the generation of autobiographical memory conjunction errors
Devitt, Aleea L.; Monk-Fromont, Edwin; Schacter, Daniel L.; Addis, Donna Rose
2015-01-01
The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory may be incorrectly incorporated into another, forming autobiographical memory conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of autobiographical memory conjunction errors. PMID:25611492
Factors that influence the generation of autobiographical memory conjunction errors.
Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose
2016-01-01
The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.
Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images
NASA Astrophysics Data System (ADS)
Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.
1994-05-01
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied, where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of the application of the algorithm to these datasets is presented and compared to the manual lesion segmentation of the same data.
Modi, Shilpi; Bhattacharya, Manisha; Singh, Namita; Tripathi, Rajendra Prasad; Khushu, Subash
2012-10-01
To investigate structural reorganization in the brain with differential visual experience using voxel-based morphometry (VBM) with the Diffeomorphic Anatomic Registration Through Exponentiated Lie algebra (DARTEL) approach. High-resolution structural MR images were acquired in fifteen normally sighted healthy controls, thirteen totally blind subjects and six partially blind subjects. The analysis was carried out using SPM8 software on the MATLAB 7.6.0 platform. The VBM study revealed gray matter volume atrophy in the cerebellum and left inferior parietal cortex in totally blind subjects, and in the left inferior parietal cortex, right caudate nucleus, and left primary visual cortex in partially blind subjects, as compared to controls. White matter volume loss was found in the calcarine gyrus in totally blind subjects and in the thalamus/somatosensory region in partially blind subjects as compared to controls. Besides, an increase in gray matter volume was also found in the left middle occipital and middle frontal gyri and right entorhinal cortex, and an increase in white matter volume was found in the superior frontal gyrus, left middle temporal gyrus and right Heschl's gyrus in totally blind subjects as compared to controls. Comparison between totally and partially blind subjects revealed a greater gray matter volume in the left cerebellum of partially blind subjects and in left Brodmann area 18 of totally blind subjects. The results suggest that loss of vision at an early age can induce significant structural reorganization on account of the loss of visual input. These plastic changes are different in early-onset total blindness as compared to partial blindness. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Novikov, A. N.; Doronin, Ya. I.; Rakhmanova, P. A.
2018-07-01
The heat capacities and volumes of dimethyl sulfoxide (DMSO) solutions of barium and cadmium iodides at 298.15 K were measured by calorimetry and densimetry. The standard partial molar heat capacities, C°p,2, and volumes, V°2, of BaI2 and CdI2 in DMSO were calculated. The standard heat capacities, C°p,i, and volumes, V°i, of the barium and cadmium ions in DMSO at 298.15 K were determined.
Phobos: Observed bulk properties
NASA Astrophysics Data System (ADS)
Pätzold, Martin; Andert, Tom; Jacobson, Robert; Rosenblatt, Pascal; Dehant, Véronique
2014-11-01
This work is a review of the mass determinations of the Mars moon Phobos by spacecraft close flybys, by solving for the Martian gravity field and by the analysis of secular orbit perturbations. The absolute value and its accuracy are sensitive to the knowledge and accuracy of the Phobos ephemeris, the spacecraft orbit, other perturbing forces acting on the spacecraft, and the resolution of the Martian gravity field, in addition to the measurement accuracy of the radio tracking data. The mass value and its error improved from spacecraft mission to mission and from the modern analysis of "old" tracking data, but these solutions depend on the accuracy of the ephemeris at the time of observation. The mass value seems to settle within the range GMPh = (7.11±0.09)×10⁻⁴ km³ s⁻², which covers almost all mass values from close flybys and "distant" encounters within its 3-σ error (1.5%). Using the volume determined from MEX HRSC imaging, the bulk density is (1873±31) kg m⁻³ (3-σ error or 1.7%), a low value which suggests that Phobos is either highly porous, composed partially of light material, or both. The determination of the gravity coefficients C20 and C22 from the Mars Express 2010 close flyby does not allow conclusions to be drawn about the internal structure. The large errors do not distinguish whether Phobos is homogeneous or not. In view of theories of Phobos' origin, one possibility is that Phobos is not a captured asteroid but accreted from a debris disk in Mars orbit as a second-generation solar system object.
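As a rough consistency check on the quoted numbers, the bulk density follows from ρ = (GM/G)/V; in the sketch below the volume is back-calculated to reproduce the published density and is therefore an assumption, not a value from the paper:

```python
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
GM = 7.11e-4 * 1e9            # km^3 s^-2 -> m^3 s^-2
mass = GM / G                 # kg, roughly 1.07e16
volume = 5.69e12              # m^3, assumed HRSC-derived volume (illustrative)
rho = mass / volume
print(f"mass = {mass:.3e} kg, bulk density = {rho:.0f} kg m^-3")
```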
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinyard, Natalia Sergeevna; Perry, Theodore Sonne; Usov, Igor Olegovich
2017-10-04
We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error is Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL), which can be rewritten in terms of fractional errors as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = [1/ln(T)](ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2...
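A minimal sketch of the fractional-error budget outlined above; the function and variable names and the example transmission and error values are assumptions:

```python
import numpy as np

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, d_rhoL_over_rhoL):
    """Worst-case fractional error in opacity k = -ln(T)/(rho*L), with T = B/B0."""
    d_lnT = dB_over_B + dB0_over_B0              # fractional signal errors add to the error in ln(T)
    return d_lnT / abs(np.log(T)) + d_rhoL_over_rhoL

# Example: 30% transmission, 2% and 1% backlighter-signal errors, 3% areal-density error.
print(opacity_fractional_error(T=0.3, dB_over_B=0.02, dB0_over_B0=0.01, d_rhoL_over_rhoL=0.03))
```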
Forecasting the brittle failure of heterogeneous, porous geomaterials
NASA Astrophysics Data System (ADS)
Vasseur, Jérémie; Wadsworth, Fabian; Heap, Michael; Main, Ian; Lavallée, Yan; Dingwell, Donald
2017-04-01
Heterogeneity develops in magmas during ascent and is dominated by the development of crystal and importantly, bubble populations or pore-network clusters which grow, interact, localize, coalesce, outgas and resorb. Pore-scale heterogeneity is also ubiquitous in sedimentary basin fill during diagenesis. As a first step, we construct numerical simulations in 3D in which randomly generated heterogeneous and polydisperse spheres are placed in volumes and which are permitted to overlap with one another, designed to represent the random growth and interaction of bubbles in a liquid volume. We use these simulated geometries to show that statistical predictions of the inter-bubble lengthscales and evolving bubble surface area or cluster densities can be made based on fundamental percolation theory. As a second step, we take a range of well constrained random heterogeneous rock samples including sandstones, andesites, synthetic partially sintered glass bead samples, and intact glass samples and subject them to a variety of stress loading conditions at a range of temperatures until failure. We record in real time the evolution of the number of acoustic events that precede failure and show that in all scenarios, the acoustic event rate accelerates toward failure, consistent with previous findings. Applying tools designed to forecast the failure time based on these precursory signals, we constrain the absolute error on the forecast time. We find that for all sample types, the error associated with an accurate forecast of failure scales non-linearly with the lengthscale between the pore clusters in the material. Moreover, using a simple micromechanical model for the deformation of porous elastic bodies, we show that the ratio between the equilibrium sub-critical crack length emanating from the pore clusters relative to the inter-pore lengthscale, provides a scaling for the error on forecast accuracy. Thus for the first time we provide a potential quantitative correction for forecasting the failure of porous brittle solids that build the Earth's crust.
Kim, Dae Keun; Jang, Yujin; Lee, Jaeseon; Hong, Helen; Kim, Ki Hong; Shin, Tae Young; Jung, Dae Chul; Choi, Young Deuk; Rha, Koon Ho
2015-12-01
To analyze long-term changes in both kidneys, and to predict renal function and contralateral hypertrophy after robot-assisted partial nephrectomy. A total of 62 patients underwent robot-assisted partial nephrectomy, and renal parenchymal volume was calculated using three-dimensional semi-automatic segmentation technology. Patients were evaluated within 1 month preoperatively, and postoperatively at 6 months, 1 year and continued up to 2-year follow up. Linear regression models were used to identify the factors that correlated with estimated glomerular filtration rate changes and contralateral hypertrophy 2 years after robot-assisted partial nephrectomy. The median global estimated glomerular filtration rate changes were -10.4%, -11.9%, and -2.4% at 6 months, 1 and 2 years post-robot-assisted partial nephrectomy, respectively. The ipsilateral kidney median parenchymal volume changes were -24%, -24.4%, and -21% at 6 months, 1 and 2 years post-robot-assisted partial nephrectomy, respectively. The contralateral renal volume changes were 2.3%, 9.6% and 12.9%, respectively. On multivariable linear analysis, preoperative estimated glomerular filtration rate was the best predictive factor for global estimated glomerular filtration rate change at 2 years post-robot-assisted partial nephrectomy (B -0.452; 95% confidence interval -0.84 to -0.14; P = 0.021), whereas the parenchymal volume loss rate (B -0.43; 95% confidence interval -0.89 to -0.15; P = 0.017) and tumor size (B 5.154; 95% confidence interval -0.11 to 9.98; P = 0.041) were the significant predictive factors for the degree of contralateral renal hypertrophy at 2 years post-robot-assisted partial nephrectomy. Preoperative estimated glomerular filtration rate significantly affects post-robot-assisted partial nephrectomy renal function. Renal mass size and renal parenchymal volume loss correlate with compensatory hypertrophy of the contralateral kidney. Contralateral hypertrophy of the renal parenchyma compensates for the functional loss of the ipsilateral kidney. © 2015 The Japanese Urological Association.
Kong, Chang Yi; Siratori, Tomoya; Funazukuri, Toshitaka; Wang, Guosheng
2014-10-03
The effects of temperature and density on retention of platinum(II) 2,4-pentanedionate in supercritical fluid chromatography were investigated at temperatures of 308.15-343.15 K and pressures from 8 to 40 MPa by the chromatographic impulse response method with curve fitting. The retention factors were utilized to derive the infinite dilution partial molar volumes of platinum(II) 2,4-pentanedionate in supercritical carbon dioxide. The determined partial molar volumes were small and positive at high pressures but exhibited very large and negative values in the highly compressible near-critical region of carbon dioxide. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhang, Bing-Fang; Yuan, Li-Bo; Kong, Qing-Ming; Shen, Wei-Zheng; Zhang, Bing-Xiu; Liu, Cheng-Hai
2014-10-01
In the present study, a new method using near-infrared spectroscopy combined with optical fiber sensing technology was applied to the analysis of hogwash oil in blended oil. The 50 samples were blends of frying oil and "nine three" soybean oil in certain volume ratios. The near-infrared transmission spectra were collected and quantitative analysis models for frying oil were established by partial least squares (PLS) and a BP artificial neural network. The coefficients of determination of the calibration sets were 0.908 and 0.934, respectively. The coefficients of determination of the validation sets were 0.961 and 0.952, the root mean square errors of calibration (RMSEC) were 0.184 and 0.136, and the root mean square error of prediction (RMSEP) was 0.1116 for both. These conform to the model application requirements. At the same time, frying oil and qualified edible oil were discriminated with principal component analysis (PCA), and the accuracy rate was 100%. The experiment proved that near-infrared spectral technology not only can quickly and accurately identify hogwash oil, but also can quantitatively detect hogwash oil. This method has a wide application prospect in the detection of oil.
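A minimal sketch of a PLS calibration with RMSEC/RMSEP reporting of the kind described above, using synthetic spectra; the component count and data are assumptions, and the BP neural network model is not shown:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 200))                         # 50 synthetic NIR spectra, 200 wavelengths
y = 0.8 * X[:, 10] + rng.normal(0.0, 0.1, size=50)     # synthetic frying-oil content

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

rmsec = mean_squared_error(y_cal, pls.predict(X_cal).ravel()) ** 0.5
rmsep = mean_squared_error(y_val, pls.predict(X_val).ravel()) ** 0.5
print(f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")
```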
Measurement of steep aspheric surfaces using improved two-wavelength phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Zhang, Liqiong; Wang, Shaopu; Hu, Yao; Hao, Qun
2017-10-01
Optical components with aspheric surfaces can improve the imaging quality of optical systems, and also provide extra advantages such as lighter weight, smaller volume and simpler structure. In order to satisfy these performance requirements, the surface error of aspheric surfaces, especially high-departure aspheric surfaces, must be measured accurately and conveniently. The major obstacle of traditional null interferometry for the aspheric surface under test is that specific and complex null optics need to be designed to fully compensate for the normal aberration of the aspheric surface under test. However, non-null interferometry, which partially compensates for the aspheric normal aberration, can test aspheric surfaces without specific null optics. In this work, a novel non-null test approach for measuring the deviation between aspheric surfaces and the best reference sphere by using an improved two-wavelength phase-shifting interferometer is described. With the help of a calibration based on reverse iteration optimization, we can effectively remove the retrace error and thus improve the accuracy. Simulation results demonstrate that this method can measure aspheric surfaces with departures of over tens of microns from the best reference sphere, which introduce approximately 500λ of wavefront aberration at the detector.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, C.; Beardsley, R. C.; Gao, G.; Qi, J.; Lin, H.
2016-02-01
A high-resolution (up to 2 km), unstructured-grid, fully ice-sea coupled Arctic Ocean Finite-Volume Community Ocean Model (AO-FVCOM) was used to simulate the Arctic sea ice over the period 1978-2014. Good agreement was found between simulated and observed sea ice extent, concentration, drift velocity and thickness, indicating that the AO-FVCOM captured not only the seasonal and interannual variability but also the spatial distribution of the sea ice in the Arctic over the past 37 years. Compared with six other Arctic Ocean models (ECCO2, GSFC, INMOM, ORCA, NAME and UW), the AO-FVCOM-simulated ice thickness showed a higher correlation coefficient and a smaller difference with observations. An effort was also made to examine the physical processes contributing to the model-produced bias in the sea ice simulation. The error in the direction of the ice drift velocity was sensitive to the wind turning angle; it was smaller when the wind was stronger and larger when the wind was weaker. This error could lead to a bias in the near-surface current in the fully or partially ice-covered zone, where the ice-sea interfacial stress was a major driving force.
NASA Astrophysics Data System (ADS)
Hasegawa, Bruce; Tang, H. Roger; Da Silva, Angela J.; Wong, Kenneth H.; Iwata, Koji; Wu, Max C.
2001-09-01
In comparison to conventional medical imaging techniques, dual-modality imaging offers the advantage of correlating anatomical information from X-ray computed tomography (CT) with functional measurements from single-photon emission computed tomography (SPECT) or with positron emission tomography (PET). The combined X-ray/radionuclide images from dual-modality imaging can help the clinician to differentiate disease from normal uptake of radiopharmaceuticals, and to improve diagnosis and staging of disease. In addition, phantom and animal studies have demonstrated that a priori structural information from CT can be used to improve quantification of tissue uptake and organ function by correcting the radionuclide data for errors due to photon attenuation, partial volume effects, scatter radiation, and other physical effects. Dual-modality imaging therefore is emerging as a method of improving the visual quality and the quantitative accuracy of radionuclide imaging for diagnosis of patients with cancer and heart disease.
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods can't deal with partial volumes and/or need information from an atlas, which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple-Sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
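A minimal sketch of the standard fuzzy c-means updates on voxel intensities, without the bias-field correction the second method adds; the intensity distributions, class count, and fuzziness exponent are assumptions:

```python
import numpy as np

def fuzzy_cmeans_1d(x, n_classes=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a 1D intensity vector (no bias-field correction)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=n_classes, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12          # voxel-to-center distances
        u = d ** (-2.0 / (m - 1.0))                                 # un-normalized memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u, centers

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(mu, 5.0, 500) for mu in (30.0, 80.0, 140.0)])
memberships, centers = fuzzy_cmeans_1d(intensities)
print(np.sort(centers))
```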
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
Persson, A; Brismar, T B; Lundström, C; Dahlström, N; Othberg, F; Smedby, O
2006-03-01
To compare three methods for standardizing volume rendering technique (VRT) protocols by studying aortic diameter measurements in magnetic resonance angiography (MRA) datasets. Datasets from 20 patients previously examined with gadolinium-enhanced MRA and with digital subtraction angiography (DSA) for abdominal aortic aneurysm were retrospectively evaluated by three independent readers. The MRA datasets were viewed using VRT with three different standardized transfer functions: the percentile method (Pc-VRT), the maximum-likelihood method (ML-VRT), and the partial range histogram method (PRH-VRT). The aortic diameters obtained with these three methods were compared with freely chosen VRT parameters (F-VRT) and with maximum intensity projection (MIP) concerning inter-reader variability and agreement with the reference method DSA. F-VRT parameters and PRH-VRT gave significantly higher diameter values than DSA, whereas Pc-VRT gave significantly lower values than DSA. The highest interobserver variability was found for F-VRT parameters and MIP, and the lowest for Pc-VRT and PRH-VRT. All standardized VRT methods were significantly superior to both MIP and F-VRT in this respect. The agreement with DSA was best for PRH-VRT, which was the only method with a mean error below 1 mm and which also had the narrowest limits of agreement (95% of cases between 2.1 mm below and 3.1 mm above DSA). All the standardized VRT methods compare favorably with MIP and VRT with freely selected parameters as regards interobserver variability. The partial range histogram method, although systematically overestimating vessel diameters, gives results closest to those of DSA.
Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali
2013-02-01
Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.
Analytical-Based Partial Volume Recovery in Mouse Heart Imaging
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; deKemp, Robert A.
2011-02-01
Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
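A minimal sketch of forming a recovery coefficient for a 1D profile through the myocardium by convolving an idealized wall model with a Gaussian point spread function; the wall thickness, PSF width, and activity levels are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 0.1                                  # mm per sample
x = np.arange(0, 20, dx)
wall = (x > 8) & (x < 10)                 # assumed 2 mm myocardial wall

# Idealized profile: myocardium = 1.0, blood/background = 0.2 (arbitrary units).
profile = np.where(wall, 1.0, 0.2)

# Scanner PSF modelled as a Gaussian with an assumed 1.8 mm FWHM.
fwhm_mm = 1.8
sigma_samples = fwhm_mm / (2.355 * dx)
blurred = gaussian_filter1d(profile, sigma_samples)

# Recovery coefficient: measured peak over true myocardial activity.
rc = blurred[wall].max() / 1.0
print(f"recovery coefficient ~ {rc:.2f}")
```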
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, τ. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent of the two correlation-based estimation methods.
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 90.0 to 7.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
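A minimal sketch of the flow-ratio transfer described above, in which a statistic from the index streamgage is scaled by the average ratio of paired measurements; the discharge values and the transferred statistic are assumptions:

```python
import numpy as np

# Paired measurements: partial-record station vs concurrent index-streamgage flows (ft^3/s).
q_partial = np.array([1.8, 2.4, 0.9, 1.2])     # assumed measurements at the partial-record station
q_index = np.array([6.1, 7.9, 3.2, 4.0])       # assumed concurrent flows at the index streamgage

mean_ratio = np.mean(q_partial / q_index)

# Transfer a low-flow statistic (e.g., the 7-day, 10-year low flow) from the index streamgage.
q7_10_index = 2.5                               # assumed statistic at the index streamgage
q7_10_partial = mean_ratio * q7_10_index
print(f"estimated 7Q10 at partial-record station: {q7_10_partial:.2f} ft^3/s")
```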
NASA Astrophysics Data System (ADS)
Monnin, Christophe
1989-06-01
Literature density data for binary and common-ion ternary solutions in the Na-K-Ca-Mg-Cl-SO4-HCO3-CO3-H2O system at 25°C have been analysed with Pitzer's ion interaction model, which provides an adequate representation of the experimental data for binary and common-ion ternary solutions up to high concentration. This analysis yields Pitzer's interaction parameters for the apparent and partial molal volumes, which are the first derivatives with respect to pressure of the interaction parameters for the free energy. From this information, densities of natural waters as well as partial molal volumes of their solutes can be predicted with good accuracy, as shown by several comparisons of calculated and measured values. It is shown that V¯MX − V¯°MX, the excess partial molal volume of the salt MX, depends more on the type of salt than on the electrolyte itself and that it increases with the charges of the salt components. The influence of concentration and composition on the variation of activity coefficients with pressure and on the partial molal volumes of the salts is discussed, using as an example the partial molal volume of CaSO4(aq) in solutions of various compositions. The increase of V¯CaSO4 with ionic strength is very large but is not very different for a NaCl-dominated natural water like the Red Sea lower brine than for a simple NaCl solution. Although the variation of activity coefficients with pressure is usually ignored for moderate pressures, like those found in hydrothermal environments, the present example shows that it can be as large as 30% for a 2-2 salt for a pressure increase from 1 to 500 bars at high ionic strength.
NASA Astrophysics Data System (ADS)
Bijnens, Johan; Rössler, Thomas
2015-11-01
We present a calculation of the finite volume corrections to meson masses and decay constants in three flavour Partially Quenched Chiral Perturbation Theory (PQChPT) through two-loop order in the chiral expansion for the flavour-charged (or off-diagonal) pseudoscalar mesons. The analytical results are obtained for three sea quark flavours with one, two or three different masses. We reproduce the known infinite volume results and the finite volume results in the unquenched case. The calculation has been performed using the supersymmetric formulation of PQChPT as well as with a quark flow technique.
ERIC Educational Resources Information Center
Tan Sisman, Gulcin; Aksu, Meral
2016-01-01
The purpose of the present study was to portray students' misconceptions and errors while solving conceptually and procedurally oriented tasks involving length, area, and volume measurement. The data were collected from 445 sixth grade students attending public primary schools in Ankara, Türkiye via a test composed of 16 constructed-response…
Dilatometric measurement of the partial molar volume of water sorbed to durum wheat flour.
Hasegawa, Ayako; Ogawa, Takenobu; Adachi, Shuji
2013-01-01
Moisture sorption isotherms were measured at 25 °C for untreated, dry-heated and pre-gelatinized durum wheat flour samples. The isotherms could be expressed by the Guggenheim-Anderson-de Boer equation. The amount of water sorbed to the untreated flour was highest for low water activity, with water sorbed to the pre-gelatinized and dry-heated flour samples following. The dry-heated and pre-gelatinized flour samples exhibited the same dependence of the partial molar volume of water on moisture content at 25 °C as the untreated flour. The partial molar volume of water was ca. 9 cm3/mol at a moisture content of 0.03 kg-H2O/kg-d.m. The volume increased with increasing moisture content, and reached a constant value of ca. 17.5 cm3/mol at a moisture content of 0.2 kg-H2O/kg-d.m. or higher.
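A minimal sketch of fitting the Guggenheim-Anderson-de Boer (GAB) isotherm mentioned above to sorption data; the data points, noise level, and starting parameters are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, m0, c, k):
    """GAB isotherm: equilibrium moisture content as a function of water activity."""
    return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

# Synthetic sorption data generated from assumed parameters plus noise.
aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
moisture = gab(aw, 0.06, 12.0, 0.85) + np.random.default_rng(4).normal(0, 0.001, aw.size)

params, _ = curve_fit(gab, aw, moisture, p0=[0.05, 10.0, 0.8])
print(dict(zip(["m0", "c", "k"], params)))
```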
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
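A minimal sketch of estimating the base-flow recession time constant τ from paired measurements on a single recession, assuming an exponential recession Q(t) = Q0·exp(−t/τ); the discharge pairs are assumptions:

```python
import numpy as np

# Each pair: (Q_first, Q_second, days apart) measured on one continuous recession.
pairs = [(5.2, 3.9, 6.0), (4.1, 2.8, 8.0), (6.0, 4.4, 7.0)]   # assumed data

# Under exponential recession, tau = dt / ln(Q1/Q2) for each pair.
taus = [dt / np.log(q1 / q2) for q1, q2, dt in pairs]
tau_estimate = np.mean(taus)
print(f"tau ~ {tau_estimate:.1f} days from {len(pairs)} measurement pairs")
```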
Errors in finite-difference computations on curvilinear coordinate systems
NASA Technical Reports Server (NTRS)
Mastin, C. W.; Thompson, J. F.
1980-01-01
Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.
The impact of estimation errors on evaluations of timber production opportunities.
Dennis L. Schweitzer
1970-01-01
Errors in estimating costs and returns, the timing of harvests, and the cost of using funds can greatly affect the apparent desirability of investments in timber production. Partial derivatives are used to measure the impact of these errors on the predicted present net worth of potential investments in timber production. Graphs that illustrate the impact of each type...
ERIC Educational Resources Information Center
Grunert, Megan L.; Raker, Jeffrey R.; Murphy, Kristen L.; Holme, Thomas A.
2013-01-01
The concept of assigning partial credit on multiple-choice test items is considered for items from ACS Exams. Because the items on these exams, particularly the quantitative items, use common student errors to define incorrect answers, it is possible to assign partial credits to some of these incorrect responses. To do so, however, it becomes…
Security and matching of partial fingerprint recognition systems
NASA Astrophysics Data System (ADS)
Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.
2004-08-01
Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on one of FVC2002's databases (DB1). The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
Kehl, Sven; Eckert, Sven; Sütterlin, Marc; Neff, K Wolfgang; Siemer, Jörn
2011-06-01
Three-dimensional (3D) sonographic volumetry is established in gynecology and obstetrics. Assessment of the fetal lung volume by magnetic resonance imaging (MRI) in congenital diaphragmatic hernias has become a routine examination. In vitro studies have shown a good correlation between 3D sonographic measurements and MRI. The aim of this study was to compare the lung volumes of healthy fetuses assessed by 3D sonography to MRI measurements and to investigate the impact of different rotation angles. A total of 126 fetuses between 20 and 40 weeks' gestation were measured by 3D sonography, and 27 of them were also assessed by MRI. The sonographic volumes were calculated by the rotational technique (virtual organ computer-aided analysis) with rotation angles of 6° and 30°. To evaluate the accuracy of 3D sonographic volumetry, percentage error and absolute percentage error values were calculated using MRI volumes as reference points. Formulas to calculate total, right, and left fetal lung volumes according to gestational age and biometric parameters were derived by stepwise regression analysis. Three-dimensional sonographic volumetry showed a high correlation compared to MRI (6° angle, R(2) = 0.971; 30° angle, R(2) = 0.917) with no systematic error for the 6° angle. Moreover, using the 6° rotation angle, the median absolute percentage error was significantly lower compared to the 30° angle (P < .001). The new formulas to calculate total lung volume in healthy fetuses only included gestational age and no biometric parameters (R(2) = 0.853). Three-dimensional sonographic volumetry of lung volumes in healthy fetuses showed a good correlation with MRI. We recommend using an angle of 6° because it assessed the lung volume more accurately. The specifically designed equations help estimate lung volumes in healthy fetuses.
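A minimal sketch of the percentage-error and absolute-percentage-error metrics used above to compare sonographic volumes against MRI; the paired volume values are assumptions:

```python
import numpy as np

us_volume = np.array([28.4, 35.1, 41.9, 22.7])    # assumed 3D-ultrasound lung volumes (cm^3)
mri_volume = np.array([27.5, 36.8, 40.2, 23.9])   # assumed MRI reference volumes (cm^3)

pe = 100.0 * (us_volume - mri_volume) / mri_volume           # percentage error
ape = np.abs(pe)                                             # absolute percentage error
print(f"median PE = {np.median(pe):.1f}%, median APE = {np.median(ape):.1f}%")
```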
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
1980-02-01
formula for predicting the number of errors during system testing. The equation he presents is B = V/E_CRIT, where B is the number of errors ... expected, V is the volume, and E_CRIT is "the mean number of elementary discriminations between potential errors in programming" (p. 85). E_CRIT can also ... prediction of delivered bugs is: B = V/E_CRIT ... 13,824 ... 2.3 McCabe's Complexity Metric. Thomas McCabe (1976) defined complexity in relation to
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio
2011-04-01
To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a tenfold increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduce the effective spatial resolution. A thinner volume is shown to provide a higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.
Lee, Chan Ho; Park, Young Joo; Ku, Ja Yoon; Ha, Hong Koo
2017-06-01
To evaluate the clinical application of computed tomography-based measurement of renal cortical volume and split renal volume as a single tool to assess the anatomy and renal function in patients with renal tumors before and after partial nephrectomy, and to compare the findings with technetium-99m dimercaptosuccinic acid renal scan. The data of 51 patients with a unilateral renal tumor managed by partial nephrectomy were retrospectively analyzed. The renal cortical volume of tumor-bearing and contralateral kidneys was measured using ImageJ software. Split estimated glomerular filtration rate and split renal volume calculated using this renal cortical volume were compared with the split renal function measured with technetium-99m dimercaptosuccinic acid renal scan. A strong correlation between split renal function and split renal volume of the tumor-bearing kidney was observed before and after surgery (r = 0.89, P < 0.001 and r = 0.94, P < 0.001). The preoperative and postoperative split estimated glomerular filtration rate of the operated kidney showed a moderate correlation with split renal function (r = 0.39, P = 0.004 and r = 0.49, P < 0.001). The correlation between the reductions in split renal function and split renal volume of the operated kidney (r = 0.87, P < 0.001) was stronger than that between the reduction in split renal function and the percent reduction in split estimated glomerular filtration rate (r = 0.64, P < 0.001). The split renal volume calculated using computed tomography-based renal volumetry had a strong correlation with the split renal function measured using technetium-99m dimercaptosuccinic acid renal scan. Computed tomography-based split renal volume measurement before and after partial nephrectomy can be used as a single modality for anatomical and functional assessment of the tumor-bearing kidney. © 2017 The Japanese Urological Association.
Set-up uncertainties: online correction with X-ray volume imaging.
Kataria, Tejinder; Abhishek, Ashu; Chadha, Pranav; Nandigam, Janardhan
2011-01-01
To determine interfractional three-dimensional set-up errors using X-ray volumetric imaging (XVI). Between December 2007 and August 2009, 125 patients were taken up for image-guided radiotherapy using online XVI. After matching of reference and acquired volume view images, set-up errors in three translation directions were recorded and corrected online before treatment each day. Mean displacements, population systematic (Σ), and random (σ) errors were calculated and analyzed using SPSS (v16) software. Optimum clinical target volume (CTV) to planning target volume (PTV) margins were calculated using Van Herk's (2.5Σ + 0.7σ) and Stroom's (2Σ + 0.7σ) formulas. Patients were grouped in 4 cohorts, namely brain, head and neck, thorax, and abdomen-pelvis. The mean vector displacements recorded were 0.18 cm, 0.15 cm, 0.36 cm, and 0.35 cm for brain, head and neck, thorax, and abdomen-pelvis, respectively. Analysis of individual mean set-up errors revealed good agreement with the proposed 0.3 cm isotropic margins for brain and 0.5 cm isotropic margins for head-neck. Similarly, the proposed 0.5 cm circumferential and 1 cm craniocaudal margins were in agreement with the thorax and abdomen-pelvic cases. The calculated mean displacements were well within the CTV-PTV margin estimates of Van Herk (90% population coverage to a minimum of 95% of the prescribed dose) and Stroom (99% target volume coverage by 95% of the prescribed dose). Employing these individualized margins in a particular cohort ensures target coverage comparable to that described in the literature, which is further improved if XVI-aided set-up error detection and correction is used before treatment.
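The two margin recipes quoted above reduce to simple arithmetic on the population systematic (Σ) and random (σ) errors. A minimal Python sketch follows; the error values are purely illustrative placeholders, not the cohort data reported in the study.

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """Van Herk CTV-to-PTV margin recipe: 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

def stroom_margin(sigma_sys, sigma_rand):
    """Stroom CTV-to-PTV margin recipe: 2*Sigma + 0.7*sigma."""
    return 2.0 * sigma_sys + 0.7 * sigma_rand

# Hypothetical population errors (cm) for one treatment-site cohort.
sigma_sys, sigma_rand = 0.15, 0.20
print(f"Van Herk margin: {van_herk_margin(sigma_sys, sigma_rand):.2f} cm")  # 0.52 cm
print(f"Stroom margin:   {stroom_margin(sigma_sys, sigma_rand):.2f} cm")    # 0.44 cm
```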
Hejl, H.R.
1989-01-01
The precipitation-runoff modeling system was applied to the 8.21 sq-mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and from 0.20 to 0.03 in/h for the 2 general soils groups in the calibrations. Simulated runoff volumes using 7 of the 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharges had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimate for volumes and peaks. (USGS)
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
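As a rough illustration of the uncertainty-budget idea described above (propagating several variability sources through a dosimetry chain and ranking them by the drop in standard deviation when each is removed), here is a hedged Python sketch; the source names and magnitudes are invented for the example, not taken from the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relative standard uncertainties of individual dosimetry steps
# (fractions of the absorbed dose); names and values are illustrative only.
sources = {"system_sensitivity": 0.03, "voi_delineation": 0.04,
           "recovery_coefficient": 0.05, "curve_fit_integration": 0.02}

def simulate_dose(active, n=100_000, dose_true=4.0):
    """Each active source perturbs the dose multiplicatively with a
    zero-mean Gaussian relative error; returns n Monte Carlo realisations."""
    dose = np.full(n, dose_true)
    for name, u in sources.items():
        if name in active:
            dose *= 1.0 + rng.normal(0.0, u, n)
    return dose

combined = simulate_dose(sources)
print(f"combined relative SD: {combined.std() / combined.mean():.2%}")
for name in sources:  # importance = decrease in SD when one source is removed
    reduced = simulate_dose([k for k in sources if k != name])
    print(f"without {name:22s}: {reduced.std() / reduced.mean():.2%}")
```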
Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin
2017-04-01
To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m 2 (30.02 ± 6.63 kg/m 2 ) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.
Study of an instrument for sensing errors in a telescope wavefront
NASA Technical Reports Server (NTRS)
Golden, L. J.; Shack, R. V.; Slater, D. N.
1973-01-01
Partial results are presented of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront. The coarse range sensor and fine range sensors are used in the experimentation. The design of a wavefront error simulator is presented along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.
Chalian, Hamid; Seyal, Adeel Rahim; Rezai, Pedram; Töre, Hüseyin Gürkan; Miller, Frank H; Bentrem, David J; Yaghmai, Vahid
2014-01-10
The accuracy for determining pancreatic cyst volume with commonly used spherical and ellipsoid methods is unknown. The role of CT volumetry in volumetric assessment of pancreatic cysts needs to be explored. To compare volumes of the pancreatic cysts by CT volumetry, spherical and ellipsoid methods and determine their accuracy by correlating with actual volume as determined by EUS-guided aspiration. Setting: This is a retrospective analysis performed at a tertiary care center. Patients: Seventy-eight pathologically proven pancreatic cysts evaluated with CT and endoscopic ultrasound (EUS) were included. Design: The volume of fourteen cysts that had been fully aspirated by EUS was compared to CT volumetry and the routinely used methods (ellipsoid and spherical volume). Two independent observers measured all cysts using commercially available software to evaluate inter-observer reproducibility for CT volumetry. The volume of pancreatic cysts as determined by various methods was compared using repeated measures analysis of variance. Bland-Altman plots and the intraclass correlation coefficient were used to determine the mean difference and correlation between observers and methods. The error was calculated as the percentage of the difference between the CT estimated volumes and the aspirated volume divided by the aspirated one. CT volumetry was comparable to aspirated volume (P=0.396) with very high intraclass correlation (r=0.891, P<0.001) and small mean difference (0.22 mL) and error (8.1%). Mean differences with aspirated volume and errors were larger for ellipsoid (0.89 mL, 30.4%; P=0.024) and spherical (1.73 mL, 55.5%; P=0.004) volumes than for CT volumetry. There was excellent inter-observer correlation in volumetry of the entire cohort (r=0.997, P<0.001). CT volumetry is accurate and reproducible. Ellipsoid and spherical volume overestimate the true volume of pancreatic cysts.
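The spherical and ellipsoid approximations compared above are closed-form volume formulas, so the error convention stated in the abstract (difference from the aspirated volume divided by the aspirated volume) is easy to reproduce. A small Python sketch with made-up cyst dimensions, not study data:

```python
import math

def sphere_volume(d):
    """Sphere volume from a single maximal diameter d (cm -> mL)."""
    return math.pi * d ** 3 / 6.0

def ellipsoid_volume(d1, d2, d3):
    """Ellipsoid volume from three orthogonal diameters (cm -> mL)."""
    return math.pi * d1 * d2 * d3 / 6.0

def percent_error(estimated, aspirated):
    """Error convention used above: (estimated - aspirated) / aspirated * 100."""
    return 100.0 * (estimated - aspirated) / aspirated

# Hypothetical cyst measuring 2.4 x 2.0 x 1.6 cm with 4.0 mL aspirated.
d1, d2, d3, aspirated = 2.4, 2.0, 1.6, 4.0
print(f"spherical:  {percent_error(sphere_volume(d1), aspirated):+.1f}%")
print(f"ellipsoid:  {percent_error(ellipsoid_volume(d1, d2, d3), aspirated):+.1f}%")
```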
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.
1994-01-01
It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDEs) may, because of boundary conditions, lead to deterioration of accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic PDEs (linear and non-linear) whose boundary treatment is done via the SAT procedure. A methodology is presented for recovery of the full order of accuracy, and it has been applied to the case of a 4th-order explicit finite difference scheme.
Differential phase measurements of D-region partial reflections
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Sechrist, C. F., Jr.
1972-01-01
Differential phase partial reflection measurements were used to deduce D region electron density profiles. The phase difference was measured by taking sums and differences of amplitudes received on an array of crossed dipoles. The reflection model used was derived from Fresnel reflection theory. Seven profiles obtained over the period from 13 October 1971 to 5 November 1971 are presented, along with the results from simultaneous measurements of differential absorption. Some possible sources of error and error propagation are discussed. A collision frequency profile was deduced from the electron concentration calculated from differential phase and differential absorption.
Estimating pore and cement volumes in thin section
Halley, R.B.
1978-01-01
Point count estimates of pore, grain and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility ±3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume and 2) edge effects caused by grain curvature within a 30-micron thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.
Interaction and Representational Integration: Evidence from Speech Errors
ERIC Educational Resources Information Center
Goldrick, Matthew; Baker, H. Ross; Murphy, Amanda; Baese-Berk, Melissa
2011-01-01
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated…
Establishing the 3-D finite element solid model of femurs in partial by volume rendering.
Zhang, Yinwang; Zhong, Wuxue; Zhu, Haibo; Chen, Yun; Xu, Lingjun; Zhu, Jianmin
2013-01-01
Although several methods of femoral three-dimensional (3-D) finite element modeling are already available, reports of 3-D finite element solid models of partial femurs built by the volume rendering method remain rare. We aim to analyze the advantages of this modeling method by establishing a 3-D finite element solid model of partial femurs by volume rendering. A 3-D finite element model of the normal human femur, made up of three anatomic structures (cortical bone, cancellous bone and pulp cavity), was constructed after pretreatment of the original CT images. Moreover, finite element analysis was carried out with different material properties: three types of materials were assigned to cortical bone, six to cancellous bone, and a single one to the pulp cavity. The established 3-D finite element model of the femur contains three anatomical structures: cortical bone, cancellous bone, and pulp cavity. The compressive stress was concentrated primarily in the medial surface of the femur, especially in the calcar femorale. Compared with modeling the whole femur by the volume rendering method, the 3-D finite element solid model created for the partial femur is more realistic and better suited for finite element analysis. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Positron beam study of indium tin oxide films on GaN
NASA Astrophysics Data System (ADS)
Cheung, C. K.; Wang, R. X.; Beling, C. D.; Djurisic, A. B.; Fung, S.
2007-02-01
Variable energy Doppler broadening spectroscopy has been used to study open-volume defects formed during the fabrication of indium tin oxide (ITO) thin films grown by electron-beam evaporation on n-GaN. The films were prepared at room temperature, 200 and 300 °C without oxygen and at 200 °C under different oxygen partial pressures. The results show that at elevated growth temperatures the ITO has fewer open volume sites and grows with a more crystalline structure. High temperature growth, however, is not sufficient in itself to remove open volume defects at the ITO/GaN interface. Growth under elevated temperature and under partial pressure of oxygen is found to further reduce the vacancy type defects associated with the ITO film, thus improving the quality of the film. Oxygen partial pressures of 6 × 10⁻³ mbar and above are found to remove open volume defects associated with the ITO/GaN interface. The study suggests that, irrespective of growth temperature and oxygen partial pressure, there is only one type of defect in the ITO responsible for trapping positrons, which we tentatively attribute to the oxygen vacancy.
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan
2009-01-01
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique has been carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, which shows that the error in the high-illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, raising the saturation level of the photodiodes also increases the error in the high-illumination region.
Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V
2010-12-15
We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal/mol for a test set of 120 organic molecules).
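The correction described above is a linear fit of the residual hydration free energy against the calculated partial molar volume. A minimal Python sketch with synthetic numbers (the real study fits 185 molecules; none of the values below are from it):

```python
import numpy as np

# Synthetic example data: uncorrected 3D-RISM hydration free energies
# (kcal/mol), calculated partial molar volumes (cm^3/mol), and experimental
# hydration free energies. Values are illustrative only.
dg_calc = np.array([3.1, 5.4, 2.2, 7.9, 4.5])
pmv     = np.array([60.0, 110.0, 45.0, 150.0, 95.0])
dg_exp  = np.array([-4.0, -1.5, -3.2, 1.0, -2.1])

# Fit the residual (experiment minus calculation) as a linear function of the
# partial molar volume, then apply it as an empirical correction.
a, b = np.polyfit(pmv, dg_exp - dg_calc, 1)
dg_corrected = dg_calc + a * pmv + b
print("slope, intercept:", a, b)
print("corrected hydration free energies:", dg_corrected)
```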
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
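A pared-down illustration of this kind of error propagation for a single log follows; the volume formula (Smalian's) is one of the four models named above, and every distribution parameter is an invented placeholder rather than a value from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical log: length (m), end diameters (m), wood density (Mg/m^3),
# and carbon concentration (mass fraction), each with an illustrative error.
length  = rng.normal(4.00, 0.05, n)    # length measurement error
d_base  = rng.normal(0.30, 0.01, n)    # diameter measurement error
d_top   = rng.normal(0.20, 0.01, n)
density = rng.normal(0.35, 0.06, n)    # decay-class density sampling error
carbon  = rng.normal(0.48, 0.02, n)    # chemical concentration error

area = lambda d: np.pi * (d / 2.0) ** 2
volume = length * (area(d_base) + area(d_top)) / 2.0   # Smalian's formula (m^3)
carbon_kg = volume * density * carbon * 1000.0         # Mg -> kg of carbon

mean, sd = carbon_kg.mean(), carbon_kg.std()
print(f"carbon pool: {mean:.1f} kg C +/- {sd:.1f} kg (1 SD, {sd / mean:.1%})")
```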
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
The international food unit: a new measurement aid that can improve portion size estimation.
Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M
2017-09-12
Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4 × 4 × 4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments and the cubic shape facilitates portion size education and training, memory and recall, and computer processing which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to other methods. The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi
Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structure and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, wherein 8 patients were right-sided and 4 were left-sided breast. Tangential fields were placed on the 3-dimensional-computed tomography (3D-CT) dataset by isocentric technique and the dose to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted for 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, liver, and percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, which was followed by the lateral direction. The setup error in isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.
A combined analysis of the hadronic and leptonic decays of the Z⁰
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gandois, B.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grosse-Wiesmann, P.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; Von Krogh, J.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Lasota, M. M. B.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Lupu, N.; Ma, J.; Macbeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Muller, A.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Von der Schmitt, H.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Spreadbury, E. J.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; Van den Plas, D.; Vandalen, G. J.; Virtue, C. 
J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.; Zylberajch, S.; OPAL Collaboration
1990-04-01
We report on a measurement of the mass of the Z⁰ boson, its total width, and its partial decay widths into hadrons and leptons. On the basis of 25 801 hadronic decays and 1999 decays into electrons, muons or taus, selected over eleven energy points between 88.28 GeV and 95.04 GeV, we obtain from a combined fit to hadrons and leptons a mass of M_Z = 91.154 ± 0.021 (exp) ± 0.030 (LEP) GeV, and a total width of Γ_Z = 2.536 ± 0.045 GeV. The errors on M_Z have been separated into the experimental error and the uncertainty due to the LEP beam energy. The measured leptonic partial widths are Γ_ee = 81.2 ± 2.6 MeV, Γ_μμ = 82.6 ± 5.8 MeV, and Γ_ττ = 85.7 ± 7.1 MeV, consistent with lepton universality. From a fit assuming lepton universality we obtain Γ_ℓ⁺ℓ⁻ = 81.9 ± 2.0 MeV. The hadronic partial width is Γ_had = 1838 ± 46 MeV. From the measured total and partial widths, a model-independent value for the invisible width is calculated to be Γ_inv = 453 ± 44 MeV. The errors quoted include both the statistical and the systematic uncertainties.
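As a quick cross-check of the quoted model-independent invisible width, subtracting the hadronic and three leptonic widths from the total reproduces the central value; the naive, uncorrelated error propagation sketched below in Python overstates the uncertainty relative to the quoted ±44 MeV because the fitted widths are correlated.

```python
import math

# Central values and errors quoted above (MeV).
gamma_Z,   u_Z   = 2536.0, 45.0
gamma_had, u_had = 1838.0, 46.0
gamma_ll,  u_ll  = 81.9,   2.0     # lepton-universality leptonic width

gamma_inv = gamma_Z - gamma_had - 3.0 * gamma_ll
u_naive = math.sqrt(u_Z ** 2 + u_had ** 2 + (3.0 * u_ll) ** 2)  # ignores correlations
print(f"Gamma_inv = {gamma_inv:.0f} MeV (quoted: 453 +/- 44 MeV)")
print(f"uncorrelated propagation would give +/- {u_naive:.0f} MeV")
```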
Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M
2014-01-01
Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at a SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and transcription and documentation of orders.
Kim, Min-Soo; Lee, Jeong-Rim; Shin, Yang-Sik; Chung, Ji-Won; Lee, Kyu-Ho; Ahn, Ki Ryang
2014-03-01
This single-center, prospective, randomized, double-blind, 2-arm, parallel group comparison trial was performed to establish whether the adult-sized laryngeal mask airway (LMA) Classic (The Laryngeal Mask Company Ltd, Henley-on-Thames, UK) could be used safely without any consideration of cuff hyperinflation when a cuff of the LMA Classic was inflated using half the maximum inflation volume or the resting volume before insertion of device. Eighty patients aged 20 to 70 years scheduled for general anesthesia using the LMA Classic were included. Before insertion, the cuff was partially filled with half the maximum inflation volume in the half volume group or the resting volume created by opening the pilot balloon valve to equalize with atmospheric pressure in the resting volume group. Several parameters regarding insertion, intracuff pressure, airway leak pressure, and leakage volume/fraction were collected after LMA insertion. The LMA Classic with a partially inflated cuff was successfully inserted in all enrolled patients. Both groups had the same success rate of 95% at the first insertion attempt. The half volume group had a lower mean intracuff pressure compared with the resting volume group (54.5 ± 16.1 cm H2O vs 61.8 ± 16.1 cm H2O; P = .047). There was no difference in airway leak pressure or leakage volume/fraction between the 2 groups under mechanical ventilation. The partially inflated cuff method using half the maximum recommended inflation volume or the resting volume is feasible with the adult-sized LMA Classic, resulting in a high success rate of insertion and adequate range of intracuff pressures. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
NASA Astrophysics Data System (ADS)
Taniguchi, Y.; Okuno, A.; Kato, M.
2010-03-01
Pressure can restrain heat-induced aggregation and dissociate heat-induced aggregates. We observed the aggregation-preventing pressure effect and the aggregate-dissociating pressure effect to characterize the heat-induced aggregation of equine serum albumin (ESA) by FT-IR spectroscopy. The results suggest that the α-helical structure collapses at the beginning of heat-induced aggregation through a swollen structure, and that the rearrangement of the structure into intermolecular β-sheet then takes place through a partially unfolded structure. We determined the activation volume for heat-induced aggregation (ΔV‡ = +93 ml/mol) and the partial molar volume difference between the native state and the heat-induced aggregates (ΔV = +32 ml/mol). This positive partial molar volume difference suggests that the heat-induced aggregates have larger internal voids than the native structure. Moreover, the positive volume change implies that the formation of the intermolecular β-sheet is unfavorable under high pressure.
NASA Astrophysics Data System (ADS)
Sharma, Poonam; Chauhan, S.; Syal, V. K.; Chauhan, M. S.
2008-04-01
Partial molar volumes of the drugs Parvon Spas, Parvon Forte, Tramacip, and Parvodex in aqueous mixtures of methanol (MeOH), ethanol (EtOH), and propan-1-ol (1-PrOH) have been determined. The data have been evaluated using the Masson equation. The parameters, namely the apparent molar volumes (φ_V), the partial molar volumes (φ_V⁰), and the S_V values (experimental slopes), have been interpreted in terms of solute-solvent interactions. In addition, these studies have also been extended to determine the effect of these drugs on the solvation behavior of an electrolyte (sodium chloride), a surfactant (sodium dodecyl sulfate), and a non-electrolyte (sucrose). It can be inferred from these studies that all drug cations can be regarded as structure makers/promoters due to hydrophobic hydration. Furthermore, the results are correlated to understand the solution behavior of drugs in aqueous-alcoholic systems, as a function of the nature of the alcohol and solutes.
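Since the analysis above rests on the Masson equation, φ_V = φ_V⁰ + S_V √c, the two reported parameters come from a straight-line fit in √c. A short Python sketch with synthetic concentrations and apparent molar volumes (not the drug data):

```python
import numpy as np

# Synthetic apparent molar volumes (cm^3/mol) at several molar concentrations.
c     = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])   # mol/L
phi_v = np.array([215.4, 216.0, 216.9, 217.5, 218.1, 218.5])

# Masson equation: phi_v = phi_v0 + S_v * sqrt(c), i.e. linear in sqrt(c).
S_v, phi_v0 = np.polyfit(np.sqrt(c), phi_v, 1)
print(f"phi_v0 (infinite-dilution partial molar volume): {phi_v0:.1f} cm^3/mol")
print(f"S_v (experimental slope): {S_v:.1f}")
```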
Using Dalton's Law of Partial Pressures to Determine the Vapor Pressure of a Volatile Liquid
ERIC Educational Resources Information Center
Hilgeman, Fred R.; Bertrand, Gary; Wilson, Brent
2007-01-01
This experiment, designed for a general chemistry laboratory, illustrates the use of Dalton's law of partial pressures to determine the vapor pressure of a volatile liquid. A predetermined volume of air is injected into a calibrated tube filled with a liquid whose vapor pressure is to be measured. The volume of the liquid displaced is greater than…
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
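The core of the method above is an EM (Richardson-Lucy-type) restoration with a position-dependent Gaussian PSF and a stopping rule on the correction matrix. The 1-D Python sketch below is only a toy version of that idea, with invented PSF widths and a synthetic two-level "tumor" profile, not the authors' 3-D implementation.

```python
import numpy as np

def psf_row(center, sigma, n):
    """One row of the blur matrix: a normalized Gaussian PSF whose width
    depends on position (spatially varying PSF)."""
    x = np.arange(n)
    row = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return row / row.sum()

n = 64
truth = np.zeros(n); truth[20:30] = 4.0; truth[30:40] = 1.5   # heterogeneous uptake

# PSF width grows with distance from the centre of the field of view.
A = np.array([psf_row(i, 1.5 + 0.02 * abs(i - n // 2), n) for i in range(n)])
observed = A @ truth                      # image degraded by partial volume effects

x = np.ones(n)
sensitivity = A.T @ np.ones(n)
for _ in range(100):                      # EM / Richardson-Lucy iterations
    correction = (A.T @ (observed / np.maximum(A @ x, 1e-12))) / sensitivity
    x *= correction
    if np.abs(correction - 1.0).max() < 0.05:   # stopping rule on the correction
        break

print(f"peak before correction: {observed.max():.2f}, after: {x.max():.2f}")
```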
Hagiwara, A; Hori, M; Yokoyama, K; Takemura, M Y; Andica, C; Kumamaru, K K; Nakazawa, M; Takano, N; Kawasaki, H; Sato, S; Hamasaki, N; Kunimatsu, A; Aoki, S
2017-02-01
T1 and T2 values and proton density can now be quantified on the basis of a single MR acquisition. The myelin and edema in a voxel can also be estimated from these values. The purpose of this study was to evaluate a multiparametric quantitative MR imaging model that assesses myelin and edema for characterizing plaques, periplaque white matter, and normal-appearing white matter in patients with MS. We examined 3T quantitative MR imaging data from 21 patients with MS. The myelin partial volume, the excess parenchymal water partial volume, the inverses of the longitudinal (T1) and transverse (T2) relaxation times (R1, R2), and the proton density were compared among plaques, periplaque white matter, and normal-appearing white matter. All metrics differed significantly across the 3 groups (P < .001). Those in plaques differed most from those in normal-appearing white matter. The percentage changes of the metrics in plaques and periplaque white matter relative to normal-appearing white matter were significantly more different from zero for myelin partial volume (mean, -61.59 ± 20.28% [plaque relative to normal-appearing white matter], and mean, -10.51 ± 11.41% [periplaque white matter relative to normal-appearing white matter]) and excess parenchymal water partial volume (13.82 × 10³ ± 49.47 × 10³% and 51.33 × 10² ± 155.31 × 10²%) than for R1 (-35.23 ± 13.93% and -6.08 ± 8.66%), R2 (-21.06 ± 11.39% and -4.79 ± 6.79%), and proton density (23.37 ± 10.30% and 3.37 ± 4.24%). Multiparametric quantitative MR imaging captures white matter damage in MS. Myelin partial volume and excess parenchymal water partial volume are more sensitive to the MS disease process than R1, R2, and proton density. © 2017 by American Journal of Neuroradiology.
Lysandropoulos, Andreas P; Absil, Julie; Metens, Thierry; Mavroudakis, Nicolas; Guisset, François; Van Vlierberghe, Eline; Smeets, Dirk; David, Philippe; Maertens, Anke; Van Hecke, Wim
2016-02-01
There is emerging evidence that brain atrophy is a part of the pathophysiology of Multiple Sclerosis (MS) and correlates with several clinical outcomes of the disease, both physical and cognitive. Consequently, brain atrophy is becoming an important parameter in patients' follow-up. Since in clinical practice both 1.5 Tesla (T) and 3T magnetic resonance imaging (MRI) systems are used for MS patient follow-up, questions arise regarding compatibility and a possible need for standardization. Therefore, in this study 18 MS patients were scanned on the same day on a 1.5T and a 3T scanner. For each scanner, a 3D T1 and a 3D FLAIR were acquired. As no atrophy is expected within 1 day, these datasets can be used to evaluate the median percentage error of the brain volume measurement for gray matter (GM) volume and parenchymal volume (PV) between 1.5T and 3T scanners. The results are obtained with MSmetrix, which is developed especially for use in the MS clinical care path, and compared to Siena (FSL), a widely used software package for research purposes. The MSmetrix median percentage error of the brain volume measurement between a 1.5T and a 3T scanner is 0.52% for GM and 0.35% for PV. For Siena this error equals 2.99%. When data of the same scanner are compared, the error is on the order of 0.06-0.08% for both MSmetrix and Siena. MSmetrix appears robust on both the 1.5T and 3T systems, and the measurement error becomes an order of magnitude higher between scanners with different field strength.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balderson, Michael, E-mail: michael.balderson@rmp.uhn.ca; Brown, Derek; Johnson, Patricia
The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT.
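For readers who want a feel for the kind of calculation described, the Python sketch below couples a linear-quadratic Poisson TCP model to a population spread in radiosensitivity and a crude 1-D geometric shift of the target into the penumbra. Every number (dose profile, clonogen count, α distribution, α/β) is an illustrative assumption, not a value from this work.

```python
import numpy as np

rng = np.random.default_rng(2)

def tcp(alpha, dose_map, clonogens_per_voxel=2.5e5, n_frac=25):
    """Poisson TCP with linear-quadratic cell kill; alpha/beta fixed at 10 Gy."""
    beta = alpha / 10.0
    d = dose_map / n_frac                               # dose per fraction (Gy)
    sf = np.exp(-n_frac * (alpha * d + beta * d ** 2))  # LQ surviving fraction
    return np.exp(-clonogens_per_voxel * sf.sum())      # Poisson TCP

# Planned 1-D dose: a 45-voxel high-dose plateau at 50 Gy with a penumbra
# falling to 20 Gy; the CTV is 40 voxels wide.
planned = np.concatenate([np.full(45, 50.0), np.linspace(50.0, 20.0, 15)])
ctv_dose = lambda shift: planned[shift:shift + 40]      # systematic shift (voxels)

alphas = rng.normal(0.30, 0.06, 2000).clip(0.05)        # patient-to-patient spread
for shift in (0, 5, 15):
    mean_tcp = np.mean([tcp(a, ctv_dose(shift)) for a in alphas])
    print(f"systematic shift {shift:2d} voxels: mean TCP = {mean_tcp:.2f}")
```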
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
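The triple energy window (TEW) correction mentioned in option (2) estimates the scatter inside the photopeak window from narrow windows on either side of it. A hedged Python sketch with made-up I-131 window widths and counts:

```python
def tew_primary(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    """Triple-energy-window scatter estimate: a trapezoid spanned by the
    counts-per-keV in the two narrow flanking windows, integrated over the
    photopeak width. Returns (scatter-corrected primary, scatter estimate)."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return max(c_peak - scatter, 0.0), scatter

# Hypothetical I-131 364 keV acquisition: 20% photopeak window (~73 keV wide)
# with 6 keV flanking windows; the counts are illustrative only.
primary, scatter = tew_primary(c_peak=1.0e6, c_lower=3.0e4, c_upper=1.2e4,
                               w_peak=72.8, w_lower=6.0, w_upper=6.0)
print(f"scatter estimate: {scatter:.3g} counts, corrected primary: {primary:.3g}")
```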
2011-01-01
Background: Valve dysfunction is a common cardiovascular pathology. Despite significant clinical research, there is little formal study of how valve dysfunction affects overall circulatory dynamics. Validated models would offer the ability to better understand these dynamics and thus optimize diagnosis, as well as surgical and other interventions. Methods: A cardiovascular and circulatory system (CVS) model has already been validated in silico, and in several animal model studies. It accounts for valve dynamics using Heaviside functions to simulate a physiologically accurate "open on pressure, close on flow" law. However, it does not consider real-time valve opening dynamics and therefore does not fully capture valve dysfunction, particularly where the dysfunction involves partial closure. This research describes an updated version of this previous closed-loop CVS model that includes the progressive opening of the mitral valve, and is defined over the full cardiac cycle. Results: Simulations of the cardiovascular system with a healthy mitral valve are performed, and the global hemodynamic behaviour is studied and compared with previously validated results. The error between the resulting pressure-volume (PV) loops of the already validated CVS model and the new CVS model that includes the progressive opening of the mitral valve is assessed and remains within typical measurement error and variability. Simulations of ischemic mitral insufficiency are also performed. Pressure-volume loops, transmitral flow evolution and mitral valve aperture area evolution follow reported measurements in shape, amplitude and trends. Conclusions: The resulting cardiovascular system model including mitral valve dynamics provides a foundation for clinical validation and the study of valvular dysfunction in vivo. The overall models and results could readily be generalised to other cardiac valves. PMID:21942971
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI) based on a new approach that limits its interest to the cases in which the IUB's IOL and PCI's IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgeries at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lens. Postoperative refractive errors were measured in the fifth week. More accurate calculation was based on significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The (MAEs, RMSEs) were smaller for PCI at (0.5106 diopter (D), 0.6037 D) than for IUB at (0.7000 D, 0.8062 D). The higher accuracy was principally contributed from negative errors, i.e., myopia. The MAEs and RMSEs for (IUB, PCI)'s negative errors were (0.7955 D, 0.5185 D) and (0.8562 D, 0.5853 D). Their differences were significant. Of the PCI errors, 72.34% fell within the clinically accepted range of ±0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
Ye, Min; Nagar, Swati; Korzekwa, Ken
2015-01-01
Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057
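For readers unfamiliar with the well-stirred liver model mentioned above, a hedged sketch of the standard relation is shown below; it is not the authors' code, and the hepatic blood flow and intrinsic clearance values are placeholders.

```python
def well_stirred_hepatic_clearance(q_h, fu_p, cl_int, r_bp=1.0):
    """Well-stirred liver model: CLh = Qh * fu_b * CLint / (Qh + fu_b * CLint).

    q_h    : hepatic blood flow (L/h)
    fu_p   : fraction unbound in plasma
    cl_int : intrinsic clearance (L/h)
    r_bp   : blood-to-plasma concentration ratio (fu_b = fu_p / r_bp)
    """
    fu_b = fu_p / r_bp
    return q_h * fu_b * cl_int / (q_h + fu_b * cl_int)

# Placeholder values for a highly bound drug (fu_p = 1%)
cl_h = well_stirred_hepatic_clearance(q_h=90.0, fu_p=0.01, cl_int=500.0)
print(f"predicted hepatic clearance: {cl_h:.2f} L/h")

# A +20% error in fu_p propagates almost proportionally when fu_b*CLint << Qh,
# consistent with the abstract's point about low-clearance compounds.
cl_h_err = well_stirred_hepatic_clearance(q_h=90.0, fu_p=0.012, cl_int=500.0)
print(f"with +20% fup error: {cl_h_err:.2f} L/h")
```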
Bulemela, E; Tremaine, Peter R
2008-05-08
Apparent molar volumes of dilute aqueous solutions of monoethanolamine (MEA), diethanolamine (DEA), triethanolamine (TEA), N,N-dimethylethanolamine (DMEA), ethylethanolamine (EAE), 2-diethylethanolamine (2-DEEA), and 3-methoxypropylamine (3-MPA) and their salts were measured at temperatures from 150 to 325 degrees C and pressures as high as 15 MPa. The results were corrected for the ionization and used to obtain the standard partial molar volumes, Vo2. A three-parameter equation of state was used to describe the temperature and pressure dependence of the standard partial molar volumes. The fitting parameters were successfully divided into functional group contributions at all temperatures to obtain the standard partial molar volume contributions. Including literature results for alcohols, carboxylic acids, and hydroxycarboxylic acids yielded the standard partial molar volume contributions of the functional groups >CH-, >CH2, -CH3, -OH, -COOH, -O-, -->N, >NH, -NH2, -COO-Na+, -NH3+Cl-, >NH2+Cl-, and -->NH+Cl- over the range (150 degrees C
Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles
2016-01-01
The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
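A hedged sketch of the kind of linear-quadratic, Poisson-based TCP calculation with a Monte Carlo sample of patient radiosensitivities that the comparison above relies on; all parameter values (alpha, alpha/beta ratio, clonogen density, fractionation, miss geometry) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def tcp_poisson_lq(dose_per_voxel, n_fractions, alpha, alpha_beta,
                   clonogen_density, voxel_volume):
    """Poisson TCP from the linear-quadratic model for a (possibly shifted) dose array.

    dose_per_voxel : total dose received by each tumour voxel (Gy)
    """
    d = dose_per_voxel / n_fractions                       # dose per fraction
    sf = np.exp(-alpha * dose_per_voxel * (1.0 + d / alpha_beta))
    surviving = clonogen_density * voxel_volume * sf
    return np.exp(-np.sum(surviving))

# Illustrative target: 1000 voxels of 0.001 cm^3, planned to 70 Gy in 35 fractions,
# with a geometric miss pushing 10% of the voxels into a 50% dose region.
planned = np.full(1000, 70.0)
missed = planned.copy()
missed[:100] = 35.0

# Population-mean TCP: average over sampled alpha values (inter-patient variability)
alphas = rng.normal(0.3, 0.06, size=2000).clip(min=0.05)
for name, dose in [("planned", planned), ("missed", missed)]:
    tcps = [tcp_poisson_lq(dose, 35, a, 10.0, 1e7, 0.001) for a in alphas]
    print(name, round(float(np.mean(tcps)), 3))
```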
TU-AB-202-03: Prediction of PET Transfer Uncertainty by DIR Error Estimating Software, AUTODIRECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Phillips, J
2016-06-15
Purpose: Deformable image registration (DIR) is a powerful tool, but DIR errors can adversely affect its clinical applications. To estimate voxel-specific DIR uncertainty, a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), has been developed and validated. This work tests the ability of this software to predict uncertainty for the transfer of standard uptake values (SUV) from positron-emission tomography (PET) with DIR. Methods: Virtual phantoms are used for this study. Each phantom has a planning computed tomography (CT) image and a diagnostic PET-CT image set. A deformation was digitally applied to the diagnostic CT to create the planning CT image and establish a known deformation between the images. One lung and three rectum patient datasets were employed to create the virtual phantoms. Both of these sites have difficult deformation scenarios associated with them, which can affect DIR accuracy (lung tissue sliding and changes in rectal filling). The virtual phantoms were created to simulate these scenarios by introducing discontinuities in the deformation field at the lung and rectum borders. The DIR algorithm from the Plastimatch software was applied to these phantoms. The SUV mapping errors from the DIR were then compared to those predicted by AUTODIRECT. Results: The SUV error distributions closely followed the AUTODIRECT-predicted error distribution for the 4 test cases. The minimum and maximum PET SUVs were produced from AUTODIRECT at the 95% confidence interval before applying gradient-based SUV segmentation for each of these volumes. Notably, 93.5% of the target volume warped by the true deformation was included within the AUTODIRECT-predicted maximum SUV volume after the segmentation, while 78.9% of the target volume was within the target volume warped by Plastimatch. Conclusion: The AUTODIRECT framework is able to predict PET transfer uncertainty caused by DIR, which enables an understanding of the associated target volume uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N
Purpose: In this study, we developed a system to calculate, in real time, a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head and neck and prostate volumetric modulated arc therapy (VMAT), without additional treatment planning system calculation. Methods: An original system, called Clarkson dose calculation based dosimetric error calculation, was developed in MATLAB (MathWorks, Natick, MA) to calculate the dosimetric error caused by leaf miscalibration. Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarged the aperture size by 1.0 mm. Second, the error-induced 3D dose was generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, between our method and the TPS-calculated error-induced 3D dose, the 3D gamma passing rates (0.5%/2 mm, global) were 97.6±0.6% and 98.0±0.4%. The dose percentage changes in the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and in the generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.
Micro CT based truth estimation of nodule volume
NASA Astrophysics Data System (ADS)
Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.
2010-03-01
With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
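As a hedged illustration (not the paper's own estimators), a multiplicative error model y_i = f_i(x)(1 + e_i) can be handled by iteratively reweighted least squares in which each observation's weight is inversely proportional to the square of its predicted value; the linear model and noise level below are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear model with multiplicative noise: y = (A @ x_true) * (1 + e)
A = np.column_stack([np.ones(200), rng.uniform(0.0, 10.0, 200)])
x_true = np.array([5.0, 2.0])
y = (A @ x_true) * (1.0 + 0.05 * rng.standard_normal(200))

# Ordinary LS ignores the error structure; IRLS weights by 1 / (A @ x)^2
x_ols, *_ = np.linalg.lstsq(A, y, rcond=None)
x = x_ols.copy()
for _ in range(10):
    w = 1.0 / (A @ x) ** 2                 # weights for multiplicative errors
    W = np.diag(w)
    x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

print("OLS estimate :", np.round(x_ols, 3))
print("IRLS estimate:", np.round(x, 3))
```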
CT volumetry of the skeletal tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brindle, James M.; Alexandre Trindade, A.; Pichardo, Jose C.
2006-10-15
Computed tomography (CT) is an important and widely used modality in the diagnosis and treatment of various cancers. In the field of molecular radiotherapy, the use of spongiosa volume (combined tissues of the bone marrow and bone trabeculae) has been suggested as a means to improve the patient-specificity of bone marrow dose estimates. The noninvasive estimation of an organ volume comes with some degree of error or variation from the true organ volume. The present study explores the ability to obtain estimates of spongiosa volume or its surrogate via manual image segmentation. The variation among different segmentation raters was explored and found not to be statistically significant (p value >0.05). Accuracy was assessed by having several raters manually segment a polyvinyl chloride (PVC) pipe with known volumes. Segmentation of the outer region of the PVC pipe resulted in mean percent errors as great as 15%, while segmentation of the pipe's inner region resulted in mean percent errors within approximately 5%. Differences between volumes estimated with the high-resolution CT data set (typical of ex vivo skeletal scans) and the low-resolution CT data set (typical of in vivo skeletal scans) were also explored using both patient CT images and a PVC pipe phantom. While a statistically significant difference (p value <0.002) between the high-resolution and low-resolution data sets was observed with excised femoral heads obtained following total hip arthroplasty, the mean difference between high-resolution and low-resolution data sets was found to be only 1.24 and 2.18 cm³ for spongiosa and cortical bone, respectively. With respect to differences observed with the PVC pipe, the variation between the high-resolution and low-resolution mean percent errors was as high as approximately 20% for the outer region volume estimates and only as high as approximately 6% for the inner region volume estimates. The findings from this study suggest that manual segmentation is a reasonably accurate and reliable means for the in vivo estimation of spongiosa volume. This work also provides a foundation for future studies where spongiosa volumes are estimated by various raters in more comprehensive CT data sets.
Quantification of tumor mobility during the breathing cycle using 3D dynamic MRI
NASA Astrophysics Data System (ADS)
Schoebinger, Max; Plathow, Christian; Wolf, Ivo; Kauczor, Hans-Ulrich; Meinzer, Hans-Peter
2006-03-01
Respiration causes movement and shape changes in thoracic tumors, which has a direct influence on the radiotherapy planning process. Current methods for the estimation of tumor mobility are either two-dimensional (fluoroscopy, 2D dynamic MRI) or based on radiation (3D (+t) CT, implanted gold markers). With current advances in dynamic MRI acquisition, 3D+t image sequences of the thorax can be acquired covering the thorax over the whole breathing cycle. In this work, methods are presented for the interactive segmentation of tumors in dynamic images, the calculation of tumor trajectories, dynamic tumor volumetry and dynamic tumor rotation/deformation based on 3D dynamic MRI. For the volumetry calculation, a set of 21 related partial volume correcting volumetry algorithms was evaluated based on tumor surrogates. Conventional volumetry based on voxel counting yielded a root mean square error of 29%, compared to a root mean square error of 11% achieved by the best-performing algorithm among the different volumetry methods. The new workflow has been applied to a set of 26 patients. Preliminary results indicate that 3D dynamic MRI reveals important aspects of tumor behavior during the breathing cycle. This might make it possible to further improve high-precision radiotherapy techniques.
Partial null astigmatism-compensated interferometry for a concave freeform Zernike mirror
NASA Astrophysics Data System (ADS)
Dou, Yimeng; Yuan, Qun; Gao, Zhishan; Yin, Huimin; Chen, Lu; Yao, Yanxia; Cheng, Jinlong
2018-06-01
Partial null interferometry without using any null optics is proposed to measure a concave freeform Zernike mirror. Oblique incidence on the freeform mirror is used to compensate for astigmatism as the main component in its figure, and to constrain the divergence of the test beam as well. The phase demodulated from the partial nulled interferograms is divided into low-frequency phase and high-frequency phase by Zernike polynomial fitting. The low-frequency surface figure error of the freeform mirror represented by the coefficients of Zernike polynomials is reconstructed from the low-frequency phase, applying the reverse optimization reconstruction technology in the accurate model of the interferometric system. The high-frequency surface figure error of the freeform mirror is retrieved from the high-frequency phase adopting back propagating technology, according to the updated model in which the low-frequency surface figure error has been superimposed on the sag of the freeform mirror. Simulations verified that this method is capable of testing a wide variety of astigmatism-dominated freeform mirrors due to the high dynamic range. The experimental result using our proposed method for a concave freeform Zernike mirror is consistent with the null test result employing the computer-generated hologram.
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Mato Abad, Virginia; Quirós, Alicia; García-Álvarez, Roberto; Loureiro, Javier Pereira; Alvarez-Linera, Juan; Frank, Ana; Hernández-Tamames, Juan Antonio
2014-01-01
1H-MRS variability increases due to normal aging and also as a result of atrophy in grey and white matter caused by neurodegeneration. In this work, an automatic process was developed to integrate data from spectra and high-resolution anatomical images to quantify metabolites, taking into account tissue partial volumes within the voxel of interest and avoiding the additional spectra acquisitions usually required for partial volume correction. To evaluate this method, we used a cohort of 135 subjects (47 male and 88 female, aged between 57 and 99 years) classified into 4 groups: 38 healthy participants, 20 amnesic mild cognitive impairment patients, 22 multi-domain mild cognitive impairment patients, and 55 Alzheimer's disease patients. Our findings suggest that knowing the voxel composition of white and grey matter and cerebrospinal fluid is necessary to avoid partial volume variations in a single-voxel study and to decrease part of the variability found in metabolite quantification, particularly in those studies involving elderly patients and neurodegenerative diseases. The proposed method facilitates the use of 1H-MRS techniques in statistical studies in Alzheimer's disease, because it provides more accurate quantitative measurements, reduces the inter-subject variability, and improves statistical results when performing group comparisons.
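A hedged sketch of a simple single-voxel partial volume correction of the kind discussed above: metabolite estimates are rescaled using the grey matter, white matter and CSF fractions inside the spectroscopy voxel, under the common assumption that CSF contributes water but negligible metabolite signal. This is a generic correction, not the authors' pipeline, and the tissue fractions below are invented.

```python
def csf_corrected_concentration(raw_conc, f_gm, f_wm, f_csf):
    """Rescale a metabolite concentration for CSF partial volume.

    Assumes the metabolite resides only in brain tissue (GM + WM), so the
    raw estimate is diluted by the CSF fraction of the voxel.
    """
    tissue_fraction = f_gm + f_wm
    if tissue_fraction <= 0.0:
        raise ValueError("voxel contains no brain tissue")
    return raw_conc / tissue_fraction

# Invented voxel composition from a co-registered segmentation
f_gm, f_wm, f_csf = 0.55, 0.25, 0.20
naa_raw = 7.1          # arbitrary institutional units
naa_corr = csf_corrected_concentration(naa_raw, f_gm, f_wm, f_csf)
print(f"raw NAA: {naa_raw:.2f}, CSF-corrected NAA: {naa_corr:.2f}")
```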
Dukart, Juergen; Bertolino, Alessandro
2014-01-01
Both functional and, more recently, resting state magnetic resonance imaging have become established tools to investigate functional brain networks. Most studies use these tools to compare different populations without controlling for potential differences in underlying brain structure which might affect the functional measurements of interest. Here, we adapt a simulation approach combined with evaluation of real resting state magnetic resonance imaging data to investigate the potential impact of partial volume effects on established functional and resting state magnetic resonance imaging analyses. We demonstrate that differences in the underlying structure lead to a significant increase in detected functional differences in both types of analyses. The largest increases in functional differences are observed for the highest signal-to-noise ratios and when signal with the lowest amount of partial volume effects is compared to any other partial volume effect constellation. In real data, structural information explains about 25% of the within-subject variance observed in degree centrality, an established resting state connectivity measurement. Controlling this measurement for structural information can substantially alter correlational maps obtained in group analyses. Our results question current approaches to evaluating these measurements in diseased populations with known structural changes without controlling for potential differences in these measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majewski, Wojciech, E-mail: wmajewski1@poczta.onet.p; Wesolowska, Iwona; Urbanczyk, Hubert
2009-12-01
Purpose: To estimate bladder movements and changes in dose distribution in the bladder and surrounding tissues associated with changes in bladder filling and to estimate the internal treatment margins. Methods and Materials: A total of 16 patients with bladder cancer underwent planning computed tomography scans with 80- and 150-mL bladder volumes. The bladder displacements associated with the change in volume were measured. Each patient had treatment plans constructed for a 'partially empty' (80 mL) and a 'partially full' (150 mL) bladder. An additional plan was constructed for tumor irradiation alone. A further nine patients underwent sequential weekly computed tomography scanning during radiotherapy to verify the bladder movements and estimate the internal margins. Results: Bladder movements were mainly observed cranially, and the estimated internal margins were nonuniform and largest (>2 cm) anteriorly and cranially. The dose distribution in the bladder worsened if the bladder increased in volume: 70% of patients (11 of 16) would have had the bladder underdosed to <95% of the prescribed dose. The dose distribution in the rectum and intestines was better with a 'partially empty' bladder (the volume that received >70%, 80%, and 90% of the prescribed dose was 23%, 20%, and 15% for the rectum and 162, 144, and 123 cm³ for the intestines, respectively) than with a 'partially full' bladder (the volume that received >70%, 80%, and 90% of the prescribed dose was 28%, 24%, and 18% for the rectum and 180, 158, and 136 cm³ for the intestines, respectively). The change in bladder filling during RT was significant for the dose distribution in the intestines. Tumor irradiation alone was significantly better than whole bladder irradiation in terms of organ sparing. Conclusion: The displacements of the bladder due to volume changes were mainly related to the upper wall. The internal margins should be nonuniform, with the largest margins cranially and anteriorly. The changes in bladder filling during RT could influence the dose distribution in the bladder and intestines. The dose distribution in the rectum and bowel was slightly better with a 'partially empty' than with a 'full' bladder.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briscoe, M; Ploquin, N; Voroney, JP
2015-06-15
Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold where rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, a Matlab code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in the original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes, were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index for patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: These data show that for elliptical or spherical targets, rotational patient set-up errors of less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or the conformity index of the plan. However, the same rotational errors would have an impact on plans for irregular tumours.
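A hedged Python sketch (the study used Matlab) of the core operation described above: rotating a 3D dose grid about the isocentre while the structure mask stays fixed, then re-evaluating coverage. The dose array, target mask and evaluation level are synthetic.

```python
import numpy as np
from scipy.ndimage import rotate

# Synthetic dose grid conformal to an elongated target centred on the isocentre
shape = (60, 60, 60)
z, y, x = np.indices(shape)
cz, cy, cx = np.array(shape) / 2.0
re = np.sqrt(((z - cz) / 6.0)**2 + ((y - cy) / 6.0)**2 + ((x - cx) / 16.0)**2)
dose = np.where(re < 1.0, 20.0, 20.0 * np.exp(-((re - 1.0) / 0.15)**2))  # Gy
target = re < 1.0                       # structure contour, kept fixed

def coverage(dose_grid, mask, threshold):
    """Fraction of the target receiving at least `threshold` Gy."""
    return float(np.mean(dose_grid[mask] >= threshold))

prescription = 16.0                     # coverage evaluated at the 80% dose level
print("planned coverage:", round(coverage(dose, target, prescription), 3))

# Simulate a 5 degree yaw set-up error: rotate the dose, keep the contour fixed
dose_rot = rotate(dose, angle=5.0, axes=(1, 2), reshape=False, order=1, mode="nearest")
print("coverage after 5 deg rotation:", round(coverage(dose_rot, target, prescription), 3))
```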
Onorbit IMU alignment error budget
NASA Technical Reports Server (NTRS)
Corson, R. W.
1980-01-01
The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test whether the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and a 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; at most 4% of any structure volume may have an absolute dose difference greater than 6%; and at most 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Out of the three QA methods tested, the single ion chamber performed the worst by detecting 4 out of 18 introduced errors, 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
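A hedged sketch of how per-structure statistics like those named above could be computed from planned and measured dose arrays restricted to a structure mask; the arrays, normalisation choice and tolerance values are illustrative, not the COMPASS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def structure_qa_stats(planned, measured, mask, prescription):
    """AADD- and ADD6-style statistics inside one structure.

    AADD : mean absolute dose difference, as a percent of the prescription
    ADD6 : percent of the structure volume whose absolute dose difference
           exceeds 6% of the prescription
    """
    diff_pct = 100.0 * np.abs(measured[mask] - planned[mask]) / prescription
    aadd = float(np.mean(diff_pct))
    add6 = 100.0 * float(np.mean(diff_pct > 6.0))
    return aadd, add6

# Synthetic planned/measured dose and a structure mask
planned = rng.uniform(60.0, 76.0, size=(40, 40, 40))
measured = planned + rng.normal(0.0, 1.2, size=planned.shape)
mask = np.zeros(planned.shape, dtype=bool)
mask[10:30, 10:30, 10:30] = True

aadd, add6 = structure_qa_stats(planned, measured, mask, prescription=76.0)
passes = (aadd <= 6.0) and (add6 <= 4.0)     # tolerance levels like the protocol's
print(f"AADD = {aadd:.2f}%, ADD6 = {add6:.2f}%, pass = {passes}")
```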
Minimizing finite-volume discretization errors on polyhedral meshes
NASA Astrophysics Data System (ADS)
Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian
2017-11-01
Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized, and CFD results for cases with known solutions are presented to assess the improvements the optimization approach can provide.
Roshani, G H; Karami, A; Salehizadeh, A; Nazemi, E
2017-11-01
The problem of how to precisely measure the volume fractions of oil-gas-water mixtures in a pipeline remains one of the main challenges in the petroleum industry. This paper reports the capability of Radial Basis Function (RBF) models in forecasting the volume fractions in a gas-oil-water multiphase system. Indeed, in the present research, the volume fractions in the annular three-phase flow are measured based on a dual-energy metering system including 152Eu and 137Cs sources and one NaI detector, and then modeled by an RBF model. Since the sum of the volume fractions is constant (equal to 100%), it is enough for the RBF model to forecast only two volume fractions. In this investigation, three RBF models are employed. The first model is used to forecast the oil and water volume fractions. The next one is utilized to forecast the water and gas volume fractions, and the last one to forecast the gas and oil volume fractions. In the next stage, the numerical data obtained from the MCNP-X code are introduced to the RBF models. Then, the average errors of these three models are calculated and compared. The model which has the least error is picked as the best predictive model. Based on the results, the best RBF model forecasts the oil and water volume fractions with a mean relative error of less than 0.5%, which indicates that the RBF model introduced in this study provides an effective mechanism to forecast the results. Copyright © 2017 Elsevier Ltd. All rights reserved.
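As a hedged sketch of the RBF modelling step described above (not the authors' code), detector count features can be mapped to two volume fractions with an RBF interpolator and the third fraction recovered by closure; the training data here are synthetic stand-ins for the MCNP-X results.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Synthetic training set: detector features -> (oil, water) volume fractions.
# In the study these features would come from the dual-energy simulations.
n = 300
oil = rng.uniform(0.0, 1.0, n)
water = rng.uniform(0.0, 1.0 - oil)
fractions = np.column_stack([oil, water])                 # targets in [0, 1]
gas = 1.0 - oil - water
features = np.column_stack([                              # made-up detector responses
    2.0 * oil + 1.2 * water + 0.3 * gas,
    0.8 * oil + 1.9 * water + 0.4 * gas,
]) + 0.01 * rng.standard_normal((n, 2))

model = RBFInterpolator(features, fractions, kernel="thin_plate_spline", smoothing=1e-3)

# Predict on a held-out point and recover the gas fraction by closure
test_feat = np.array([[2.0 * 0.35 + 1.2 * 0.40 + 0.3 * 0.25,
                       0.8 * 0.35 + 1.9 * 0.40 + 0.4 * 0.25]])
oil_pred, water_pred = model(test_feat)[0]
gas_pred = 1.0 - oil_pred - water_pred
print(f"predicted oil/water/gas: {oil_pred:.3f}/{water_pred:.3f}/{gas_pred:.3f}")
```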
Assessment of volume and leak measurements during CPAP using a neonatal lung model.
Fischer, H S; Roehr, C C; Proquitté, H; Wauer, R R; Schmalisch, G
2008-01-01
Although several commercial devices are available which allow tidal volume and air leak monitoring during continuous positive airway pressure (CPAP) in neonates, little is known about their measurement accuracy and about the influence of air leaks on volume measurement. The aim of this in vitro study was the validation of volume and leak measurement under CPAP using a commercial ventilatory device, taking into consideration the clinical conditions in neonatology. The measurement accuracy of the Leoni ventilator (Heinen & Löwenstein, Germany) was investigated both in a leak-free system and with leaks simulated using calibration syringes (2-10 ml, 20-100 ml) and a mechanical lung model. Open tubes of variable lengths were connected for leak simulation. Leak flow was measured with the flow-through technique. In a leak-free system the mean relative volume error ± SD was 3.5 ± 2.6% (2-10 ml) and 5.9 ± 0.7% (20-60 ml), respectively. The influence of CPAP level, driving flow, respiratory rate and humidification of the breathing gas on the volume error was negligible. However, an increasing FiO2 caused the measured tidal volume to increase by up to 25% (FiO2 = 1.0). The relative error ± SD of the leak measurements was -0.2 ± 11.9%. For leaks >19%, the measured tidal volume was underestimated by more than 10%. In conclusion, the present in vitro study showed that the Leoni allowed accurate volume monitoring under CPAP conditions similar to those in neonates. Air leaks of up to 90% of patient flow were reliably detected. For an FiO2 >0.4 and for leaks >19%, a numerical correction of the displayed volume should be performed.
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Pérez, Rafael
2017-04-01
The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to (1) contribute to a better understanding of the drivers and magnitude of the uncertainty of gully erosion 2D surveys and (2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: (1) generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability); (2) simulation of field measurements characterised by a survey intensity and the precision of the measurement method; and (3) quantification of the volume error uncertainty as a function of the key factors. In this communication we present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy given the cross-sectional variability of a gully and the measurement method applied. References: Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
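A hedged sketch of the kind of Monte Carlo experiment outlined above: a synthetic cross-sectional area profile defines the "true" gully volume, sparse noisy surveys are simulated at different intensities, and the resulting volume errors are summarised. The profile shape, noise level and survey intensities are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def true_area_profile(length_m=100.0, n_dense=2001, variability=0.4):
    """Synthetic cross-sectional area (m^2) along the gully centreline."""
    s = np.linspace(0.0, length_m, n_dense)
    base = 1.5 + variability * np.sin(2 * np.pi * s / 37.0) \
               + 0.5 * variability * np.sin(2 * np.pi * s / 11.0 + 1.0)
    return s, np.clip(base, 0.05, None)

def surveyed_volume(s, area, n_sections, noise_cv, rng):
    """Volume from a sparse 2D survey: noisy areas at n_sections stations."""
    stations = np.linspace(s[0], s[-1], n_sections)
    a = np.interp(stations, s, area)
    a_meas = a * (1.0 + noise_cv * rng.standard_normal(n_sections))
    return np.trapz(a_meas, stations)

s, area = true_area_profile()
v_true = np.trapz(area, s)

# Monte Carlo: volume error vs survey intensity for a 5% area-measurement error
for n_sections in (5, 10, 20, 50):
    errs = [100.0 * (surveyed_volume(s, area, n_sections, 0.05, rng) - v_true) / v_true
            for _ in range(2000)]
    print(f"{n_sections:2d} sections: mean error {np.mean(errs):+5.2f}%, "
          f"SD {np.std(errs):.2f}%")
```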
Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods
2016-11-16
... proceeds from the determinant of the inverse Fisher information matrix, which is proportional to the global error volume. If a practitioner has a suitable ... design of statistical estimators (i.e. sensors), as their respective inverses act as lower bounds to the (co)variances of the subject estimator ...
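A hedged sketch of the D-optimality idea behind the record above: for a linear-Gaussian measurement model, the volume of the estimation error ellipsoid scales with the determinant of the inverse Fisher information, so candidate sensor subsets can be ranked by that determinant. The sensor response matrix and noise level below are synthetic, not from the report.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Linear measurement model y = H x + noise: 8 candidate sensors, 3 analytes
H = rng.normal(size=(8, 3))            # synthetic sensor sensitivities
noise_var = 0.1 ** 2

def log_error_volume(rows):
    """log det of the inverse Fisher information for a sensor subset.

    For y = Hx + e with iid Gaussian noise, F = H_s^T H_s / sigma^2 and the
    error ellipsoid volume is proportional to sqrt(det(F^{-1})).
    """
    Hs = H[list(rows)]
    F = Hs.T @ Hs / noise_var
    sign, logdet = np.linalg.slogdet(F)
    return np.inf if sign <= 0 else -logdet     # log det(F^{-1})

# Pick the best 4-sensor subset (smallest error volume = most informative)
best = min(combinations(range(8), 4), key=log_error_volume)
print("selected sensors:", best, "log error volume:", round(log_error_volume(best), 3))
```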
ERIC Educational Resources Information Center
Gundersen, Craig; Kreider, Brent
2008-01-01
Policymakers have been puzzled to observe that food stamp households appear more likely to be food insecure than observationally similar eligible nonparticipating households. We reexamine this issue allowing for nonclassical reporting errors in food stamp participation and food insecurity. Extending the literature on partially identified…
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
ERIC Educational Resources Information Center
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
Ye, Min; Nagar, Swati; Korzekwa, Ken
2016-04-01
Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0-t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.
Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C
2018-06-29
A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid- and near-infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
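A hedged sketch of one plausible reading of the penalization described above, in which the usual ridge penalty λI is replaced by a term built from the measurement error covariance matrix; this illustrates the idea only, is not the published ECPR algorithm, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)

def error_cov_penalized_regression(X, y, sigma_e, lam):
    """Penalized LS with an error-covariance-based penalty (illustrative).

    Solves b = argmin ||y - X b||^2 + lam * b^T Sigma_e b, i.e.
    b = (X^T X + lam * Sigma_e)^{-1} X^T y.
    """
    return np.linalg.solve(X.T @ X + lam * sigma_e, X.T @ y)

# Simulated spectra with correlated (non-iid) measurement noise
n, p = 60, 30
b_true = np.zeros(p); b_true[[3, 10, 22]] = [1.0, -0.7, 0.5]
lags = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
sigma_e = 0.02 * np.exp(-lags / 3.0)                     # error covariance matrix
X_clean = rng.normal(size=(n, p)) @ np.linalg.cholesky(np.exp(-lags / 5.0))
X = X_clean + rng.multivariate_normal(np.zeros(p), sigma_e, size=n)
y = X_clean @ b_true + 0.05 * rng.standard_normal(n)

b_ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(p), X.T @ y)
b_ecpr = error_cov_penalized_regression(X, y, sigma_e, lam=50.0)
for name, b in [("ridge", b_ridge), ("ECM-penalized", b_ecpr)]:
    print(name, "coefficient RMSE:", round(float(np.sqrt(np.mean((b - b_true) ** 2))), 4))
```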
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.
2012-04-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
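A hedged sketch of the kind of surrogate-to-tumor inferential modelling compared above, using scikit-learn's PLSRegression and LinearRegression on synthetic marker and tumor traces; it is not the Synchrony algorithm, and the motion model and noise levels are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Synthetic breathing traces: 3 external markers (x,y,z each) and a 3D tumor position
t = np.linspace(0.0, 120.0, 1200)                      # seconds
phase = np.sin(2 * np.pi * t / 4.0)                    # ~4 s breathing period
markers = np.column_stack([a * phase for a in (5.0, 4.0, 6.0, 2.0, 3.0, 1.5, 4.5, 2.5, 3.5)])
markers += 0.3 * rng.standard_normal(markers.shape)    # marker measurement noise (mm)
tumor = np.column_stack([8.0 * phase, 3.0 * phase, 1.0 * phase])
tumor += 0.3 * rng.standard_normal(tumor.shape)        # tumor localization noise (mm)

# Train on the first 30 s of the fraction, predict the remainder
train = t < 30.0
models = {"OLS": LinearRegression(), "PLS": PLSRegression(n_components=3)}
for name, model in models.items():
    model.fit(markers[train], tumor[train])
    pred = model.predict(markers[~train])
    rmse = np.sqrt(np.mean(np.sum((pred - tumor[~train]) ** 2, axis=1)))
    print(f"{name}: 3D RMSE = {rmse:.2f} mm")
```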
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecule solvation free-energies, while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free-energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, with a computer cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated with a viewpoint related to spectral unmixing as used in hyperspectral imaging. The main contribution of this paper is twofold. It firstly proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition allows to appropriately set the imaging sequence parameters. Secondly, an efficient proportion quantification algorithm based on the minimisation of a penalised least-square criterion incorporating a regularity constraint on the spatial distribution of the proportions is proposed. Furthermore, the resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
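A hedged sketch of the unmixing-style proportion estimate discussed above: for each voxel, tissue proportions are obtained by least squares against per-tissue signatures across the multi-contrast acquisitions, under non-negativity and sum-to-one constraints (the paper's spatial regularisation term is omitted here). The signatures and data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Per-tissue signal "signatures" across 4 hypothetical MRI contrasts
S = np.array([[0.95, 0.30, 0.55, 0.20],    # tissue A
              [0.60, 0.80, 0.35, 0.45],    # tissue B
              [0.15, 0.25, 0.90, 0.70]]).T # -> shape (contrasts, tissues)

def unmix_voxel(y, S):
    """Proportions p >= 0 with sum(p) = 1 minimising ||y - S p||^2 for one voxel."""
    k = S.shape[1]
    res = minimize(lambda p: np.sum((y - S @ p) ** 2),
                   x0=np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}],
                   method="SLSQP")
    return res.x

# Synthetic voxel containing 60% A, 30% B, 10% C plus acquisition noise
p_true = np.array([0.6, 0.3, 0.1])
y = S @ p_true + 0.01 * rng.standard_normal(S.shape[0])
print("estimated proportions:", np.round(unmix_voxel(y, S), 3))
```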
Hess, Glen W.
2002-01-01
Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
Development of a Methodology to Optimally Allocate Visual Inspection Time
1989-06-01
Model and then takes into account the costs of the errors. The purpose of the Alternative Model is to not make costly mistakes while meeting the ... worker error, the probability of inspector error, and the cost of system error. Paired comparisons of error phenomena from operational personnel are ...
A Bibliography of Recreational Mathematics, Volume 1. Fourth Edition.
ERIC Educational Resources Information Center
Schaaf, William L.
This book is a partially annotated bibliography of books, articles, and periodicals concerned with mathematical games, puzzles, and amusements. It is a reprinting of Volume 1 of a three-volume series. This volume, originally published in 1955, treats problems and recreations which have been important in the history of mathematics as well as some…
Trends in refractive surgery at an academic center: 2007-2009.
Kuo, Irene C
2011-05-14
The United States officially entered a recession in December 2007, and it officially exited the recession in December 2009, according to the National Bureau of Economic Research. Since the economy may affect not only the volume of excimer laser refractive surgery, but also the clinical characteristics of patients undergoing surgery, our goal was to compare the characteristics of patients completing excimer laser refractive surgery and the types of procedures performed in the summer quarter in 2007 and the same quarter in 2009 at an academic center. A secondary goal was to determine whether the volume of astigmatism- or presbyopia-correcting intraocular lenses (IOLs) has concurrently changed because like laser refractive surgery, these "premium" IOLs involve out-of-pocket costs for patients. Retrospective case series. Medical records were reviewed for all patients completing surgery at the Wilmer Laser Vision Center in the summer quarter of 2007 and the summer quarter of 2009. Outcome measures were the proportions of treated refractive errors, the proportion of photorefractive keratectomy (PRK) vs. laser-assisted in-situ keratomileusis (LASIK), and the mean age of patients in each quarter. Chi-square test was used to compare the proportions of treated refractive errors and the proportions of procedures; two-tailed t-test to compare the mean age of patients; and two-tailed z-test to compare proportions of grouped refractive errors in 2007 vs. 2009; alpha = 0.05 for all tests. Refractive errors were grouped by the spherical equivalent of the manifest refraction and were considered "low myopia" for 6 diopters (D) of myopia or less, "high myopia" for more than 6 D, and "hyperopia" for any hyperopia. Billing data were reviewed to obtain the volume of premium IOLs. Volume of laser refractive procedures decreased by at least 30%. The distribution of proportions of treated refractive errors did not change (p = 0.10). The proportion of high myopes, however, decreased (p = 0.05). The proportions of types of procedure changed, with an increase in the proportion of PRK between 2007 and 2009 (p = 0.02). The mean age of patients did not change [42.4 ± 14.4 (standard deviation) years in 2007 vs. 39.6 ± 14.5 years in 2009; p = 0.4]. Astigmatism-correcting IOL and presbyopia-correcting IOL volumes increased 15-fold and three-fold, respectively, between 2007 and 2009. Volume of excimer laser refractive surgery decreased by at least 30% between 2007 and 2009. No significant change in mean age or in the distribution of refractive error was seen, although the proportion of high myopes decreased between summer quarters of 2007 and 2009. PRK gained as a proportion of total cases. Premium IOL volume increased, but still comprised a very small proportion of total IOL volume.
Trends in refractive surgery at an academic center: 2007-2009
2011-01-01
Background The United States officially entered a recession in December 2007, and it officially exited the recession in December 2009, according to the National Bureau of Economic Research. Since the economy may affect not only the volume of excimer laser refractive surgery, but also the clinical characteristics of patients undergoing surgery, our goal was to compare the characteristics of patients completing excimer laser refractive surgery and the types of procedures performed in the summer quarter in 2007 and the same quarter in 2009 at an academic center. A secondary goal was to determine whether the volume of astigmatism- or presbyopia-correcting intraocular lenses (IOLs) has concurrently changed because like laser refractive surgery, these "premium" IOLs involve out-of-pocket costs for patients. Methods Retrospective case series. Medical records were reviewed for all patients completing surgery at the Wilmer Laser Vision Center in the summer quarter of 2007 and the summer quarter of 2009. Outcome measures were the proportions of treated refractive errors, the proportion of photorefractive keratectomy (PRK) vs. laser-assisted in-situ keratomileusis (LASIK), and the mean age of patients in each quarter. Chi-square test was used to compare the proportions of treated refractive errors and the proportions of procedures; two-tailed t-test to compare the mean age of patients; and two-tailed z-test to compare proportions of grouped refractive errors in 2007 vs. 2009; alpha = 0.05 for all tests. Refractive errors were grouped by the spherical equivalent of the manifest refraction and were considered "low myopia" for 6 diopters (D) of myopia or less, "high myopia" for more than 6 D, and "hyperopia" for any hyperopia. Billing data were reviewed to obtain the volume of premium IOLs. Results Volume of laser refractive procedures decreased by at least 30%. The distribution of proportions of treated refractive errors did not change (p = 0.10). The proportion of high myopes, however, decreased (p = 0.05). The proportions of types of procedure changed, with an increase in the proportion of PRK between 2007 and 2009 (p = 0.02). The mean age of patients did not change [42.4 ± 14.4 (standard deviation) years in 2007 vs. 39.6 ± 14.5 years in 2009; p = 0.4]. Astigmatism-correcting IOL and presbyopia-correcting IOL volumes increased 15-fold and three-fold, respectively, between 2007 and 2009. Conclusions Volume of excimer laser refractive surgery decreased by at least 30% between 2007 and 2009. No significant change in mean age or in the distribution of refractive error was seen, although the proportion of high myopes decreased between summer quarters of 2007 and 2009. PRK gained as a proportion of total cases. Premium IOL volume increased, but still comprised a very small proportion of total IOL volume. PMID:21569564
Limitations of the planning organ at risk volume (PRV) concept.
Stroom, Joep C; Heijmen, Ben J M
2006-09-01
Previously, we determined a planning target volume (PTV) margin recipe for geometrical errors in radiotherapy equal to M(T) = 2 Sigma + 0.7 sigma, with Sigma and sigma standard deviations describing systematic and random errors, respectively. In this paper, we investigated margins for organs at risk (OAR), yielding the so-called planning organ at risk volume (PRV). For critical organs with a maximum dose (D(max)) constraint, we calculated margins such that D(max) in the PRV is equal to the motion averaged D(max) in the (moving) clinical target volume (CTV). We studied margins for the spinal cord in 10 head-and-neck cases and 10 lung cases, each with two different clinical plans. For critical organs with a dose-volume constraint, we also investigated whether a margin recipe was feasible. For the 20 spinal cords considered, the average margin recipe found was: M(R) = 1.6 Sigma + 0.2 sigma with variations for systematic and random errors of 1.2 Sigma to 1.8 Sigma and -0.2 sigma to 0.6 sigma, respectively. The variations were due to differences in shape and position of the dose distributions with respect to the cords. The recipe also depended significantly on the volume definition of D(max). For critical organs with a dose-volume constraint, the PRV concept appears even less useful because a margin around, e.g., the rectum changes the volume in such a manner that dose-volume constraints stop making sense. The concept of PRV for planning of radiotherapy is of limited use. Therefore, alternative ways should be developed to include geometric uncertainties of OARs in radiotherapy planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Jeff, E-mail: jmeye3@utsouthwestern.ed; Bluett, Jaques; Amos, Richard
Purpose: Conventional proton therapy with passively scattered beams is used to treat a number of tumor sites, including prostate cancer. Spot scanning proton therapy is a treatment delivery technique that improves conformal coverage of the clinical target volume (CTV). Placement of individual spots within a target is dependent on traversed tissue density. Errors in patient alignment perturb dose distributions. Moreover, there is a need for a rational planning approach that can mitigate the dosimetric effect of random alignment errors. We propose a treatment planning approach and then analyze the consequences of various simulated alignment errors on prostate treatments. Methods and Materials: Ten control patients with localized prostate cancer underwent treatment planning for spot scanning proton therapy. After delineation of the clinical target volume, a scanning target volume (STV) was created to guide dose coverage. Errors in patient alignment in two axes (rotational and yaw) as well as translational errors in the anteroposterior direction were then simulated, and doses to the CTV and normal tissues were reanalyzed. Results: Coverage of the CTV remained high even in the setting of extreme rotational and yaw misalignments. Changes in the rectum and bladder V45 and V70 were similarly minimal, except in the case of translational errors, where, as a result of opposed lateral beam arrangements, much larger dosimetric perturbations were observed. Conclusions: The concept of the STV as applied to spot scanning radiation therapy and as presented in this report leads to robust coverage of the CTV even in the setting of extreme patient misalignments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, W; Yang, H; Wang, Y
2014-06-01
Purpose: To investigate the impact of different clipbox volumes with automated registration techniques using commercially available software with on-board volumetric imaging (OBI) for treatment verification in cervical cancer patients. Methods: Fifty cervical cancer patients who received daily CBCT scans (on-board imaging v1.5 system, Varian Medical Systems) during the first treatment week and weekly thereafter were included in this analysis. A total of 450 CBCT scans were registered to the planning CT scan using a pelvic clipbox (clipbox-Pelvic) and a clipbox around the PTV (clipbox-PTV). The translational (anterior-posterior, left-right, superior-inferior) and rotational (yaw, pitch and roll) errors for each match were recorded. The setup errors and the systematic and random errors for both clipboxes were calculated. A paired-samples t test was used to analyze the differences between clipbox-Pelvic and clipbox-PTV. Results: The SD of systematic error (σ) was 1.0 mm, 2.0 mm, 3.2 mm and 1.9 mm, 2.3 mm, 3.0 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. The average random error (Σ) was 1.7 mm, 2.0 mm, 4.2 mm and 1.7 mm, 3.4 mm, 4.4 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. However, only the SI direction showed significant differences between the two image registration volumes (p = 0.002 and p = 0.01 for the mean and SD, respectively). For rotations, the yaw mean/SD and the pitch SD differed significantly between clipbox-Pelvic and clipbox-PTV. Conclusion: The volume defined for image registration is important for cervical cancer when 3D/3D matching is used. The alignment clipbox can affect the setup errors obtained. Further analysis is needed to determine the optimal defined volume for image registration in cervical cancer. Conflict of interest: none.
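As a sketch of how group systematic and random setup errors of this kind are conventionally summarized from per-patient registration shifts (the abstract does not give its exact formulas, so this convention is an assumption), with hypothetical shift data:

    # Conventional summary of setup errors from per-patient registration shifts
    # in one direction: Sigma = SD of the per-patient mean shifts (systematic),
    # sigma = root-mean-square of the per-patient SDs (random).
    import numpy as np

    def setup_error_summary(shifts_per_patient):
        """shifts_per_patient: list of 1-D arrays of shifts (mm), one per patient."""
        means = np.array([s.mean() for s in shifts_per_patient])
        sds = np.array([s.std(ddof=1) for s in shifts_per_patient])
        M = means.mean()                       # overall mean (group systematic) error
        Sigma = means.std(ddof=1)              # SD of systematic errors
        sigma = np.sqrt(np.mean(sds ** 2))     # random error
        return M, Sigma, sigma

    shifts = [np.array([1.2, 0.8, 1.5, 0.9]),      # patient 1 SI shifts (mm)
              np.array([-0.5, 0.2, -0.1, 0.4]),    # patient 2
              np.array([2.0, 2.6, 1.8, 2.2])]      # patient 3
    M, Sigma, sigma = setup_error_summary(shifts)
    print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")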
Niccum, Brittany A; Lee, Heewook; MohammedIsmail, Wazim; Tang, Haixu; Foster, Patricia L
2018-06-15
When the DNA polymerase that replicates the Escherichia coli chromosome, DNA Pol III, makes an error, there are two primary defenses against mutation: proofreading by the epsilon subunit of the holoenzyme and mismatch repair. In proofreading-deficient strains, mismatch repair is partially saturated and the cell's response to DNA damage, the SOS response, may be partially induced. To investigate the nature of replication errors, we used mutation accumulation experiments and whole genome sequencing to determine mutation rates and mutational spectra across the entire chromosome of strains deficient in proofreading, mismatch repair, and the SOS response. We report that a proofreading-deficient strain has a mutation rate 4,000-fold greater than wild-type strains. While the SOS response may be induced in these cells, it does not contribute to the mutational load. Inactivating mismatch repair in a proofreading-deficient strain increases the mutation rate another 1.5-fold. DNA polymerase has a bias for converting G:C to A:T base pairs, but proofreading reduces the impact of these mutations, helping to maintain the genomic G:C content. These findings give an unprecedented view of how polymerase and error-correction pathways work together to maintain E. coli's low mutation rate of 1 per thousand generations. Copyright © 2018, Genetics.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.
Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.
Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N
2011-04-15
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
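A hedged sketch of the kind of Poisson sampling calculation described here: the power to detect a discharge concentration above the standard, for a given sampled volume and a simple count threshold. The exact decision rule and type I/II error levels used in the paper may differ; the numbers are illustrative only.

    # Poisson sampling model for compliance testing: power to detect a discharge
    # whose true concentration exceeds the standard, given a sampled volume and
    # a simple count threshold. Numbers and the decision rule are illustrative.
    from scipy.stats import poisson

    def detection_power(true_conc, volume_m3, threshold_count):
        """P(observed count > threshold) when counts ~ Poisson(true_conc * volume)."""
        return poisson.sf(threshold_count, true_conc * volume_m3)

    standard = 10                 # organisms per m3 allowed by the discharge standard
    for vol in (1, 3, 7):         # sampled volumes in m3
        power = detection_power(true_conc=1.5 * standard, volume_m3=vol,
                                threshold_count=standard * vol)
        print(f"{vol} m3 sampled: power to detect 1.5x the standard = {power:.2f}")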
Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L
2016-04-01
Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and automated plaque classification methods and can be extended to other intravascular imaging modalities.
An analysis of the nucleon spectrum from lattice partially-quenched QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Armour; Allton, C. R.; Leinweber, Derek B.
2010-09-01
The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.
Transmitted wavefront error of a volume phase holographic grating at cryogenic temperature.
Lee, David; Taylor, Gordon D; Baillie, Thomas E C; Montgomery, David
2012-06-01
This paper describes the results of transmitted wavefront error (WFE) measurements on a volume phase holographic (VPH) grating operating at a temperature of 120 K. The VPH grating was mounted in a cryogenically compatible optical mount and tested in situ in a cryostat. The nominal root mean square (RMS) wavefront error at room temperature was 19 nm measured over a 50 mm diameter test aperture. The WFE remained at 18 nm RMS when the grating was cooled. This important result demonstrates that excellent WFE performance can be obtained with cooled VPH gratings, as required for use in future cryogenic infrared astronomical spectrometers planned for the European Extremely Large Telescope.
Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.
ERIC Educational Resources Information Center
Levine, Timothy R.; Hullett, Craig R.
2002-01-01
Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…
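For reference, the two effect-size definitions at issue, as a short sketch with hypothetical sums of squares: eta squared divides the effect sum of squares by the total, whereas partial eta squared divides it by effect plus error, which is what SPSS's GLM procedure reports.

    # eta squared = SS_effect / SS_total; partial eta squared = SS_effect /
    # (SS_effect + SS_error). Sums of squares below are hypothetical.
    def eta_squared(ss_effect, ss_total):
        return ss_effect / ss_total

    def partial_eta_squared(ss_effect, ss_error):
        return ss_effect / (ss_effect + ss_error)

    ss_effect, ss_error, ss_other_effects = 12.0, 48.0, 40.0
    ss_total = ss_effect + ss_error + ss_other_effects
    print(f"eta^2 = {eta_squared(ss_effect, ss_total):.2f}, "
          f"partial eta^2 = {partial_eta_squared(ss_effect, ss_error):.2f}")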
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobb, Eric, E-mail: eclobb2@gmail.com
2014-04-01
The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors and 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan
2018-02-01
The effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors was evaluated. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Kim, Hyun Keol; Montejo, Ludguier D; Jia, Jingfei; Hielscher, Andreas H
2017-06-01
We introduce here the finite volume formulation of the frequency-domain simplified spherical harmonics model with n-th order absorption coefficients (FD-SPN) that approximates the frequency-domain equation of radiative transfer (FD-ERT). We then present the FD-SPN based reconstruction algorithm that recovers absorption and scattering coefficients in biological tissue. The FD-SPN model with 3rd order absorption coefficient (i.e., FD-SP3) is used as a forward model to solve the inverse problem. The FD-SP3 is discretized with a node-centered finite volume scheme and solved with a restarted generalized minimum residual (GMRES) algorithm. The absorption and scattering coefficients are retrieved using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Finally, the forward and inverse algorithms are evaluated using numerical phantoms with optical properties and size that mimic small-volume tissue such as finger joints and small animals. The forward results show that the FD-SP3 model approximates the FD-ERT (S12) solution with relatively high accuracy; the average errors in the phase (<3.7%) and the amplitude (<7.1%) of the partial current at the boundary are reported. From the inverse results we find that the absorption and scattering coefficient maps are more accurately reconstructed with the SP3 model than with the SP1 model. Therefore, this work shows that the FD-SP3 is an efficient model for optical tomographic imaging of small-volume media with non-diffuse properties, both in terms of computational time and accuracy, as it requires significantly lower CPU time than the FD-ERT (S12) and is also more accurate than the FD-SP1.
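A highly simplified sketch of the inverse-problem structure described here: coefficients are recovered by minimizing a data-misfit objective with a limited-memory BFGS routine. The linear toy forward model and SciPy's L-BFGS-B optimizer are stand-ins and assumptions; the paper's FD-SP3 finite volume solver and L-BFGS implementation are not reproduced.

    # Toy inverse problem with the same structure: recover coefficients by
    # minimizing a data-misfit objective with a limited-memory BFGS method.
    # A random linear operator J stands in for the FD-SP3 forward solver.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_vox, n_meas = 20, 60
    J = rng.normal(size=(n_meas, 2 * n_vox))       # toy sensitivity (Jacobian) matrix
    x_true = np.concatenate([0.01 + 0.005 * rng.random(n_vox),   # "absorption"
                             1.0 + 0.2 * rng.random(n_vox)])     # "scattering"
    y_meas = J @ x_true + 1e-3 * rng.normal(size=n_meas)

    def objective(x):
        r = J @ x - y_meas
        return 0.5 * r @ r, J.T @ r                # misfit value and its gradient

    x0 = np.concatenate([np.full(n_vox, 0.01), np.full(n_vox, 1.0)])
    res = minimize(objective, x0, jac=True, method="L-BFGS-B")
    print("relative reconstruction error:",
          np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))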
NASA Technical Reports Server (NTRS)
Andersen, K. E.
1982-01-01
The format of high density tapes which contain partially processed LANDSAT 4 and LANDSAT D prime MSS image data is defined. This format is based on and is compatible with the existing format for partially processed LANDSAT 3 MSS image data HDTs.
Optimising 4-D surface change detection: an approach for capturing rockfall magnitude-frequency
NASA Astrophysics Data System (ADS)
Williams, Jack G.; Rosser, Nick J.; Hardy, Richard J.; Brain, Matthew J.; Afana, Ashraf A.
2018-02-01
We present a monitoring technique tailored to analysing change from near-continuously collected, high-resolution 3-D data. Our aim is to fully characterise geomorphological change typified by an event magnitude-frequency relationship that adheres to an inverse power law or similar. While recent advances in monitoring have enabled changes in volume across more than 7 orders of magnitude to be captured, event frequency is commonly assumed to be interchangeable with the time-averaged event numbers between successive surveys. Where events coincide, or coalesce, or where the mechanisms driving change are not spatially independent, apparent event frequency must be partially determined by survey interval. The data reported have been obtained from a permanently installed terrestrial laser scanner, which permits an increased frequency of surveys. Surveying from a single position raises challenges, given the single viewpoint onto a complex surface and the need for computational efficiency associated with handling a large time series of 3-D data. A workflow is presented that optimises the detection of change by filtering and aligning scans to improve repeatability. An adaptation of the M3C2 algorithm is used to detect 3-D change to overcome data inconsistencies between scans. Individual rockfall geometries are then extracted and the associated volumetric errors modelled. The utility of this approach is demonstrated using a dataset of ~9 × 10³ surveys acquired at ~1 h intervals over 10 months. The magnitude-frequency distribution of rockfall volumes generated is shown to be sensitive to monitoring frequency. Using a 1 h interval between surveys, rather than 30 days, the volume contribution from small (<0.1 m³) rockfalls increases from 67% to 98% of the total, and the number of individual rockfalls observed increases by over 3 orders of magnitude. High-frequency monitoring therefore holds considerable implications for magnitude-frequency derivatives, such as hazard return intervals and erosion rates. As such, while high-frequency monitoring has potential to describe short-term controls on geomorphological change and more realistic magnitude-frequency relationships, the assessment of longer-term erosion rates may be more suited to less-frequent data collection with lower accumulative errors.
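As an illustration of the inverse power-law magnitude-frequency form referred to above, a sketch that fits N(>V) = a·V^-b to a set of rockfall volumes by least squares in log-log space; the volumes are synthetic and the paper's own fitting choices may differ.

    # Fit an inverse power-law magnitude-frequency relation N(>V) = a * V**(-b)
    # to rockfall volumes by least squares in log-log space. Volumes are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    volumes = rng.pareto(a=1.1, size=5000) * 1e-3          # synthetic volumes (m^3)

    v_bins = np.logspace(np.log10(volumes.min()), np.log10(volumes.max()), 30)
    n_exceed = np.array([(volumes > v).sum() for v in v_bins])
    mask = n_exceed > 0                                     # drop empty bins

    slope, intercept = np.polyfit(np.log10(v_bins[mask]), np.log10(n_exceed[mask]), 1)
    print(f"power-law exponent b ~ {-slope:.2f}, prefactor a ~ {10**intercept:.1f}")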
Errors in fluid therapy in medical wards.
Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin
2012-04-01
Intravenous fluid therapy remains an essential part of patients' care during hospitalization. Only a few studies have focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. In a retrospective study, data related to intravenous fluid therapy were collected by two infectious diseases clinical pharmacists from 2008 to 2010. Intravenous fluid therapy information, including indication, type, volume and rate of fluid administration, was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on a literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume or rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period. The overall rate of fluid therapy errors was 1.3 errors per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in the medical wards. Improving the knowledge and attention of health-care workers regarding these errors is essential for preventing medication errors related to fluid therapy.
Yu, Isseki; Takayanagi, Masayoshi; Nagaoka, Masataka
2009-03-19
The partial molar volume (PMV) of the protein chymotrypsin inhibitor 2 (CI2) was calculated by all-atom MD simulation. Denatured CI2 showed almost the same average PMV value as that of native CI2. This is consistent with the experimental observation underlying the protein volume paradox. Furthermore, using the surficial Kirkwood-Buff approach, spatial distributions of PMV were analyzed as a function of the distance from the CI2 surface. The profiles of the new R-dependent PMV indicate that, in denatured CI2, the reduction in the solvent electrostatic interaction volume is canceled out mainly by an increment in thermal volume in the vicinity of its surface. In addition, the PMV of the denatured CI2 was found to increase in the region in which the number density of water atoms is minimum. These results provide a direct and detailed picture of the mechanism of the protein volume paradox suggested by Chalikian et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
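A minimal sketch of the multilevel variance-reduction idea described above: estimate the expected output from many cheap low-fidelity samples plus a correction term estimated from a few paired high-fidelity samples. The analytic "models" below are toy stand-ins for the reduced basis and HDG solvers and are purely illustrative.

    # Two-level variance-reduction estimate of E[Q]: many cheap low-fidelity
    # samples plus a correction from a few paired high-fidelity samples. The
    # analytic "models" below are toy stand-ins for PDE solvers.
    import numpy as np

    rng = np.random.default_rng(2)

    def q_low(z):    # cheap, slightly biased surrogate output
        return np.sin(z) + 0.05 * z

    def q_high(z):   # expensive, accurate output
        return np.sin(z) + 0.05 * z + 0.01 * np.cos(3 * z)

    N_low, N_high = 100_000, 500
    z_low, z_high = rng.normal(size=N_low), rng.normal(size=N_high)

    estimate = q_low(z_low).mean() + (q_high(z_high) - q_low(z_high)).mean()
    reference = q_high(rng.normal(size=2_000_000)).mean()
    print(f"two-level estimate = {estimate:.5f}, brute-force reference = {reference:.5f}")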
NASA Astrophysics Data System (ADS)
Guo, X.; Lange, R. A.; Ai, Y.
2010-12-01
FeO is an important component in magmatic liquids and yet its partial molar volume at one bar is not as well known as that for Fe2O3 because of the difficulty of performing double-bob density measurements under reducing conditions. Moreover, there is growing evidence from spectroscopic studies that Fe2+ occurs in 4, 5, and 6-fold coordination in silicate melts, and it is expected that the partial molar volume and compressibility of the FeO component will vary accordingly. We have conducted both density and relaxed sound speed measurements on four liquids in the An-Di-Hd (CaAl2Si2O8-CaMgSi2O6-CaFeSi2O6) system: (1) Di-Hd (50:50), (2) An-Hd (50:50), (3) An-Di-Hd (33:33:33) and (4) Hd (100). Densities were measured between 1573 and 1838 K at one bar with the double-bob Archimedean method using molybdenum bobs and crucibles in a reducing gas (1%CO-99%Ar) environment. The sound speeds were measured under similar conditions with a frequency-sweep acoustic interferometer, and used to calculate isothermal compressibility. All the density data for the three multi-component (model basalt) liquids were combined with density data on SiO2-Al2O3-CaO-MgO-K2O-Na2O liquids (Lange, 1997) in a fit to a linear volume equation; the results lead to a partial molar volume (±1σ) for FeO of 11.7 ± 0.3 cm3/mol at 1723 K. This value is similar to that for crystalline FeO at 298 K (halite structure; 12.06 cm3/mol), which suggests an average Fe2+ coordination of ~6 in these model basalt compositions. In contrast, the fitted partial molar volume of FeO in pure hedenbergite liquid is 14.6 ± 0.3 cm3/mol at 1723 K, which is consistent with an average Fe2+ coordination of 4.3 derived from EXAFS spectroscopy (Rossano, 2000). Similarly, all the compressibility data for the three multi-component liquids were combined with compressibility data on SiO2-Al2O3-CaO-MgO liquids (Ai and Lange, 2008) in a fit to an ideal mixing model for melt compressibility; the results lead to a partial molar compressibility (±1σ) for FeO of 2.4 (±0.3) × 10^-2 GPa^-1 at 1723 K. In contrast, the compressibility of FeO in pure hedenbergite liquid is more than twice as large: 6.3 (±0.2) × 10^-2 GPa^-1. When these results are combined with density and sound speed data on CaO-FeO-SiO2 liquids at one bar (Guo et al., 2009), a systematic and linear variation between the partial molar volume and compressibility of the FeO component is obtained, which appears to track changes in the average Fe2+ coordination in these liquids. Therefore, the three most important conclusions of this study are: (1) ideal mixing of volume and compressibility does not occur for all FeO-bearing magmatic liquids, owing to changes in Fe2+ coordination, (2) the partial molar volume and compressibility of FeO vary linearly and systematically with Fe2+ coordination, and (3) ideal mixing of volume and compressibility does occur among the three mixed An-Di-Hd liquids, presumably because of a common, average Fe2+ coordination of ~6.
The role of gray and white matter segmentation in quantitative proton MR spectroscopic imaging.
Tal, Assaf; Kirov, Ivan I; Grossman, Robert I; Gonen, Oded
2012-12-01
Since the brain's gray matter (GM) and white matter (WM) metabolite concentrations differ, their partial volumes can vary the voxel's ¹H MR spectroscopy (¹H-MRS) signal, reducing sensitivity to changes. While single-voxel ¹H-MRS cannot differentiate between WM and GM signals, partial volume correction is feasible by MR spectroscopic imaging (MRSI) using segmentation of the MRI acquired for VOI placement. To determine the magnitude of this effect on metabolic quantification, we segmented a 1-mm³ resolution MRI into GM, WM and CSF masks that were co-registered with the MRSI grid to yield their partial volumes in approximately every 1 cm³ spectroscopic voxel. Each voxel then provided one equation with two unknowns: its i-th metabolite's GM and WM concentrations C(i)(GM) and C(i)(WM). With the voxels' GM and WM volumes as independent coefficients, the over-determined system of equations was solved for the globally averaged C(i)(GM) and C(i)(WM). Trading off local concentration differences offers three advantages: (i) higher sensitivity due to combined data from many voxels; (ii) improved specificity to WM versus GM changes; and (iii) reduced susceptibility to partial volume effects. These improvements made no additional demands on the protocol, measurement time or hardware. Applying this approach to 18 volunteers' 3D MRSI sets of 480 voxels each yielded N-acetylaspartate, creatine, choline and myo-inositol C(i)(GM) concentrations of 8.5 ± 0.7, 6.9 ± 0.6, 1.2 ± 0.2, and 5.3 ± 0.6 mM, respectively, and C(i)(WM) concentrations of 7.7 ± 0.6, 4.9 ± 0.5, 1.4 ± 0.1 and 4.4 ± 0.6 mM, respectively. We showed that unaccounted voxel WM or GM partial volume can vary absolute quantification by 5-10% (more for ratios), which can often double the sample size required to establish statistical significance. Copyright © 2012 John Wiley & Sons, Ltd.
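A sketch of the regression step described in this abstract: each voxel contributes one equation f_GM·C_GM + f_WM·C_WM = observed signal, and the overdetermined system is solved by least squares for the globally averaged concentrations. Partial volumes and signals below are synthetic, and CSF is ignored for simplicity.

    # Each voxel gives one equation f_GM*C_GM + f_WM*C_WM = observed signal;
    # the overdetermined system is solved by least squares for the globally
    # averaged C_GM and C_WM. Partial volumes and signals are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    n_vox = 480
    f_gm = rng.uniform(0.0, 1.0, n_vox)
    f_wm = 1.0 - f_gm                                  # CSF ignored for simplicity
    C_gm_true, C_wm_true = 8.5, 7.7                    # mM, e.g. N-acetylaspartate
    signal = f_gm * C_gm_true + f_wm * C_wm_true + rng.normal(0, 0.3, n_vox)

    A = np.column_stack([f_gm, f_wm])
    (c_gm, c_wm), *_ = np.linalg.lstsq(A, signal, rcond=None)
    print(f"estimated C_GM = {c_gm:.2f} mM, C_WM = {c_wm:.2f} mM")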
Estimating the densities of benzene-derived explosives using atomic volumes.
Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka
2018-02-09
The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula CaHbNcOd is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm3 for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: Application and reliability of average atom volume in the crystal density prediction of energetic compounds containing benzene ring.
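A generic sketch of the additive-volume idea behind such estimates: density = molar mass / (N_A × sum of atomic volumes). The per-atom volumes used below are rough placeholder values, not the averaged volumes fitted in the paper, so the output is only indicative.

    # Additive atomic-volume density estimate: rho = M / (N_A * sum(n_i * V_i)).
    # The per-atom volumes are rough placeholders, not the paper's fitted values.
    AVOGADRO = 6.02214076e23                                              # 1/mol
    ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}     # g/mol
    ATOMIC_VOLUME = {"C": 14.0, "H": 6.0, "N": 12.0, "O": 12.0}           # A^3, placeholders

    def estimated_density(formula):
        """formula: dict of element -> count, e.g. TNT = C7H5N3O6."""
        mass = sum(ATOMIC_MASS[el] * n for el, n in formula.items())      # g/mol
        volume = sum(ATOMIC_VOLUME[el] * n for el, n in formula.items())  # A^3 per molecule
        return mass / (AVOGADRO * volume * 1e-24)                         # g/cm^3

    tnt = {"C": 7, "H": 5, "N": 3, "O": 6}
    print(f"estimated TNT density ~ {estimated_density(tnt):.2f} g/cm^3")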
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
Servant, Mathieu; White, Corey; Montagnini, Anna; Burle, Borís
2015-07-15
Most decisions that we make build upon multiple streams of sensory evidence and control mechanisms are needed to filter out irrelevant information. Sequential sampling models of perceptual decision making have recently been enriched by attentional mechanisms that weight sensory evidence in a dynamic and goal-directed way. However, the framework retains the longstanding hypothesis that motor activity is engaged only once a decision threshold is reached. To probe latent assumptions of these models, neurophysiological indices are needed. Therefore, we collected behavioral and EMG data in the flanker task, a standard paradigm to investigate decisions about relevance. Although the models captured response time distributions and accuracy data, EMG analyses of response agonist muscles challenged the assumption of independence between decision and motor processes. Those analyses revealed covert incorrect EMG activity ("partial error") in a fraction of trials in which the correct response was finally given, providing intermediate states of evidence accumulation and response activation at the single-trial level. We extended the models by allowing motor activity to occur before a commitment to a choice and demonstrated that the proposed framework captured the rate, latency, and EMG surface of partial errors, along with the speed of the correction process. In return, EMG data provided strong constraints to discriminate between competing models that made similar behavioral predictions. Our study opens new theoretical and methodological avenues for understanding the links among decision making, cognitive control, and motor execution in humans. Sequential sampling models of perceptual decision making assume that sensory information is accumulated until a criterion quantity of evidence is obtained, from where the decision terminates in a choice and motor activity is engaged. The very existence of covert incorrect EMG activity ("partial error") during the evidence accumulation process challenges this longstanding assumption. In the present work, we use partial errors to better constrain sequential sampling models at the single-trial level. Copyright © 2015 the authors 0270-6474/15/3510371-15$15.00/0.
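For context, a basic two-boundary drift-diffusion accumulator, the standard sequential sampling model referred to here; it does not implement the authors' extension in which EMG/motor activity can begin before the decision threshold is reached, and the parameters are arbitrary.

    # Basic two-boundary drift-diffusion accumulator (standard sequential
    # sampling model); it does NOT include the authors' pre-threshold motor
    # activation. Parameters are arbitrary.
    import numpy as np

    def ddm_trial(drift=0.3, threshold=1.0, noise=1.0, dt=0.001, rng=None):
        rng = rng or np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < threshold:                      # accumulate until a boundary is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return ("correct" if x > 0 else "error"), t

    rng = np.random.default_rng(4)
    trials = [ddm_trial(rng=rng) for _ in range(2000)]
    accuracy = np.mean([choice == "correct" for choice, _ in trials])
    mean_dt = np.mean([t for _, t in trials])
    print(f"accuracy = {accuracy:.2f}, mean decision time = {mean_dt:.3f} s")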
Changes of pituitary gland volume in Kennedy disease.
Pieper, C C; Teismann, I K; Konrad, C; Heindel, W L; Schiffbauer, H
2013-12-01
Kennedy disease is a rare X-linked neurodegenerative disorder caused by a CAG repeat expansion in the first exon of the androgen-receptor gene. Apart from neurologic signs, this mutation can cause a partial androgen insensitivity syndrome with typical alterations of gonadotropic hormones produced by the pituitary gland. The aim of the present study was therefore to evaluate the impact of Kennedy disease on pituitary gland volume under the hypothesis that endocrinologic changes caused by partial androgen insensitivity may lead to morphologic changes (ie, hypertrophy) of the pituitary gland. Pituitary gland volume was measured in sagittal sections of 3D T1-weighted 3T-MR imaging data of 8 patients with genetically proven Kennedy disease and compared with 16 healthy age-matched control subjects by use of Multitracer by a blinded, experienced radiologist. The results were analyzed by a univariate ANOVA with total brain volume as a covariate. Furthermore, correlation and linear regression analyses were performed for pituitary volume, patient age, disease duration, and CAG repeat expansion length. Intraobserver reliability was evaluated by means of the Pearson correlation coefficient. Pituitary volume was significantly larger in patients with Kennedy disease (636 [±90] mm(3)) than in healthy control subjects (534 [±91] mm(3)) (P = .041). There was no significant difference in total brain volume (P = .379). Control subjects showed a significant decrease in volume with age (r = -0.712, P = .002), whereas there was a trend to increasing gland volume in patients with Kennedy disease (r = 0.443, P = .272). Gland volume correlated with CAG repeat expansion length in patients (r = 0.630, P = .047). The correlation coefficient for intraobserver reliability was 0.94 (P < .001). Patients with Kennedy disease showed a significantly higher pituitary volume that correlated with the CAG repeat expansion length. This could reflect hypertrophy as the result of elevated gonadotropic hormone secretion caused by the androgen receptor mutation with partial androgen insensitivity.
Possin, Katherine L; Chester, Serana K; Laluz, Victor; Bostrom, Alan; Rosen, Howard J; Miller, Bruce L; Kramer, Joel H
2012-09-01
On tests of design fluency, an examinee draws as many different designs as possible in a specified time limit while avoiding repetition. The neuroanatomical substrates and diagnostic group differences of design fluency repetition errors and total correct scores were examined in 110 individuals diagnosed with dementia, 53 with mild cognitive impairment (MCI), and 37 neurologically healthy controls. The errors correlated significantly with volumes in the right and left orbitofrontal cortex (OFC), the right and left superior frontal gyrus, the right inferior frontal gyrus, and the right striatum, but did not correlate with volumes in any parietal or temporal lobe regions. Regression analyses indicated that the lateral OFC may be particularly crucial for preventing these errors, even after excluding patients with behavioral variant frontotemporal dementia (bvFTD) from the analysis. Total correct correlated more diffusely with volumes in the right and left frontal and parietal cortex, the right temporal cortex, and the right striatum and thalamus. Patients diagnosed with bvFTD made significantly more repetition errors than patients diagnosed with MCI, Alzheimer's disease, semantic dementia, progressive supranuclear palsy, or corticobasal syndrome. In contrast, total correct design scores did not differentiate the dementia patients. These results highlight the frontal-anatomic specificity of design fluency repetitions. In addition, the results indicate that the propensity to make these errors supports the diagnosis of bvFTD. (JINS, 2012, 18, 1-11).
Water displacement leg volumetry in clinical studies - A discussion of error sources
2010-01-01
Background Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors of its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion Placebo controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine Search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). As the most relevant error source the conduct of volumetry was identified. Because the practical use of available equipment varies, volume differences of more than 300 mL - many times the size of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant for the specific study. Centres require a thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short term repeat measurements has to be ensured. PMID:20070899
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^-(d-1) error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke
2007-11-01
To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (epsilon), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (nu). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (epsilon = -21.64% +/- 8.23%) and lung tumor patient studies (epsilon = -20.31% +/- 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (epsilon = -5.13nu - 6.71, r(2) = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Gated CT imaging using a free-breathing respiration signal from flow-volume spirometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Souza, Warren D.; Kwok, Young; Deyoung, Chad
2005-12-15
Respiration-induced tumor motion is known to cause artifacts on free-breathing spiral CT images used in treatment planning. This leads to inaccurate delineation of target volumes on planning CT images. Flow-volume spirometry has been used previously for breath-holds during CT scans and radiation treatments using the active breathing control (ABC) system. We have developed a prototype by extending the flow-volume spirometer device to obtain gated CT scans using a PQ 5000 single-slice CT scanner. To test our prototype, we designed motion phantoms to compare image quality obtained with and without gated CT scan acquisition. Spiral and axial (nongated and gated) CT scans were obtained of phantoms with motion periods of 3-5 s and amplitudes of 0.5-2 cm. Errors observed in the volume estimate of these structures were as much as 30% with moving phantoms during CT simulation. Application of motion-gated CT with active breathing control reduced these errors to within 5%. Motion-gated CT was then implemented in patients and the results are presented for two clinical cases: lung and abdomen. In each case, gated scans were acquired at end-inhalation and end-exhalation, in addition to a conventional free-breathing (nongated) scan. The gated CT scans revealed reduced artifacts compared with the conventional free-breathing scan. Differences of up to 20% in the volume of the structures were observed between gated and free-breathing scans. A comparison of the overlap of structures between the gated and free-breathing scans revealed misalignment of the structures. These results demonstrate the ability of flow-volume spirometry to reduce errors in target volumes via gating during CT imaging.
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung
2018-01-01
We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, ellipsoidal formula, and Linskey's formula significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, allow for more accurate estimation of vestibular schwannoma volume than all one-component formulas.
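For reference, the standard one-component estimators named here, computed from three orthogonal tumor diameters; the exact definitions used in the paper may differ slightly, its two-component ice cream cone and Linskey formulas are not reproduced, and the diameters below are hypothetical.

    # One-component estimators from three orthogonal diameters A, B, C (cm):
    # cuboidal A*B*C, ellipsoidal pi*A*B*C/6, spherical pi*d^3/6 with d the
    # mean diameter. Diameters are hypothetical; 1 cm^3 = 1 mL.
    from math import pi

    def cuboidal(a, b, c):
        return a * b * c

    def ellipsoidal(a, b, c):
        return pi * a * b * c / 6.0

    def spherical(a, b, c):
        d = (a + b + c) / 3.0
        return pi * d ** 3 / 6.0

    a, b, c = 2.1, 1.8, 1.5   # cm
    for name, fn in (("cuboidal", cuboidal), ("ellipsoidal", ellipsoidal), ("spherical", spherical)):
        print(f"{name:12s}: {fn(a, b, c):.2f} mL")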
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durlin, R.R.; Schaffstall, W.P.
1996-03-01
Volume 2 contains: (1) discharge records for 94 continuous-record streamflow-gaging stations and 39 partial-record stations; (2) elevation and contents records for 12 lakes and reservoirs; (3) water-quality records for 17 gaging stations and 125 partial-record and project stations; and (4) water-level records for 25 observation wells. Additional water data collected at various sites not involved in the systematic data-collection program are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agajanian, J.A.; Rockwell, G.L.; Hayes, P.D.
1996-04-01
Volume 1 contains: (1) discharge records for 141 streamflow-gaging stations and 6 crest-stage partial-record streamflow stations; (2) stage and contents records for 20 lakes and reservoirs; (3) water-quality records for 21 streamflow-gaging stations and 3 partial-record stations; and (4) precipitation records for 1 station.
NASA Astrophysics Data System (ADS)
Ferrucci, M.; Muralikrishnan, B.; Sawyer, D.; Phillips, S.; Petrov, P.; Yakovlev, Y.; Astrelin, A.; Milligan, S.; Palmateer, J.
2014-10-01
Large volume laser scanners are increasingly being used for a variety of dimensional metrology applications. Methods to evaluate the performance of these scanners are still under development and there are currently no documentary standards available. This paper describes the results of extensive ranging and volumetric performance tests conducted on a large volume laser scanner. The results demonstrated small but clear systematic errors that are explained in the context of a geometric error model for the instrument. The instrument was subsequently returned to the manufacturer for factory calibration. The ranging and volumetric tests were performed again and the results are compared against those obtained prior to the factory calibration.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-26
... Testing of Certain High Production Volume Chemicals; Second Group of Chemicals; Technical Correction... production volume (HPV) chemical substances to obtain screening level data for health and environmental effects and chemical fate. This document is being issued to correct a typographical error concerning the...
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model is determined from the minimum root-mean-square error of leave-one-out cross-validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters, a regression coefficient of 0.996-0.998, a root-mean-square error of prediction of 0.005-0.010 and a relative error of prediction of 1.54-3.82% for the calibration set, show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations were analyzed; their gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
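As a concrete illustration of the component-selection step, a minimal scikit-learn sketch is given below; the data arrays, the component range, and the use of scikit-learn itself are placeholders, not the authors' actual software or calibration data.

```python
# Choose the number of PLS components by minimizing leave-one-out CV RMSE.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def select_n_components(X, y, max_components=10):
    rmsecv = []
    for n in range(1, max_components + 1):
        y_cv = cross_val_predict(PLSRegression(n_components=n), X, y, cv=LeaveOneOut())
        rmsecv.append(np.sqrt(np.mean((y - y_cv.ravel()) ** 2)))
    return int(np.argmin(rmsecv)) + 1, rmsecv

# Synthetic stand-in: 30 calibration blends, 500 baseline-corrected signal variables.
X = np.random.rand(30, 500)
y = 0.70 + 0.15 * np.random.rand(30)   # gasoline fraction
best_n, curve = select_n_components(X, y)
```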
Lower-tropospheric CO2 from near-infrared ACOS-GOSAT observations
Kulawik, Susan S.; O'Dell, Chris; Payne, Vivienne H.; ...
2017-04-27
We present two new products from near-infrared Greenhouse Gases Observing Satellite (GOSAT) observations: lowermost tropospheric (LMT, from 0 to 2.5 km) and upper tropospheric–stratospheric (U, above 2.5 km) carbon dioxide partial column mixing ratios. We compare these new products to aircraft profiles and remote surface flask measurements and find that the seasonal and year-to-year variations in the new partial column mixing ratios significantly improve upon the Atmospheric CO2 Observations from Space (ACOS) and GOSAT (ACOS-GOSAT) initial guess and/or a priori, with distinct patterns in the LMT and U seasonal cycles that match validation data. For land monthly averages, we find errors of 1.9, 0.7, and 0.8 ppm for retrieved GOSAT LMT, U, and XCO2; for ocean monthly averages, we find errors of 0.7, 0.5, and 0.5 ppm for retrieved GOSAT LMT, U, and XCO2. In the southern hemispheric biomass burning season, the new partial columns show similar patterns to MODIS fire maps and MOPITT multispectral CO for both vertical levels, despite a flat ACOS-GOSAT prior, and a CO–CO2 emission factor comparable to published values. The difference of LMT and U, useful for evaluation of model transport error, has also been validated with a monthly average error of 0.8 (1.4) ppm for ocean (land). LMT is more locally influenced than U, meaning that local fluxes can now be better separated from CO2 transported from far away.
NASA Astrophysics Data System (ADS)
Penn, C. A.; Clow, D. W.; Sexstone, G. A.
2017-12-01
Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a spatially-distributed physics-based snow model was used to assess possible effects of land cover change on snowpack properties. Trends in forecasted error are variable while baseline model results show a consistent under-prediction in the recent decade, highlighting possible compounding effects of climate and land cover changes.
An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kühnlein, Christian, E-mail: christian.kuehnlein@ecmwf.int; Smolarkiewicz, Piotr K., E-mail: piotr.smolarkiewicz@ecmwf.int
An advancement of the unstructured-mesh finite-volume MPDATA (Multidimensional Positive Definite Advection Transport Algorithm) is presented that formulates the error-compensative pseudo-velocity of the scheme to rely only on face-normal advective fluxes to the dual cells, in contrast to the full vector employed in previous implementations. This is essentially achieved by expressing the temporal truncation error underlying the pseudo-velocity in a form consistent with the flux-divergence of the governing conservation law. The development is especially important for integrating fluid dynamics equations on non-rectilinear meshes whenever face-normal advective mass fluxes are employed for transport compatible with mass continuity—the latter being essential for flux-form schemes. In particular, the proposed formulation enables large-time-step semi-implicit finite-volume integration of the compressible Euler equations using MPDATA on arbitrary hybrid computational meshes. Furthermore, it facilitates multiple error-compensative iterations of the finite-volume MPDATA and improved overall accuracy. The advancement combines straightforwardly with earlier developments, such as the nonoscillatory option, the infinite-gauge variant, and moving curvilinear meshes. A comprehensive description of the scheme is provided for a hybrid horizontally-unstructured vertically-structured computational mesh for efficient global atmospheric flow modelling. The proposed finite-volume MPDATA is verified using selected 3D global atmospheric benchmark simulations, representative of hydrostatic and non-hydrostatic flow regimes. Besides the added capabilities, the scheme retains fully the efficacy of established finite-volume MPDATA formulations.
ERIC Educational Resources Information Center
Inzlicht, Michael; Al-Khindi, Timour
2012-01-01
Performance monitoring in the anterior cingulate cortex (ACC) has largely been viewed as a cognitive, computational process devoid of emotion. A growing body of research, however, suggests that performance is moderated by motivational engagement and that a signal generated by the ACC, the error-related negativity (ERN), may partially reflect a…
Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.
ERIC Educational Resources Information Center
Goodrich, Gregory L.; And Others
1979-01-01
A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)
Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc
2009-12-01
Cardiac CT achieves its high temporal resolution by lowering the scan range from 2pi to pi plus fan angle (partial scan). This, however, introduces CT-value variations, depending on the angular position of the pi range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present the new algorithm partial scan artifact reduction (PSAR) that corrects a dynamic phase-correlated scan without a priori information. In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2pi] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan pn(AF) by projectionwise averaging a set of neighboring partial scans pn(P) from the same perfusion examination (typically N approximately 30 phase-correlated partial scans distributed over 20 s and n = 1, ..., N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans pn(V) from the artificial full scan pn(AF). A standard reconstruction yields the corresponding images fn(P), fn(AF), and fn(V). Subtracting the virtual partial scan image fn(V) from the artificial full scan image fn(AF) yields an artifact image that can be used to correct the original partial scan image: fn(C) = fn(P) - fn(V) + fn(AF), where fn(C) is the corrected image. The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans have been performed to provide theoretical reference values. The improvement in the root mean square errors between the full and the partial scans with respect to the errors between the full and the corrected scans is up to 54% for the simulations and 90% for the measurements. The phase-correlated data now appear accurate enough for a quantitative analysis of cardiac perfusion.
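To make the correction arithmetic concrete, here is a schematic sketch of the PSAR step; the reconstruction operator is left as a placeholder, and, as a simplification of the paper (which averages a neighborhood of scans around each time point), the artificial full scan below is built from all available partial scans.

```python
# Schematic PSAR correction: f_C = f_P - f_V + f_AF for each partial scan.
import numpy as np
from collections import defaultdict

def build_artificial_full(partial_scans):
    """partial_scans: list of dicts {projection angle: projection array}.
    Projection-wise average over all scans covering each angle."""
    acc, counts = defaultdict(lambda: 0.0), defaultdict(int)
    for scan in partial_scans:
        for angle, proj in scan.items():
            acc[angle] = acc[angle] + proj
            counts[angle] += 1
    return {angle: acc[angle] / counts[angle] for angle in acc}

def psar_correct(partial_scans, reconstruct):
    """`reconstruct` maps {angle: projection} to an image and is a placeholder
    for the real CT reconstruction."""
    full = build_artificial_full(partial_scans)           # p^AF
    f_af = reconstruct(full)
    corrected = []
    for scan in partial_scans:
        virtual = {angle: full[angle] for angle in scan}  # same angular range as p^P
        corrected.append(reconstruct(scan) - reconstruct(virtual) + f_af)
    return corrected
```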
Predicted Errors In Children's Early Sentence Comprehension
Gertner, Yael; Fisher, Cynthia
2012-01-01
Children use syntax to interpret sentences and learn verbs; this is syntactic bootstrapping. The structure-mapping account of early syntactic bootstrapping proposes that a partial representation of sentence structure, the set of nouns occurring with the verb, guides initial interpretation and provides an abstract format for new learning. This account predicts early successes, but also telltale errors: Toddlers should be unable to tell transitive sentences from other sentences containing two nouns. In testing this prediction, we capitalized on evidence that 21-month-olds use what they have learned about noun order in English sentences to understand new transitive verbs. In two experiments, 21-month-olds applied this noun-order knowledge to two-noun intransitive sentences, mistakenly assigning different interpretations to “The boy and the girl are gorping!” and “The girl and the boy are gorping!”. This suggests that toddlers exploit partial representations of sentence structure to guide sentence interpretation; these sparse representations are useful, but error-prone. PMID:22525312
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Torigian, Drew A; Lopez, Rosa Fernandez; Alapati, Sridevi; Bodapati, Geetha; Hofheinz, Frank; van den Hoff, Joerg; Saboury, Babak; Alavi, Abass
2011-01-01
Our aim was to assess the feasibility and performance of a novel semi-automated image analysis software package called ROVER to quantify metabolically active volume (MAV), maximum standardized uptake value (SUV(max)), 3D partial volume corrected mean SUV (cSUV(mean)), and 3D partial volume corrected mean MVP (cMVP(mean)) of spinal bone marrow metastases on fluorine-18 fluorodeoxyglucose-positron emission tomography/computerized tomography ((18)F-FDG-PET/CT). We retrospectively studied 16 subjects with 31 spinal metastases on FDG-PET/CT and MRI. Manual and ROVER determinations of lesional MAV and SUV(max), and repeated ROVER measurements of MAV, SUV(max), cSUV(mean) and cMVP(mean) were made. Bland-Altman and correlation analyses were performed to assess reproducibility and agreement. Our results showed that analyses of repeated ROVER measurements revealed MAV mean difference (D)=-0.03±0.53cc (95% CI(-0.22, 0.16)), lower limit of agreement (LLOA)=-1.07cc, and upper limit of agreement (ULOA)=1.01cc; SUV(max) D=0.00±0.00 with LOAs=0.00; cSUV(mean) D=-0.01±0.39 (95% CI(-0.15, 0.13)), LLOA=-0.76, and ULOA=0.75; cMVP(mean) D=-0.52±4.78cc (95% CI(-2.23, 1.23)), LLOA=-9.89cc, and ULOA=8.86cc. Comparisons between ROVER and manual measurements revealed volume D=-0.39±1.37cc (95% CI (-0.89, 0.11)), LLOA=-3.08cc, and ULOA=2.30cc; SUV(max) D=0.00±0.00 with LOAs=0.00. Mean percent increase in lesional SUV(mean) and MVP(mean) following partial volume correction using ROVER was 84.25±36.00% and 84.45±35.94%, respectively. In conclusion, it is feasible to estimate MAV, SUV(max), cSUV(mean), and cMVP(mean) of spinal bone marrow metastases from (18)F-FDG-PET/CT quickly and easily with good reproducibility via ROVER software. Partial volume correction is imperative, as uncorrected SUV(mean) and MVP(mean) are significantly underestimated, even for large lesions. This novel approach has great potential for practical, accurate, and precise combined structural-functional PET quantification of disease before and after therapeutic intervention.
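The limits of agreement quoted above are standard Bland-Altman quantities; a minimal sketch of their computation is shown below (the paired measurement arrays are placeholders for ROVER vs. manual, or repeated ROVER, values).

```python
# Bland-Altman mean difference and 95% limits of agreement.
import numpy as np

def bland_altman(x, y):
    d = np.asarray(x, float) - np.asarray(y, float)
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd  # D, LLOA, ULOA
```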
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-03
... section 110(c)(1)(B), to promulgate a FIP within 2 years, and, as part of this rulemaking, EPA is... must promulgate a FIP at any time within 2 years after the disapproval, unless the state corrects the... any time within 2 years after the [finding] * * * unless the State corrects the deficiency, and [EPA...
Calibration Of Partial-Pressure-Of-Oxygen Sensors
NASA Technical Reports Server (NTRS)
Yount, David W.; Heronimus, Kevin
1995-01-01
A report has been released analyzing, and discussing improvements in, the procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. The sensors exhibit fast drift, which results in a calibration period too short to be suitable for Spacelab. By assessing the complete process of determining the total available drift range, the calibration procedure was modified to eliminate errors and still satisfy the requirements without compromising the integrity of the system.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-30
...) for Texas. This action is based on EPA's determination that Texas's PSD program is flawed because the...://www.regulations.gov Web site is an ``anonymous access'' system, which means EPA will not know your... open from 8:30 a.m. to 4:30 p.m., Monday through Friday, excluding legal holidays. The telephone number...
Physiological Effects of Training.
1985-06-25
applies only to short-term programs, the resting heart rate is normally reduced as a result of aerobic training in all age groups. Studies with ... in order to maintain cardiac output in conjunction with a decreased heart rate, stroke volume has to increase. Stroke volume increases in the ... volume is partially due to increased end-diastolic volume. Thus, the pumping ability of the heart, i.e., increased stroke volume, is improved with
Voluminator 2.0 - Speeding up the Approximation of the Volume of Defective 3d Building Models
NASA Astrophysics Data System (ADS)
Sindram, M.; Machl, T.; Steuer, H.; Pültz, M.; Kolbe, T. H.
2016-06-01
Semantic 3D city models are increasingly used as a data source in planning and analysis processes of cities. They represent a virtual copy of reality and are a common information base and source of information for examining urban questions. A significant advantage of virtual city models is that important indicators such as the volume of buildings, topological relationships between objects and other geometric as well as thematic information can be derived. Knowledge of the exact building volume is an essential basis for estimating the building energy demand. In order to determine the volume of buildings with conventional algorithms and tools, the buildings may not contain any topological or geometrical errors. Reality, however, shows that city models very often contain errors such as missing surfaces, duplicated faces and misclosures. To overcome these errors, Steuer et al. (2015) presented a robust method for approximating the volume of building models. For this purpose, a bounding box of the building is divided into a regular grid of voxels and it is determined which voxels are inside the building. The regular arrangement of the voxels leads to a high number of topological tests and prevents the application of this method at very high resolutions. In this paper we present an extension of the algorithm using an octree approach, limiting the subdivision of space to regions around surfaces of the building models and to regions where, in the case of defective models, the topological tests are inconclusive. We show that the computation time can be significantly reduced, while preserving the robustness against geometrical and topological errors.
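The octree idea can be sketched in a few lines; the code below is a generic illustration only, testing cells against an analytic sphere instead of a (possibly defective) building model, and the half-volume rule for undecided leaf cells is our simplification.

```python
# Octree volume approximation: subdivide only mixed/inconclusive cells.
def classify_against_sphere(lo, hi, center=(0.0, 0.0, 0.0), r=1.0):
    corners = [(x, y, z) for x in (lo[0], hi[0]) for y in (lo[1], hi[1]) for z in (lo[2], hi[2])]
    if all(sum((c - o) ** 2 for c, o in zip(p, center)) <= r * r for p in corners):
        return "inside"
    closest = [min(max(o, l), h) for o, l, h in zip(center, lo, hi)]
    if sum((c - o) ** 2 for c, o in zip(closest, center)) > r * r:
        return "outside"
    return "mixed"

def octree_volume(lo, hi, classify, depth=6):
    label = classify(lo, hi)
    cell = (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])
    if label == "inside":
        return cell
    if label == "outside":
        return 0.0
    if depth == 0:
        return 0.5 * cell  # undecided leaf: count half its volume
    mid = [(l + h) / 2.0 for l, h in zip(lo, hi)]
    total = 0.0
    for ix in range(2):
        for iy in range(2):
            for iz in range(2):
                child_lo = [lo[k] if i == 0 else mid[k] for k, i in enumerate((ix, iy, iz))]
                child_hi = [mid[k] if i == 0 else hi[k] for k, i in enumerate((ix, iy, iz))]
                total += octree_volume(child_lo, child_hi, classify, depth - 1)
    return total

# Unit sphere in its bounding cube: estimate approaches 4/3*pi (about 4.19).
print(octree_volume((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0), classify_against_sphere, depth=6))
```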
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to DCS data resulted in αD_B variations, the mean values of errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows the potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
A Bibliography of Recreational Mathematics, Volume 3.
ERIC Educational Resources Information Center
Schaaf, William L.
This book is a partially annotated bibliography of books, articles and periodicals concerned with mathematical games, puzzles, tricks, amusements, and paradoxes. Because the literature in recreational mathematics has proliferated to amazing proportions since Volume 2 of this series (ED 040 874), Volume 3 is more than just an updating of the…
B_K with two flavors of dynamical overlap fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aoki, S.; Riken BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973; Fukaya, H.
2008-05-01
We present a two-flavor QCD calculation of B_K on a 16^3 x 32 lattice at a ≈ 0.12 fm (or equivalently a^-1 = 1.67 GeV). Both valence and sea quarks are described by the overlap fermion formulation. The matching factor is calculated nonperturbatively with the so-called RI/MOM scheme. We find that the lattice data are well described by the next-to-leading order (NLO) partially quenched chiral perturbation theory (PQChPT) up to around a half of the strange quark mass (m_s^phys/2). The data at quark masses heavier than m_s^phys/2 are fitted including a part of next-to-next-to-leading order terms. We obtain B_K^MS(2 GeV) = 0.537(4)(40), where the first error is statistical and the second is an estimate of systematic uncertainties from finite volume, fixing topology, the matching factor, and the scale setting.
Partial Molar Volumes of 15-Crown-5 Ether in Mixtures of N,N-Dimethylformamide with Water.
Tyczyńska, Magdalena; Jóźwiak, Małgorzata
2014-01-01
The density of 15-crown-5 ether (15C5) solutions in mixtures of N,N-dimethylformamide (DMF) and water (H2O) was measured within the temperature range 293.15-308.15 K using an Anton Paar oscillatory U-tube densimeter. The results were used to calculate the apparent molar volumes (V_Φ) of 15C5 in the DMF + H2O mixtures over the whole concentration range. Using the apparent molar volumes and the Redlich-Mayer equation, the standard partial molar volumes of 15-crown-5 at infinite dilution (V_Φ°) were calculated. The limiting apparent molar expansibilities (α) were also calculated. The data are discussed from the point of view of the effect of concentration changes on interactions in solution.
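For context, the working equations usually applied to such density data are sketched below; the molar mass of 15C5 (about 220.27 g/mol) is a known constant, but the Redlich-Mayer form shown and the least-squares fitting route are a generic reconstruction, not the paper's exact procedure or coefficients.

```python
# Apparent molar volume from densities and a Redlich-Mayer-type extrapolation.
import numpy as np

def apparent_molar_volume(m, rho, rho0, M=220.27):
    """V_phi = M/rho - 1000*(rho - rho0)/(m*rho*rho0), in cm^3/mol,
    with m in mol/kg and densities in g/cm^3."""
    m, rho, rho0 = (np.asarray(v, float) for v in (m, rho, rho0))
    return M / rho - 1000.0 * (rho - rho0) / (m * rho * rho0)

def redlich_mayer_fit(m, v_phi):
    """Fit V_phi = V0 + Sv*sqrt(m) + bv*m; V0 is the standard partial molar
    volume at infinite dilution."""
    m = np.asarray(m, float)
    A = np.column_stack([np.ones_like(m), np.sqrt(m), m])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(v_phi, float), rcond=None)
    return coeffs  # (V0, Sv, bv)
```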
On a relationship between molecular polarizability and partial molar volume in water.
Ratkova, Ekaterina L; Fedorov, Maxim V
2011-12-28
We reveal a universal relationship between molecular polarizability (a single-molecule property) and the partial molar volume in water, which is an ensemble property characterizing solute-solvent systems. Since both of these quantities are of key importance for describing the solvation behavior of dissolved molecular species in aqueous solutions, the obtained relationship should have a high impact in chemistry, the pharmaceutical and life sciences, and the environmental sciences. We demonstrated that the obtained relationship between the partial molar volume in water and the molecular polarizability has, in general, a non-homogeneous character. We performed a detailed analysis of this relationship on a set of ~200 organic molecules from various chemical classes and revealed its fine, well-organized structure. We found that this structure strongly depends on the chemical nature of the solutes and can be rationalized in terms of specific solute-solvent interactions. The efficiency and universality of the proposed approach were demonstrated on an external test set containing several dozen polyfunctional and druglike molecules.
Determination of free CO2 in emergent groundwaters using a commercial beverage carbonation meter
NASA Astrophysics Data System (ADS)
Vesper, Dorothy J.; Edenborn, Harry M.
2012-05-01
Dissolved CO2 in groundwater is frequently supersaturated relative to its equilibrium with atmospheric partial pressure and will degas when it is conveyed to the surface. Estimates of dissolved CO2 concentrations can vary widely between different hydrochemical facies because they have different sources of error (e.g., rapid degassing, low alkalinity, non-carbonate alkalinity). We sampled 60 natural spring and mine waters using a beverage industry carbonation meter, which measures dissolved CO2 based on temperature and pressure changes as the sample volume is expanded. Using a modified field protocol, the meter was found to be highly accurate in the range 0.2-35 mM CO2. The meter provided rapid, accurate and precise measurements of dissolved CO2 in natural waters for a range of hydrochemical facies. Dissolved CO2 concentrations measured in the field with the carbonation meter were similar to CO2 determined using the pH-alkalinity approach, but provided immediate results and avoided errors from alkalinity and pH determination. The portability and ease of use of the carbonation meter in the field made it well-suited to sampling in difficult terrain. The carbonation meter has proven useful in the study of aquatic systems where CO2 degassing drives geochemical changes that result in surficial mineral precipitation and deposition, such as tufa, travertine and mine drainage deposits.
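For comparison, the conventional pH-alkalinity estimate that the meter readings were checked against can be sketched as follows, assuming alkalinity is dominated by bicarbonate and using a freshwater pK1 of roughly 6.35 at 25 °C (no temperature or ionic-strength corrections).

```python
# Dissolved CO2 from pH and alkalinity via the first carbonic-acid equilibrium.
def co2_from_ph_alkalinity(ph, alkalinity_mol_per_L, pK1=6.35):
    """[CO2*] ~ [HCO3-]*[H+]/K1, returned in mol/L."""
    h = 10.0 ** (-ph)
    k1 = 10.0 ** (-pK1)
    return alkalinity_mol_per_L * h / k1

# Example: pH 6.8 and alkalinity 2.5 meq/L give roughly 0.9 mmol/L CO2.
print(co2_from_ph_alkalinity(6.8, 2.5e-3))
```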
Jang, Hojin; Plis, Sergey M.; Calhoun, Vince D.; Lee, Jong-Hwan
2016-01-01
Feedforward deep neural networks (DNN), artificial neural networks with multiple hidden layers, have recently demonstrated a record-breaking performance in multiple areas of applications in computer vision and speech processing. Following the success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from a minimum validation error rate of fMRI volume classification. Minimum error rates (mean ± standard deviation; %) of 6.9 (± 3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4 ± 4.6) and the two-layer network (7.4 ± 4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and prediction of disease severity and recovery in (pre-) clinical settings using fMRI volumes without requiring an estimation of activation patterns or ad hoc statistical evaluation. PMID:27079534
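As a rough illustration of the kind of network involved, here is a minimal PyTorch sketch of a three-hidden-layer classifier on flattened volumes; the L1 penalty is only one possible way to impose weight sparsity, and the authors' RBM-based pretraining, layer sizes, and actual sparsity-control mechanism are not reproduced.

```python
# Illustrative three-hidden-layer DNN with an L1 weight-sparsity penalty.
import torch
import torch.nn as nn

n_voxels, n_tasks = 50000, 4          # placeholder dimensions
model = nn.Sequential(
    nn.Linear(n_voxels, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, n_tasks),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
l1_lambda = 1e-5                      # sparsity level (the study tuned sparsity per layer)

def train_step(x, y):
    optimizer.zero_grad()
    l1 = sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)  # weights only
    loss = criterion(model(x), y) + l1_lambda * l1
    loss.backward()
    optimizer.step()
    return loss.item()

# x: (batch, n_voxels) flattened fMRI volumes; y: task labels in {0..3}.
x, y = torch.randn(8, n_voxels), torch.randint(0, n_tasks, (8,))
print(train_step(x, y))
```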
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
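A schematic version of such a criterion is sketched below; the geometric-convergence assumption, the 0.95 stagnation threshold, and the user-supplied estimate of the smallest singular value are placeholders rather than the paper's construction.

```python
# Error estimate from successive solution changes, with a residual-based fallback.
import numpy as np

def estimate_error(dx_prev, dx_curr, residual_norm, sigma_min_est):
    rho = dx_curr / dx_prev if dx_prev > 0 else 1.0
    if 0.0 < rho < 0.95:                    # changes look convergent, not stagnating
        return dx_curr * rho / (1.0 - rho)  # remaining geometric tail
    return residual_norm / sigma_min_est    # fallback bound: ||e|| <= ||r|| / sigma_min

def solve_with_stopping(apply_iteration, x0, residual, sigma_min_est, tol, max_iter=1000):
    x, dx_prev = x0.copy(), np.inf
    for _ in range(max_iter):
        x_new = apply_iteration(x)
        dx = np.linalg.norm(x_new - x)
        err = estimate_error(dx_prev, dx, np.linalg.norm(residual(x_new)), sigma_min_est)
        x, dx_prev = x_new, dx
        if err < tol:
            break
    return x

# Tiny demo: Jacobi iteration on a 2x2 symmetric positive definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
jacobi = lambda x: x + (b - A @ x) / np.diag(A)
x = solve_with_stopping(jacobi, np.zeros(2), lambda x: b - A @ x, sigma_min_est=2.0, tol=1e-8)
```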
Xu, Z N
2014-12-01
In this study, an error analysis is performed on real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, maintaining the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
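To make the circle-fitting branch concrete, a minimal sketch is given below: an algebraic least-squares circle fit to the profile points followed by theta = arccos(-y0/R), taking the substrate as the line y = 0 with y increasing upward. The ellipse-fitting and ADSA-P branches, and the paper's selection rule, are not reproduced here.

```python
# Static contact angle from a least-squares (Kasa) circle fit of the drop profile.
import numpy as np

def fit_circle(x, y):
    """Fit x^2 + y^2 + a*x + b*y + c = 0; return center (x0, y0) and radius R."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x0, y0 = -a / 2.0, -b / 2.0
    return x0, y0, np.sqrt(x0 ** 2 + y0 ** 2 - c)

def contact_angle_deg(x, y):
    _, y0, R = fit_circle(np.asarray(x, float), np.asarray(y, float))
    return np.degrees(np.arccos(np.clip(-y0 / R, -1.0, 1.0)))

# Synthetic 60-degree spherical-cap profile (R = 1, circle center below the baseline).
t = np.linspace(np.radians(30), np.radians(150), 50)
xs, ys = np.cos(t), np.sin(t) - np.cos(np.radians(60))
print(contact_angle_deg(xs, ys))   # close to 60
```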
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
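A minimal sketch of this type of model is shown below using statsmodels; the ARIMA order, forecast horizon, and regressor layout are placeholders, since the identified model orders are not given in the abstract.

```python
# ARIMA with exogenous regressors, scored by mean absolute percentage error (MAPE).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_and_score(y, exog, horizon=6, order=(1, 1, 1)):
    y_train, y_test = y[:-horizon], y[-horizon:]
    x_train, x_test = exog[:-horizon], exog[-horizon:]
    res = ARIMA(y_train, exog=x_train, order=order).fit()
    forecast = res.forecast(steps=horizon, exog=x_test)
    mape = np.mean(np.abs((np.asarray(y_test) - np.asarray(forecast)) / np.asarray(y_test))) * 100
    return forecast, mape

# y: monthly ED revenue or visitor volume; exog: columns such as max/min temperature,
# humidity, rainfall, and stock-index fluctuation for the same months.
```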
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.
Cubic-Foot Volume Tables for Shortleaf Pine in the Virginia-Carolina Piedmont
Glenn P. Haney; Paul P. Kormanik
1962-01-01
Available volume tables for shortleaf pine are based on merchantable height and do not show volumes to the small top diameter limits now used in many areas. Volume tables based on total height are also often preferred because they eliminate the error associated with ocular estimates of merchantable height. The table presented here for natural shortleaf pine is based on...
Accurately determining log and bark volumes of saw logs using high-resolution laser scan data
R. Edward Thomas; Neal D. Bennett
2014-01-01
Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...
System for detecting operating errors in a variable valve timing engine using pressure sensors
Wiles, Matthew A.; Marriot, Craig D
2013-07-02
A method and control module includes a pressure sensor data comparison module that compares measured pressure volume signal segments to ideal pressure volume segments. A valve actuation hardware remedy module performs a hardware remedy in response to comparing the measured pressure volume signal segments to the ideal pressure volume segments when a valve actuation hardware failure is detected.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(d^n-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
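The qualitative gap between coherent and Pauli-modeled noise can be seen with a two-line calculation; this is illustrative only and is not the repetition-code analysis of the paper.

```python
# Repeating a coherent X rotation by eps N times flips with probability
# sin^2(N*eps/2) (rotation angles add), while the Pauli-twirled version of the
# same channel gives a net flip probability of (1 - cos(eps)^N) / 2.
import numpy as np

eps, N = 0.01, 200
coherent_flip = np.sin(N * eps / 2.0) ** 2
pauli_flip = 0.5 * (1.0 - np.cos(eps) ** N)
print(coherent_flip, pauli_flip)   # roughly 0.71 vs 0.005
```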
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability with deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. For the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
Diversity Order Analysis of Dual-Hop Relaying with Partial Relay Selection
NASA Astrophysics Data System (ADS)
Bao, Vo Nguyen Quoc; Kong, Hyung Yun
In this paper, we study the performance of dual hop relaying in which the best relay selected by partial relay selection will help the source-destination link to overcome the channel impairment. Specifically, closed-form expressions for outage probability, symbol error probability and achievable diversity gain are derived using the statistical characteristic of the signal-to-noise ratio. Numerical investigation shows that the system achieves diversity of two regardless of relay number and also confirms the correctness of the analytical results. Furthermore, the performance loss due to partial relay selection is investigated.
Nkenke, Emeka; Lehner, Bernhard; Kramer, Manuel; Haeusler, Gerd; Benz, Stefanie; Schuster, Maria; Neukam, Friedrich W; Vairaktaris, Eleftherios G; Wurm, Jochen
2006-03-01
To assess measurement errors of a novel technique for the three-dimensional determination of the degree of facial symmetry in patients suffering from unilateral cleft lip and palate malformations. Technical report, reliability study. Cleft Lip and Palate Center of the University of Erlangen-Nuremberg, Erlangen, Germany. The three-dimensional facial surface data of five 10-year-old unilateral cleft lip and palate patients were subjected to the analysis. Distances, angles, surface areas, and volumes were assessed twice. Calculations were made for method error, intraclass correlation coefficient, and repeatability of the measurements of distances, angles, surface areas, and volumes. The method errors were less than 1 mm for distances and less than 1.5 degrees for angles. The intraclass correlation coefficients showed values greater than .90 for all parameters. The repeatability values were comparable for cleft and noncleft sides. The small method errors, high intraclass correlation coefficients, and comparable repeatability values for cleft and noncleft sides reveal that the new technique is appropriate for clinical use.
Petrungaro, Paul S; Gonzalez, Santiago; Villegas, Carlos
2018-02-01
As dental implants become more popular for the treatment of partial and total edentulism and treatment of "terminal dentitions," techniques for the management of the atrophic posterior maxillae continue to evolve. Although dental implants carry a high success rate long term, attention must be given to the growing numbers of revisions or retreatment of cases that have had previous dental implant treatment and/or advanced bone replacement procedures that, due to either poor patient compliance, iatrogenic error, or poor quality of the pre-existing alveolar and/or soft tissues, have led to large osseous defects, possibly with deficient soft-tissue volume. In the posterior maxillae, where the poorest quality of bone in the oral cavity exists, achieving regeneration of the alveolar bone and adequate volume of soft tissue remains a complex procedure. This is made even more difficult when dealing with loss of dental implants previously placed, aggressive bone reduction required in various implant procedures, and/or residual sinus infections precluding proper closure of the oral wound margins. The purpose of this article is to outline a technique for the total closure of large oro-antral communications, with underlying osseous defects greater than 15 mm in width and 30 mm in length, for which multiple previous attempts at closure had failed, to achieve not only the reconstruction of adequate volume and quality of soft tissues in the area of the previous fistula, but also total regeneration of the osseous structures in the area of the large void.
Mirjankar, Nikhil S; Fraga, Carlos G; Carman, April J; Moran, James J
2016-02-02
Chemical attribution signatures (CAS) for chemical threat agents (CTAs), such as cyanides, are being investigated to provide an evidentiary link between CTAs and specific sources to support criminal investigations and prosecutions. Herein, stocks of KCN and NaCN were analyzed for trace anions by high performance ion chromatography (HPIC), carbon stable isotope ratio (δ(13)C) by isotope ratio mass spectrometry (IRMS), and trace elements by inductively coupled plasma optical emission spectroscopy (ICP-OES). The collected analytical data were evaluated using hierarchical cluster analysis (HCA), Fisher-ratio (F-ratio), interval partial least-squares (iPLS), genetic algorithm-based partial least-squares (GAPLS), partial least-squares discriminant analysis (PLSDA), K nearest neighbors (KNN), and support vector machines discriminant analysis (SVMDA). HCA of anion impurity profiles from multiple cyanide stocks from six reported countries of origin resulted in cyanide samples clustering into three groups, independent of the associated alkali metal (K or Na). The three groups were independently corroborated by HCA of cyanide elemental profiles and corresponded to countries each having one known solid cyanide factory: Czech Republic, Germany, and United States. Carbon stable isotope measurements resulted in two clusters: Germany and United States (the single Czech stock grouped with United States stocks). Classification errors for two validation studies using anion impurity profiles collected over five years on different instruments were as low as zero for KNN and SVMDA, demonstrating the excellent reliability associated with using anion impurities for matching a cyanide sample to its factory using our current cyanide stocks. Variable selection methods reduced errors for those classification methods having errors greater than zero; iPLS-forward selection and F-ratio typically provided the lowest errors. Finally, using anion profiles to classify cyanides to a specific stock or stock group for a subset of United States stocks resulted in cross-validation errors ranging from 0 to 5.3%.
On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation
NASA Astrophysics Data System (ADS)
Ouwerkerk, Ronald; Bottomley, Paul A.
2001-02-01
Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
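For reference, the single-exponential correction under discussion reduces to a simple closed form; the sketch below computes it for a pulse-acquire steady state, with the example T1 only a rough illustrative value for PCr and no chemical-exchange terms included.

```python
# Single-exponential partial-saturation factor and correction.
import numpy as np

def saturation_factor(tr, t1, flip_deg):
    """Steady-state factor (1 - E) / (1 - E*cos(alpha)), with E = exp(-TR/T1)."""
    E = np.exp(-tr / t1)
    a = np.radians(flip_deg)
    return (1.0 - E) / (1.0 - E * np.cos(a))

def correct_signal(measured, tr, t1, flip_deg):
    """Divide the measured steady-state amplitude by the saturation factor."""
    return measured / saturation_factor(tr, t1, flip_deg)

# Example: T1 of about 4 s observed at TR = 2 s with a 60-degree flip angle.
print(saturation_factor(2.0, 4.0, 60.0))   # about 0.56
```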
PROCEEDINGS: SYMPOSIUM ON FLUE GAS DESULFURIZATION HELD AT LAS VEGAS, NEVADA, MARCH 1979; VOLUME II
The publication, in two volumes, contains the text of all papers presented at EPA's fifth flue gas desulfurization (FGD) symposium, March 5-8, 1979, at Las Vegas, Nevada. A partial listing of papers in Volume 2 includes the following: Basin Electric's involvement with dry flue ga...
Introduction to total- and partial-pressure measurements in vacuum systems
NASA Technical Reports Server (NTRS)
Outlaw, R. A.; Kern, F. A.
1989-01-01
An introduction to the fundamentals of total and partial pressure measurement in the vacuum regime (760 to 10^-16 Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.
Detection of melting by X-ray imaging at high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Li; Weidner, Donald J.
2014-06-15
The occurrence of partial melting at elevated pressure and temperature is documented in real time through measurement of the volume strain induced by a fixed temperature change. Here we present the methodology for measuring volume strains to one part in 10^-4 for mm^3-sized samples in situ as a function of time during a step in temperature. By calibrating the system for sample thermal expansion at temperatures lower than the solidus, the onset of melting can be detected when the melting volume increase is of comparable size to the thermal-expansion-induced volume change. We illustrate this technique with a peridotite sample at 1.5 GPa during partial melting. The Re capsule is imaged with a CCD camera at 20 frames/s. Temperature steps of 100 K induce volume strains that triple with melting. The analysis relies on image comparison for strain determination, and the thermal inertia of the sample is clearly seen in the time history of the volume strain. Coupled with a thermodynamic model of the melting, we infer that melting is identified at about 2 vol.% melt.
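The detection criterion amounts to comparing the measured volume strain of a temperature step against the calibrated thermal-expansion baseline; the trivial sketch below illustrates this, with the coefficient and threshold names being placeholders.

```python
# Excess volume strain beyond the sub-solidus thermal-expansion baseline.
def excess_volume_strain(measured_dV_over_V, alpha_v, dT):
    """alpha_v: volumetric thermal expansion coefficient calibrated below the solidus."""
    return measured_dV_over_V - alpha_v * dT

def melting_suspected(measured_dV_over_V, alpha_v, dT, threshold=1e-4):
    # threshold of order the strain resolution quoted in the abstract
    return excess_volume_strain(measured_dV_over_V, alpha_v, dT) > threshold
```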
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL) and a reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder, attributed to the non-null interferometer by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
Optimizing pattern recognition-based control for partial-hand prosthesis application.
Earley, Eric J; Adewuyi, Adenike A; Hargrove, Levi J
2014-01-01
Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) window length and number of hand-grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself (p<0.001); and that training in only variable wrist positions, with only dynamic wrist movements, or with both variable wrist positions and movements results in lower error rates than training in only the neutral wrist position (p<0.001). Finally, our results show that both an increase in window length and a decrease in the number of grasps available to the classifier significantly decrease classification error (p<0.001). These results remained consistent whether the classifier selected or maintained a hand-grasp.
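As a rough illustration of the pattern-recognition pipeline described above (windowed EMG features followed by a classifier), the sketch below uses classic time-domain features and scikit-learn's linear discriminant analysis; the window length, step size and variable names are assumptions, and the paper's exact feature set and classifier may differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(x):
    """Classic time-domain EMG features for one channel of one analysis window."""
    d = np.diff(x)
    return np.array([
        np.mean(np.abs(x)),                      # mean absolute value
        np.sum(np.abs(d)),                       # waveform length
        np.count_nonzero(np.diff(np.sign(x))),   # zero crossings
        np.count_nonzero(np.diff(np.sign(d))),   # slope-sign changes
    ])

def extract_windows(emg, win, step):
    """emg: (n_samples, n_channels) -> feature matrix (n_windows, 4*n_channels)."""
    feats = [np.hstack([td_features(emg[s:s + win, c]) for c in range(emg.shape[1])])
             for s in range(0, emg.shape[0] - win + 1, step)]
    return np.vstack(feats)

# Hypothetical usage: training data would mix intrinsic and extrinsic channels and
# windows recorded in varied wrist positions and during wrist motion.
# clf = LinearDiscriminantAnalysis().fit(extract_windows(train_emg, 200, 50), train_labels)
# grasp = clf.predict(extract_windows(new_emg, 200, 50))
```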
da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F
2016-03-01
An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares regression (PLS) was developed. Interval partial least squares (iPLS) and synergy interval partial least squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g⁻¹ SMZ and 80 to 240 mg g⁻¹ TMP. Spectral data were recorded between 600 and 4000 cm⁻¹ with a 4 cm⁻¹ resolution by DRIFTS. The proposed procedure was compared to high-performance liquid chromatography (HPLC). The root mean square errors of prediction (RMSEP) obtained during validation of the siPLS models for SMZ and TMP demonstrate that this approach is a valid technique for the quantitative analysis of pharmaceutical formulations. The interval selection algorithm allowed building regression models with smaller errors than the full-spectrum PLS model. After selection of the best spectral regions by siPLS, an RMSEP of 13.03 mg g⁻¹ for SMZ and 4.88 mg g⁻¹ for TMP was obtained.
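For readers unfamiliar with interval selection, the following sketch shows one plausible way to implement iPLS and a small exhaustive siPLS search with scikit-learn's PLSRegression; the interval count, component number and cross-validation settings are assumptions, not the authors' parameters.

```python
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsecv(X, y, n_components=5, cv=5):
    """Cross-validated root mean square error for a PLS model on the given columns."""
    pls = PLSRegression(n_components=min(n_components, X.shape[1]))
    y_hat = cross_val_predict(pls, X, y, cv=cv)
    return np.sqrt(np.mean((y - y_hat.ravel()) ** 2))

def ipls(X, y, n_intervals=20):
    """iPLS: score each equal-width spectral interval by cross-validated RMSE."""
    edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    scores = [rmsecv(X[:, a:b], y) for a, b in zip(edges[:-1], edges[1:])]
    return int(np.argmin(scores)), scores

def sipls(X, y, n_intervals=20, n_combine=2):
    """siPLS: exhaustively score small combinations ('synergies') of intervals."""
    edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    best, best_rmse = None, np.inf
    for combo in combinations(range(n_intervals), n_combine):
        cols = np.hstack([np.arange(edges[i], edges[i + 1]) for i in combo])
        r = rmsecv(X[:, cols], y)
        if r < best_rmse:
            best, best_rmse = combo, r
    return best, best_rmse
```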
Zhang, You; Ma, Jianhua; Iyengar, Puneeth; Zhong, Yuncheng; Wang, Jing
2017-01-01
Purpose Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. Materials and Methods ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique (‘ART-dTV’). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the ‘ground-truth’ volume, normalized by the square root of the sum of squared ‘ground-truth’ voxel intensities. In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the ‘gold-standard’ volumes reconstructed from fully-sampled projections. Results For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique and the ADRIC technique were 14.64%, 19.21% and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04% and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, 6.80% by using 20 projections; and 12.14%, 6.91% and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new ‘gold-standard’ balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. Conclusion The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy. 
The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation and treatment outcome analysis. PMID:28380247
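The relative error metric quoted above is defined explicitly in the abstract, and the universal quality index is a standard global measure; a minimal sketch of both (computed over whole volumes rather than sliding windows, which is a simplification) might look as follows.

```python
import numpy as np

def relative_error(recon, truth):
    """As defined above: ||recon - truth||_2 / ||truth||_2 over all voxels."""
    return np.linalg.norm(recon - truth) / np.linalg.norm(truth)

def universal_quality_index(x, y):
    """Wang-Bovik universal quality index, evaluated globally over the volumes."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```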
NASA Astrophysics Data System (ADS)
Achsah, R. S.; Shyam, S.; Mayuri, N.; Anantharaj, R.
2018-04-01
Deep eutectic solvents (DESs) and ionic liquids (ILs) find applications in various fields of research and in industry due to their attractive physicochemical properties. In this study, the combined thermodynamic properties of DES (choline chloride-glycerol) + IL1 (1-butyl-3-methylimidazolium acetate) and DES (choline chloride-glycerol) + IL2 (1-ethyl-3-methylimidazolium ethyl sulphate) have been studied. Thermodynamic properties such as the excess molar volume, partial molar volume, excess partial molar volume and apparent molar volume were calculated for mole fractions ranging from 0 to 1 and temperatures from 293.15 K to 343.15 K. These properties were analyzed in order to understand the solvent behaviour of the DES + IL mixtures at different temperatures and their molecular interactions, with the aim of enhancing solvent performance and process efficiency at a fixed composition and temperature.
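For reference, the excess molar volume of a binary mixture is commonly obtained from measured densities, and its composition dependence is often smoothed with a Redlich-Kister polynomial; the sketch below illustrates both steps under those standard definitions (the fitting order and variable names are assumptions).

```python
import numpy as np

def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """V^E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2 (e.g. cm^3/mol)."""
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2

def redlich_kister_fit(x1, VE, order=3):
    """Fit V^E = x1*(1-x1) * sum_i A_i*(2*x1-1)**i; returns the coefficients A_i."""
    x1, VE = np.asarray(x1, float), np.asarray(VE, float)
    basis = np.vstack([(x1 * (1 - x1)) * (2 * x1 - 1) ** i for i in range(order + 1)]).T
    A, *_ = np.linalg.lstsq(basis, VE, rcond=None)
    return A
```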
Consistency in seroma contouring for partial breast radiotherapy: Impact of guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Elaine K.; Truong, Pauline T.; Kader, Hosam A.
2006-10-01
Purpose: Inconsistencies in contouring target structures can undermine the precision of conformal radiation therapy (RT) planning and compromise the validity of clinical trial results. This study evaluated the impact of guidelines on consistency in target volume contouring for partial breast RT planning. Methods and Materials: Guidelines for target volume definition for partial breast radiation therapy (PBRT) planning were developed by members of the steering committee for a pilot trial of PBRT using conformal external beam planning. In phase 1, delineation of the breast seroma in 5 early-stage breast cancer patients was independently performed by a 'trained' cohort of four radiation oncologists who were provided with these guidelines and an 'untrained' cohort of four radiation oncologists who contoured without guidelines. Using automated planning software, the seroma target volume (STV) was expanded into a clinical target volume (CTV) and planning target volume (PTV) for each oncologist. Means and standard deviations were calculated, and two-tailed t tests were used to assess differences between the 'trained' and 'untrained' cohorts. In phase 2, all eight radiation oncologists were provided with the same contouring guidelines, and were asked to delineate the seroma in five new cases. Data were again analyzed to evaluate consistency between the two cohorts. Results: The 'untrained' cohort contoured larger seroma volumes and had larger CTVs and PTVs compared with the 'trained' cohort in three of five cases. When seroma contouring was performed after review of contouring guidelines, the differences in the STVs, CTVs, and PTVs were no longer statistically significant. Conclusion: Guidelines can improve consistency among radiation oncologists performing target volume delineation for PBRT planning.
The error analysis of Lobular and segmental division of right liver by volume measurement.
Zhang, Jianfei; Lin, Weigang; Chi, Yanyan; Zheng, Nan; Xu, Qiang; Zhang, Guowei; Yu, Shengbo; Li, Chan; Wang, Bin; Sui, Hongjin
2017-07-01
The aim of this study is to explore the inconsistencies between right liver volume as measured by imaging and the actual anatomical appearance of the right lobe. Five healthy donated livers were studied. The liver slices were obtained with hepatic segments multicolor-infused through the portal vein. In the slices, the lobes were divided by two methods: radiological landmarks and real anatomical boundaries. The areas of the right anterior lobe (RAL) and right posterior lobe (RPL) on each slice were measured using Photoshop CS5 and AutoCAD, and the volumes of the two lobes were calculated. There was no statistically significant difference between the volumes of the RAL or RPL as measured by the radiological landmarks (RL) and anatomical boundaries (AB) methods. However, the curves of the square error value of the RAL and RPL measured using CT showed that the three lowest points were at the cranial, intermediate, and caudal levels. The U- or V-shaped curves of the square error rate of the RAL and RPL revealed that the lowest value is at the intermediate level and the highest at the cranial and caudal levels. On CT images, less accurate landmarks were used to divide the RAL and RPL at the cranial and caudal layers. The measured volumes of hepatic segments VIII and VI would be less than their true values, and the measured volumes of hepatic segments VII and V would be greater than their true values, according to radiological landmarks. Clin. Anat. 30:585-590, 2017. © 2017 Wiley Periodicals, Inc.
Force estimation from OCT volumes using 3D CNNs.
Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander
2018-07-01
Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as an input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
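As an illustration of the two-branch idea described above, a minimal weight-sharing (Siamese) 3D CNN in PyTorch could look like the sketch below; the layer sizes and input shape are assumptions, and the published network uses a deeper residual architecture with bottlenecks.

```python
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Shared 3D convolutional encoder (shallow stand-in for the residual/bottleneck net)."""
    def __init__(self, in_ch=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(64, feat)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class SiameseForceNet(nn.Module):
    """Two weight-sharing branches (reference vs. deformed OCT volume) -> 3D force vector."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder3D()
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(inplace=True), nn.Linear(64, 3))

    def forward(self, ref_vol, def_vol):
        feats = torch.cat([self.encoder(ref_vol), self.encoder(def_vol)], dim=1)
        return self.head(feats)

# Hypothetical shapes: a batch of 1-channel 64^3 volumes; training would regress the
# output against ground-truth force vectors with, e.g., an MSE loss.
model = SiameseForceNet()
force = model(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 1, 64, 64, 64))  # (2, 3)
```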
Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-02-01
To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US fusion images in radiotherapy treatment planning. Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Volume measurement using 3D US showed a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. These results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US fusion in radiotherapy treatment planning was verified.
Immediate Partial Breast Reconstruction with Endoscopic Latissimus Dorsi Muscle Flap Harvest
Yang, Chae Eun; Roh, Tai Suk; Yun, In Sik; Lew, Dae Hyun
2014-01-01
Background Currently, breast conservation therapy is commonly performed for the treatment of early breast cancer. Depending on the volume excised, patients may require volume replacement, even in cases of partial mastectomy. The use of the latissimus dorsi muscle is the standard method, but this procedure leaves an unfavorable scar on the donor site. We used an endoscope for latissimus dorsi harvesting to minimize the incision, thus reducing postoperative scarring. Methods Ten patients who underwent partial mastectomy and immediate partial breast reconstruction with endoscopic latissimus dorsi muscle flap harvest were reviewed retrospectively. The total operation time, hospital stay, and complications were reviewed. Postoperative scarring, overall shape of the reconstructed breast, and donor site deformity were assessed using a 10-point scale. Results Over a mean follow-up of 11 weeks, no tumor recurrence was reported. The mean operation time was 294.5 (±38.2) minutes. The postoperative hospital stay was 11.4 days. Donor site seroma was reported in four cases and managed by office aspiration and compressive dressing. Postoperative scarring, donor site deformity, and the overall shape of the neobreast were acceptable, all scoring above 7. Conclusions Replacement of 20% to 40% of breast volume in the upper and lower outer quadrants with an endoscopically harvested latissimus dorsi muscle flap is a good alternative reconstruction technique after partial mastectomy. The short incision results in a very acceptable postoperative scar, less pain, and early upper extremity movement. PMID:25276643
Absorbance and fluorometric sensing with capillary wells microplates.
Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian; Liew, Oi Wah; Ng, Tuck Wah
2010-12-01
Detection and readout from small-volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in small liquid volume management. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays without the error often associated with the meniscus that forms at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies. Pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results, particularly in the context of high-throughput screening. We show that the adapted design is also able to detect bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.
Daugirdas, John T
2017-07-01
The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each ml/minute Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
Quantitative Modelling of Trace Elements in Hard Coal.
Smoliński, Adam; Howaniec, Natalia
2016-01-01
The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment. PMID:27438794
Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.
2014-01-01
Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380
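The "three-equation three-unknown" step described above amounts to solving, per region, a small linear system in which the measured low- and high-kVp CT numbers are modeled as volume-fraction-weighted sums of the calibration materials' CT numbers, with the fractions constrained to sum to one. A minimal sketch is given below; the calibration and ROI values are illustrative, not from the study.

```python
import numpy as np

def volume_fractions(ct_low, ct_high, cal_low, cal_high):
    """
    Solve the three-equation, three-unknown system for (bone, red marrow, fat)
    volume fractions in one ROI.  cal_low/cal_high are the CT numbers of the pure
    calibration materials at the low/high kVp, ordered (bone, marrow, fat).
    In practice negative solutions may require non-negativity constraints.
    """
    A = np.array([cal_low, cal_high, [1.0, 1.0, 1.0]])
    b = np.array([ct_low, ct_high, 1.0])
    return np.linalg.solve(A, b)

# Hypothetical calibration values (HU) measured from internal inserts.
f_bone, f_marrow, f_fat = volume_fractions(
    ct_low=189.0, ct_high=123.25,
    cal_low=[1200.0, 60.0, -80.0], cal_high=[800.0, 55.0, -90.0])
```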
Ren, Jiayin; Zhou, Zhongwei; Li, Peng; Tang, Wei; Guo, Jixiang; Wang, Hu; Tian, Weidong
2016-09-01
This study aimed to evaluate an innovative workflow for maxillofacial fracture surgery planning and surgical splint design. Maxillofacial multislice computerized tomography (MSCT) data and dental cone beam computerized tomography (CBCT) data were both obtained from 40 normal adults and 58 adults who had suffered fractures. Each part of the CBCT dentition image was registered to the MSCT image using the iterative closest point algorithm. Volume evaluation of the virtual splints designed from the registered MSCT images and from the MSCT images of the same subject was performed. Eighteen patients (group 1) were operated on without any splint. Twenty-one patients (group 2) and 19 patients (group 3) used splints designed from the MSCT images and the registered MSCT images, respectively. The results showed that in fracture patients the mean errors between the two models ranged from 0.53 to 0.92 mm and the RMS errors from 0.38 to 0.69 mm; in normal adults the mean errors ranged from 0.47 to 0.85 mm and the RMS errors from 0.33 to 0.71 mm. Occlusion was recovered in 72.22% of patients in group 1, 85.71% in group 2, and 94.73% in group 3. There was a statistically significant difference between the volumes of the splints designed from the MSCT images and from the registered MSCT images in patients (P < 0.05), in normal adults (P < 0.05), and in the combined group (P < 0.05). The occlusion recovery rate of group 3 was better than that of groups 1 and 2. Integrating CBCT images into MSCT images for splint design was feasible. The volumes of the splints designed from MSCT images tended to be smaller than those designed from the integrated MSCT images. Patients operated on with splints tended to regain occlusion, and patients operated on with splints designed from the registered MSCT images tended to have their occlusion restored.
Algebraic Methods to Design Signals
2015-08-27
Partial listing of related publications: work on sequence pairs with optimal correlation values; a paper in IEEE Transactions on Information Theory, Volume 58, Issue 11, Nov. 2012, pp. 6968-6978; and K. T. Arasu, Pradeep Bansal, Cody Watson, "Partially balanced incomplete block designs with two..."
Partial Updating of TSCA Inventory DataBase; Production and Site Reports; Final Rule
A partial updating of the TSCA inventory database. The final rule requires manufacturers and importers of certain chemical substances included on the TSCA Chemical Substances Inventory to report current data on the production volume, plant site, etc.
Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.
Wang, Yibin; Nedelman, Jerry
2002-04-01
To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration, by assuming a one-compartment model and using an independent estimate of k. The derived functions were used to evaluate the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; a concentration CV of 0.2 or 0.4; and t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction of absorption vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs-time profile. However, only estimation error of k can lead to a Wagner-Nelson estimate of the fraction of drug absorbed greater than unity.
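For orientation, the Wagner-Nelson fraction absorbed is usually written as F(t) = [C(t) + k·AUC(0-t)] / [k·AUC(0-∞)] under a one-compartment model; the sketch below implements that expression with trapezoidal AUC and a log-linear tail extrapolation, assuming an independently estimated k.

```python
import numpy as np

def wagner_nelson(t, C, k):
    """Fraction absorbed vs. time from concentration-time data and an independent k."""
    t, C = np.asarray(t, float), np.asarray(C, float)
    # cumulative trapezoidal AUC from time zero to each sampling time
    auc_t = np.concatenate(([0.0], np.cumsum(np.diff(t) * (C[1:] + C[:-1]) / 2.0)))
    auc_inf = auc_t[-1] + C[-1] / k          # log-linear tail extrapolation
    return (C + k * auc_t) / (k * auc_inf)
```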
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donovan, Ellen M., E-mail: ellen.donovan@icr.ac.u; Ciurlionis, Laura; Fairfoul, Jamie
Purpose: To establish planning solutions for a concomitant three-level radiation dose distribution to the breast using linear accelerator- or tomotherapy-based intensity-modulated radiotherapy (IMRT), for the U.K. Intensity Modulated and Partial Organ (IMPORT) High trial. Methods and Materials: Computed tomography data sets for 9 patients undergoing breast conservation surgery with implanted tumor bed gold markers were used to prepare three-level dose distributions encompassing the whole breast (36 Gy), partial breast (40 Gy), and tumor bed boost (48 or 53 Gy) treated concomitantly in 15 fractions within 3 weeks. Forward and inverse planned IMRT and tomotherapy were investigated as solutions. A standard electron field was compared with a photon field arrangement encompassing the tumor bed boost volume. The out-of-field doses were measured for all methods. Results: Dose-volume constraints of volume >90% receiving 32.4 Gy and volume >95% receiving 50.4 Gy for the whole breast and tumor bed were achieved. The constraint of volume >90% receiving 36 Gy for the partial breast was fulfilled in the inverse IMRT and tomotherapy plans and in 7 of 9 cases of a forward planned IMRT distribution. An electron boost to the tumor bed was inadequate in 8 of 9 cases. The IMRT methods delivered a greater whole body dose than the standard breast tangents. The contralateral lung volume receiving >2.5 Gy was increased in the inverse IMRT and tomotherapy plans, although it did not exceed the constraint. Conclusion: We have demonstrated a set of widely applicable solutions that fulfilled the stringent clinical trial requirements for the delivery of a concomitant three-level dose distribution to the breast.
Defining the Role of Free Flaps in Partial Breast Reconstruction.
Smith, Mark L; Molina, Bianca J; Dayan, Erez; Jablonka, Eric M; Okwali, Michelle; Kim, Julie N; Dayan, Joseph H
2018-03-01
Free flaps have a well-established role in breast reconstruction after mastectomy; however, their role in partial breast reconstruction remains poorly defined. We reviewed our experience with partial breast reconstruction to better understand indications for free tissue transfer. A retrospective review was performed of all patients undergoing partial breast reconstruction at our center between February 2009 and October 2015. We evaluated the characteristics of patients who underwent volume displacement procedures versus volume replacement procedures and free versus pedicled flap reconstruction. There were 78 partial breast reconstructions, with 52 reductions/tissue rearrangements (displacement group) and 26 flaps (replacement group). Bra cup size and body mass index (BMI) were significantly smaller in the replacement group. Fifteen pedicled and 11 free flaps were performed. Most pedicled flaps (80.0%) were used for lateral or upper pole defects. Most free flaps (72.7%) were used for medial and inferior defects or when there was inadequate donor tissue for a pedicled flap. Complications included hematoma, cellulitis, and one aborted pedicled flap. Free and pedicled flaps are useful for partial breast reconstruction, particularly in breast cancer patients with small breasts undergoing breast-conserving treatment (BCT). Flap selection depends on defect size, location, and donor tissue availability. Medial defects are difficult to reconstruct using pedicled flaps due to arc of rotation and intervening breast tissue. Free tissue transfer can overcome these obstacles. Confirming negative margins before flap reconstruction ensures harvest of adequate volume and avoids later re-operation. Judicious use of free flaps for oncoplastic reconstruction expands the possibility for breast conservation. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
White matter atlas of the human spinal cord with estimation of partial volume effect.
Lévy, S; Benhamou, M; Naaman, C; Rainville, P; Callot, V; Cohen-Adad, J
2015-10-01
Template-based analysis has proven to be an efficient, objective and reproducible way of extracting relevant information from multi-parametric MRI data. Using common atlases, it is possible to quantify MRI metrics within specific regions without the need for manual segmentation. This method is therefore free from user bias and amenable to group studies. While template-based analysis is common procedure for the brain, there is currently no atlas of the white matter (WM) spinal pathways. The goals of this study were: (i) to create an atlas of the white matter tracts compatible with the MNI-Poly-AMU template and (ii) to propose methods to quantify metrics within the atlas that account for partial volume effect. The WM atlas was generated by: (i) digitalizing an existing WM atlas from a well-known source (Gray's Anatomy), (ii) registering this atlas to the MNI-Poly-AMU template at the corresponding slice (C4 vertebral level), (iii) propagating the atlas throughout all slices of the template (C1 to T6) using regularized diffeomorphic transformations and (iv) computing partial volume values for each voxel and each tract. Several approaches were implemented and validated to quantify metrics within the atlas, including weighted-average and Gaussian mixture models. Proof-of-concept application was done in five subjects for quantifying magnetization transfer ratio (MTR) in each tract of the atlas. The resulting WM atlas showed consistent topological organization and smooth transitions along the rostro-caudal axis. The median MTR across tracts was 26.2. Significant differences were detected across tracts, vertebral levels and subjects, but not across laterality (right-left). Among the different approaches tested to extract metrics, the maximum a posteriori approach showed the highest performance with respect to noise, inter-tract variability, tract size and partial volume effect. This new WM atlas of the human spinal cord overcomes the biases associated with manual delineation and partial volume effect. Combined with multi-parametric data, the atlas can be applied to study demyelination and degeneration in diseases such as multiple sclerosis and will facilitate the conduction of longitudinal and multi-center studies. Copyright © 2015 Elsevier Inc. All rights reserved.
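The weighted-average extraction mentioned above can be written directly in terms of the atlas partial-volume values; a minimal sketch is shown below (the maximum a posteriori estimator favoured in the study is more involved and is not reproduced here).

```python
import numpy as np

def tract_metric_weighted(metric_map, tract_pve, mask=None):
    """
    Weighted average of an MRI metric (e.g., MTR) within one atlas tract, using the
    tract's partial-volume values as weights.  metric_map and tract_pve must have
    identical shapes; an optional binary mask restricts the computation (e.g., to cord).
    """
    w = tract_pve if mask is None else tract_pve * mask
    return np.sum(w * metric_map) / np.sum(w)
```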
Partial drift volume due to a self-propelled swimmer
NASA Astrophysics Data System (ADS)
Chisholm, Nicholas G.; Khair, Aditya S.
2018-01-01
We assess the ability of a self-propelled swimmer to displace a volume of fluid that is large compared to its own volume via the mechanism of partial drift. The swimmer performs rectilinear locomotion in an incompressible, unbounded Newtonian fluid. The partial drift volume D is the volume of fluid enclosed between the initial and final profiles of an initially flat circular disk of marked fluid elements; the disk is initially aligned perpendicular to the direction of locomotion and subsequently distorted due to the passage of the swimmer, which travels a finite distance. To focus on the possibility of large-scale drift, we model the swimmer simply as a force dipole aligned with the swimming direction. At zero Reynolds number (Re = 0), we demonstrate that D grows without limit as the radius of the marked fluid disk h is made large, indicating that a swimmer at Re = 0 can generate a partial drift volume much larger than its own volume. Next, we consider a steady swimmer at small Re, which is modeled as the force-dipole solution to Oseen's equation. Here, we find that D no longer diverges with h, which is due to inertial screening of viscous forces, and is effectively proportional to the magnitude of the force dipole exerted by the swimmer. The validity of this result is extended to Re ≥ O(1), the realm of intermediate-Re swimmers such as copepods, by taking advantage of the fact that, in this case, the flow is also described by Oseen's equations at distances much larger than the characteristic linear dimension of the swimmer. Next, we utilize an integral momentum balance to demonstrate that our analysis for a steady inertial swimmer also holds, in a time-averaged sense, for an unsteady swimmer that does not experience a net acceleration over a stroke cycle. Finally, we use experimental data to estimate D for a few real swimmers. Interestingly, we find that D depends heavily on the kinematics of swimming, and, in certain cases, D can be significantly greater than the volume of the swimmer at Re ≥ O(1). Our work also highlights that D due to a self-propelled body is fundamentally different from that due to a body towed by an external force. In particular, predictions of D in the latter case cannot be utilized to estimate D for a self-propelled swimmer.
ERIC Educational Resources Information Center
Hirshfeld, Marvin; Leventhal, Jerome I.
Volume 1 of the two-volume annotated bibliography provides a partial listing of available materials for curriculum and instructional enrichment in distributive education. The grouping of all materials was made according to the U. S. Office of Education Classification of Instructional Programs for Distributive Education. Alphabetized by title under…
ERIC Educational Resources Information Center
Hirshfeld, Marvin; Leventhal, Jerome I.
Volume 2 of the two-volume annotated bibliography provides a partial listing of available materials for curriculum and instructional enrichment in distributive education. The grouping of all materials was made according to the U. S. Office of Education Classification of Instructional Programs for Distributive Education. Alphabetized by title under…
Adams, C N; Kattawar, G W
1993-08-20
We have developed a Monte Carlo program that is capable of calculating both the scalar and the Stokes vector radiances in an atmosphere-ocean system in a single computer run. The correlated sampling technique is used to compute radiance distributions for both the scalar and the Stokes vector formulations simultaneously, thus permitting a direct comparison of the errors induced. We show the effect of the volume-scattering phase function on the errors in radiance calculations when one neglects polarization effects. The model used in this study assumes a conservative Rayleigh-scattering atmosphere above a flat ocean. Within the ocean, the volume-scattering function (the first element in the Mueller matrix) is varied according to both a Henyey-Greenstein phase function, with asymmetry factors G = 0.0, 0.5, and 0.9, and also to a Rayleigh-scattering phase function. The remainder of the reduced Mueller matrix for the ocean is taken to be that for Rayleigh scattering, which is consistent with ocean water measurement.
Dotette: Programmable, high-precision, plug-and-play droplet pipetting.
Fan, Jinzhen; Men, Yongfan; Hao Tseng, Kuo; Ding, Yi; Ding, Yunfeng; Villarreal, Fernando; Tan, Cheemeng; Li, Baoqing; Pan, Tingrui
2018-05-01
Manual micropipettes are the most heavily used liquid handling devices in biological and chemical laboratories; however, they suffer from low precision for volumes under 1 μl and inevitable human errors. For a manual device, the human errors introduced pose potential risks of failed experiments, inaccurate results, and financial costs. Meanwhile, low precision under 1 μl can cause severe quantification errors and high heterogeneity of outcomes, becoming a bottleneck of reaction miniaturization for quantitative research in biochemical labs. Here, we report Dotette, a programmable, plug-and-play microfluidic pipetting device based on nanoliter liquid printing. With automated control, protocols designed on computers can be directly downloaded into Dotette, enabling programmable operation processes. Utilizing continuous nanoliter droplet dispensing, the precision of the volume control has been successfully improved from the traditional 20%-50% to less than 5% in the range of 100 nl to 1000 nl. Such a highly automated, plug-and-play add-on to existing pipetting devices not only improves precise quantification in low-volume liquid handling and reduces chemical consumption but also facilitates and automates a variety of biochemical and biological operations.
Leslie, Daniel C; Melnikoff, Brett A; Marchiarullo, Daniel J; Cash, Devin R; Ferrance, Jerome P; Landers, James P
2010-08-07
Quality control of microdevices adds significant costs, in time and money, to any fabrication process. A simple, rapid quantitative method for the post-fabrication characterization of microchannel architecture using the measurement of flow with volumes relevant to microfluidics is presented. By measuring the mass of a dye solution passed through the device, it circumvents traditional gravimetric and interface-tracking methods that suffer from variable evaporation rates and the increased error associated with smaller volumes. The multiplexed fluidic resistance (MFR) measurement method measures flow via stable visible-wavelength dyes, a standard spectrophotometer and common laboratory glassware. Individual dyes are used as molecular markers of flow for individual channels, and in channel architectures where multiple channels terminate at a common reservoir, spectral deconvolution reveals the individual flow contributions. On-chip, this method was found to maintain accurate flow measurement at lower flow rates than the gravimetric approach. Multiple dyes are shown to allow for independent measurement of multiple flows on the same device simultaneously. We demonstrate that this technique is applicable for measuring the fluidic resistance, which is dependent on channel dimensions, in four fluidically connected channels simultaneously, ultimately determining that one chip was partially collapsed and, therefore, unusable for its intended purpose. This method is thus shown to be widely useful in troubleshooting microfluidic flow characteristics.
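The spectral deconvolution step described above is, in essence, linear unmixing of the pooled absorbance spectrum against single-dye reference spectra under Beer-Lambert additivity; a minimal least-squares sketch (array names are hypothetical) follows.

```python
import numpy as np

def dye_concentrations(A_mixture, E):
    """
    Estimate individual dye concentrations from the absorbance spectrum of a pooled
    reservoir, assuming A = E @ c (Beer-Lambert, pathlength folded into E).
    A_mixture: (n_wavelengths,); E: (n_wavelengths, n_dyes) single-dye reference spectra.
    """
    c, *_ = np.linalg.lstsq(E, A_mixture, rcond=None)
    return c

# Relative flows follow from the recovered concentrations and the known stock
# concentration of each dye; fluidic resistances then follow from R = dP / Q.
```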
Temporal and spatial resolution required for imaging myocardial function
NASA Astrophysics Data System (ADS)
Eusemann, Christian D.; Robb, Richard A.
2004-05-01
4-D functional analysis of myocardial mechanics is an area of significant interest and research in cardiology and vascular/interventional radiology. Current multidimensional analysis is limited by the insufficient temporal resolution of x-ray and magnetic resonance based techniques, but recent improvements in system design hold hope for faster and higher-resolution scans that improve images of moving structures, allowing more accurate functional studies, such as in the heart. This paper provides a basis for the requisite temporal and spatial resolution for useful imaging during individual segments of the cardiac cycle. Multiple sample rates during systole and diastole are compared to determine an adequate sample frequency to reduce regional myocardial tracking errors. Concurrently, out-of-plane resolution has to be sufficiently high to minimize the partial volume effect. Temporal resolution and out-of-plane spatial resolution are related factors that must be considered together. The data used for this study come from a DSR dynamic volume image dataset with high temporal and spatial resolution, with implanted fiducial markers used to track myocardial motion. The results of this study suggest reduced exposure and scan time for x-ray and magnetic resonance imaging methods, since a lower sample rate during systole is sufficient, whereas the period of rapid filling during diastole requires higher sampling. This could potentially reduce the cost of these procedures and allow higher patient throughput.
Underlying Information Technology Tailored Quantum Error Correction
2006-07-28
...typically constructed by using an optical beam splitter. • We used a decoherence-free-subspace encoding to reduce the sensitivity of an optical Deutsch... • Simplification of design constraints in solid state QC (incl. quantum dots and superconducting qubits), hybrid quantum error correction and prevention methods... • Performed quantum process tomography on one- and two-photon polarisation states, from full and partial data. • Accomplished complete two-photon QPT. • Discovered surprising...
Finite-volume and partial quenching effects in the magnetic polarizability of the neutron
NASA Astrophysics Data System (ADS)
Hall, J. M. M.; Leinweber, D. B.; Young, R. D.
2014-03-01
There has been much progress in the experimental measurement of the electric and magnetic polarizabilities of the nucleon. Similarly, lattice QCD simulations have recently produced dynamical QCD results for the magnetic polarizability of the neutron approaching the chiral regime. In order to compare the lattice simulations with experiment, calculation of partial quenching and finite-volume effects is required prior to an extrapolation in quark mass to the physical point. These dependencies are described using chiral effective field theory. Corrections to the partial quenching effects associated with the sea-quark-loop electric charges are estimated by modeling corrections to the pion cloud. These are compared to the uncorrected lattice results. In addition, the behavior of the finite-volume corrections as a function of pion mass is explored. Box sizes of approximately 7 fm are required to achieve a result within 5% of the infinite-volume result at the physical pion mass. A variety of extrapolations are shown at different box sizes, providing a benchmark to guide future lattice QCD calculations of the magnetic polarizabilities. A relatively precise value for the physical magnetic polarizability of the neutron is presented, βn = 1.93(11)stat(11)sys × 10⁻⁴ fm³, which is in agreement with current experimental results.
Alzheimer's disease detection using 11C-PiB with improved partial volume effect correction
NASA Astrophysics Data System (ADS)
Raniga, Parnesh; Bourgeat, Pierrick; Fripp, Jurgen; Acosta, Oscar; Ourselin, Sebastien; Rowe, Christopher; Villemagne, Victor L.; Salvado, Olivier
2009-02-01
Despite the increasing use of 11C-PiB in research into Alzheimer's disease (AD), there are few standardized analysis procedures that have been reported or published. This is especially true with regard to partial volume effects (PVE) and partial volume correction. Due to the nature of PET physics and acquisition, PET images exhibit relatively low spatial resolution compared to other modalities, resulting in bias of quantitative results. Although previous studies have applied PVE correction techniques on 11C-PiB data, the results have not been quantitatively evaluated and compared against uncorrected data. The aim of this study is threefold. Firstly, a realistic synthetic phantom was created to quantify PVE. Secondly, MRI partial volume estimate segmentations were used to improve voxel-based PVE correction instead of using hard segmentations. Thirdly, the effect of PVE correction was quantitatively evaluated in 34 subjects (AD=10, Normal Controls (NC)=24), including 12 PiB positive NC. Regional analysis was performed using the Anatomical Automatic Labeling (AAL) template, which was registered to each patient. Regions of interest were restricted to the gray matter (GM) defined by the MR segmentation. The average normalized intensity of the neocortex and selected regions was used to evaluate the discrimination power between AD and NC both with and without PVE correction. Receiver Operating Characteristic (ROC) curves were computed for the binary discrimination task. The phantom study revealed signal losses due to PVE of between 10% and 40%, which were mostly recovered to within 5% after correction. Better classification was achieved after PVE correction, resulting in higher areas under ROC curves.
Navy Fuel Composition and Screening Tool (FCAST) v2.8
2016-05-10
...allowed us to develop partial least squares (PLS) models based on gas chromatography-mass spectrometry (GC-MS) data that predict fuel properties. [Recoverable report keywords and abbreviations: chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station; cumulative predicted residual error sum of squares; DiEGME, diethylene glycol monomethyl ether; FCAST, Fuel Composition and Screening Tool; FFP, fit for...]
Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-06-01
The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose-volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients.
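The gEUD used above is the standard volume-weighted power mean of the dose distribution, and sigmoid TCP/NTCP functions of gEUD are common; the sketch below shows one such (Niemierko-style) formulation with made-up DVH values and model parameters, which may differ from the models implemented in the in-house software.

```python
import numpy as np

def geud(dose_bins, vol_fracs, a):
    """Generalized EUD from a differential DVH (doses in Gy, volumes as fractions).
    Dose bins must be positive when a < 0 (as typically used for tumours)."""
    v = np.asarray(vol_fracs, float)
    v = v / v.sum()
    return np.sum(v * np.asarray(dose_bins, float) ** a) ** (1.0 / a)

def tcp_niemierko(geud_gy, tcd50, gamma50):
    """Sigmoid TCP as a function of gEUD (one common parameterization)."""
    return 1.0 / (1.0 + (tcd50 / geud_gy) ** (4.0 * gamma50))

def ntcp_niemierko(geud_gy, td50, gamma50):
    """Sigmoid NTCP as a function of gEUD (one common parameterization)."""
    return 1.0 / (1.0 + (td50 / geud_gy) ** (4.0 * gamma50))

# Illustrative only: a coarse tumour DVH recomputed with a set-up error would be fed
# through the same functions to obtain delta-gEUD and delta-TCP.
dvh_dose = np.array([66.0, 68.0, 70.0, 72.0])
dvh_vol = np.array([0.10, 0.30, 0.40, 0.20])
print(tcp_niemierko(geud(dvh_dose, dvh_vol, a=-10), tcd50=60.0, gamma50=2.0))
```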
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
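Smalian's formula, used above to aggregate the measured short sections into the reference bole volume, averages the cross-sectional areas at the two ends of each section and multiplies by the section length. A minimal sketch with hypothetical diameters follows; the diameter values and the 1 m section length are assumptions for illustration.

import math

def smalian_volume(end_diameters_cm, section_length_m=1.0):
    # Bole volume (m^3) from diameters (cm) measured at the ends of successive
    # sections of equal length: V = sum over sections of L * (A1 + A2) / 2.
    areas_m2 = [math.pi * (d / 100.0) ** 2 / 4.0 for d in end_diameters_cm]
    return sum(section_length_m * (a1 + a2) / 2.0
               for a1, a2 in zip(areas_m2[:-1], areas_m2[1:]))

# Hypothetical stem measured every 1 m along the bole:
print(smalian_volume([32.0, 30.5, 28.0, 24.5, 19.0, 12.0]))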
Gradient, contact-free volume transfers minimize compound loss in dose-response experiments.
Harris, David; Olechno, Joe; Datwani, Sammy; Ellson, Richard
2010-01-01
More accurate dose-response curves can be constructed by eliminating aqueous serial dilution of compounds. Traditional serial dilutions that use aqueous diluents can result in errors in dose-response values of up to 4 orders of magnitude for a significant percentage of a compound library. When DMSO is used as the diluent, the errors are reduced but not eliminated. The authors use acoustic drop ejection (ADE) to transfer different volumes of model library compounds, directly creating a concentration gradient series in the receiver assay plate. Sample losses and contamination associated with compound handling are therefore avoided or minimized, particularly in the case of less water-soluble compounds. ADE is particularly well suited for assay miniaturization, but gradient volume dispensing is not limited to miniaturized applications.
Reducing Interpolation Artifacts for Mutual Information Based Image Registration
Soleimani, H.; Khosravifard, M.A.
2011-01-01
Medical image registration methods that use mutual information as a similarity measure have improved in recent decades. Mutual information is a basic concept of information theory that indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear interpolation, are used to estimate the joint probability distribution. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method that uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
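The sketch below illustrates how partial volume (PV) interpolation in the sense of Maes et al. fills a joint histogram with the bilinear weights of the four neighbouring pixels (rather than interpolating an intensity) and how mutual information is then computed from it. The simple sub-pixel translation, the bin count, and the intensity scaling are assumptions for illustration; the four-pixel kernel proposed in the paper itself is not reproduced here.

import numpy as np

def mutual_information_pv(ref, flt, shift, bins=64):
    # Mutual information of ref and flt under a sub-pixel translation, with the
    # joint histogram updated by fractional (partial volume) weights.
    ref = np.asarray(ref, dtype=float)
    flt = np.asarray(flt, dtype=float)
    rb = np.clip((ref / ref.max() * (bins - 1)).astype(int), 0, bins - 1)
    fb = np.clip((flt / flt.max() * (bins - 1)).astype(int), 0, bins - 1)
    hist = np.zeros((bins, bins))
    ty, tx = shift
    ny, nx = ref.shape
    for y in range(ny):
        for x in range(nx):
            yf, xf = y + ty, x + tx
            y0, x0 = int(np.floor(yf)), int(np.floor(xf))
            dy, dx = yf - y0, xf - x0
            for yy, xx, w in ((y0, x0, (1 - dy) * (1 - dx)),
                              (y0, x0 + 1, (1 - dy) * dx),
                              (y0 + 1, x0, dy * (1 - dx)),
                              (y0 + 1, x0 + 1, dy * dx)):
                if 0 <= yy < ny and 0 <= xx < nx:
                    hist[rb[y, x], fb[yy, xx]] += w   # fractional histogram update
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))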
Hypercorrection of high confidence errors in lexical representations.
Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa
2013-08-01
Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participant's ratings of the practical value of the words were controlled, a partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly.
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He
2018-01-01
Purpose: We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods: The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests with P < 0.05 considered statistically significant. Results: Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, and ellipsoidal and Linskey’s formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion: The ice cream cone method and other two-component formulas including the ellipsoidal and Linskey’s formulas allow for estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
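The one-component estimators compared above have simple closed forms based on the tumor's orthogonal diameters. The sketch below shows the standard cuboidal, ellipsoidal, and spherical expressions as an assumption about the forms used; the two-component formulas (ice cream cone, Linskey's), which treat the intracanalicular and cisternal parts separately, are not reproduced here.

import math

def cuboidal_volume(a_cm, b_cm, c_cm):
    # product of three orthogonal diameters (cm^3, i.e. mL)
    return a_cm * b_cm * c_cm

def ellipsoidal_volume(a_cm, b_cm, c_cm):
    # ellipsoid with axes equal to the three orthogonal diameters
    return math.pi / 6.0 * a_cm * b_cm * c_cm

def spherical_volume(d_cm):
    # sphere based on a single representative diameter
    return math.pi / 6.0 * d_cm ** 3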
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe; De Bernardi, Elisabetta
Purpose: Quantitative 18F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was demonstrated. The inclusion of the spatial prior improved segmentation accuracy only for lesions surrounded by heterogeneous background: in the relevant simulation subset, the median VE significantly decreased from 13% to 7%. Results on clinical data were found in accordance with simulations, with absolute VE <7%, Dice >0.85, CE <0.30, and HD <0.81. Conclusions: The sole introduction of constraints based on background modeling outperformed standard GMM and the other tested algorithms. Insertion of a spatial prior improved the accuracy for realistic cases of objects in heterogeneous backgrounds. Moreover, robustness against initialization supports the applicability in a clinical setting. In conclusion, application-driven constraints can generally improve the capabilities of GMM and statistical clustering algorithms.
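A minimal one-dimensional sketch of the constrained clustering idea follows: the background classes keep the means and variances measured in a lesion-free background volume of interest, while expectation maximization re-estimates only the lesion classes (and all mixing weights). Variable names and the initialization are assumptions, and the spatial (contiguity) prior of the full algorithm is omitted.

import numpy as np

def constrained_gmm(x, bg_means, bg_vars, n_lesion=4, n_iter=100):
    # EM for a GMM in which the first len(bg_means) classes are frozen
    # to the statistics of a lesion-free background VOI.
    x = np.asarray(x, dtype=float)
    mu = np.concatenate([bg_means, np.linspace(x.min(), x.max(), n_lesion)])
    var = np.concatenate([bg_vars, np.full(n_lesion, x.var())])
    k = mu.size
    pi = np.full(k, 1.0 / k)
    n_frozen = len(bg_means)
    for _ in range(n_iter):
        # E-step: responsibilities under the current Gaussian parameters.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens + 1e-300
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: mixing weights for all classes; means and variances
        # are re-estimated only for the lesion classes.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        for j in range(n_frozen, k):
            mu[j] = (resp[:, j] * x).sum() / nk[j]
            var[j] = (resp[:, j] * (x - mu[j]) ** 2).sum() / nk[j] + 1e-6
    return pi, mu, var, resp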
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
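The recharge arithmetic described above is a unit conversion: average annual base flow divided by drainage area, expressed as a depth of water per year. A minimal sketch with hypothetical numbers, using the approximate factor that 1 ft3/s per square mile sustained for a year corresponds to about 13.57 inches of depth:

def recharge_inches_per_year(base_flow_cfs, drainage_area_sq_mi):
    # Average annual recharge approximated as average annual base flow
    # divided by drainage area, converted to inches of water per year.
    CFS_PER_SQ_MI_TO_IN_PER_YR = 13.57   # approximate conversion factor
    return base_flow_cfs / drainage_area_sq_mi * CFS_PER_SQ_MI_TO_IN_PER_YR

# Hypothetical gaged basin: 28 cfs average base flow over 305 square miles
print(round(recharge_inches_per_year(28.0, 305.0), 2))   # about 1.25 inches/year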
ERIC Educational Resources Information Center
Katch, Frank I.; Katch, Victor L.
1980-01-01
Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value in each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with RayStation treatment planning system and for the support provided.
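A minimal sketch of the voxel-wise linear prescription described above, mapping FDG uptake in the painted volume onto a dose level. The dose bounds and uptake normalization shown are placeholders for illustration, not the prescription function used in the study.

import numpy as np

def dose_painting_prescription(uptake, d_low=68.0, d_high=86.0):
    # Map FDG uptake linearly to a prescribed dose (Gy) inside GTV_PET.
    # d_low / d_high: doses assigned to the lowest / highest uptake (hypothetical).
    uptake = np.asarray(uptake, dtype=float)
    u_min, u_max = uptake.min(), uptake.max()
    return d_low + (uptake - u_min) / (u_max - u_min) * (d_high - d_low)

# Hypothetical uptake values for a few GTV_PET voxels:
print(dose_painting_prescription([2.0, 3.5, 5.0, 8.0]))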
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of the defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to prevent medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background. The issues of identifying a method to extract medical error factors and of reducing the extraction difficulty need to be resolved. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in both Japan and China, 19 error-related items and their levels were extracted. These were then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.
Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II
Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.
2009-01-01
We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This has heavy-tailed regions and can be used to describe a matrix pattern of l×m dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
Accuracy of six elastic impression materials used for complete-arch fixed partial dentures.
Stauffer, J P; Meyer, J M; Nally, J N
1976-04-01
1. The accuracy of four types of impression materials used to make a complete-arch fixed partial denture was evaluated by visual comparison and indirect measurement methods. 2. None of the tested materials allows safe finishing of a complete-arch fixed partial denture on a cast poured from one single master impression. 3. All of the tested materials can be used for impressions for a complete-arch fixed partial denture provided it is not finished on one single cast. Errors can be avoided by making a new impression with the fitted castings in place. Assembly and soldering should be done on the second cast. 4. In making the master fixed partial denture for this study, inaccurate soldering was a problem that was overcome with the use of epoxy glue. Hence, soldering seems to be a major source of inaccuracy for every fixed partial denture.
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1985-01-01
After detailing the construction of spectral approximations to time-dependent mixed initial boundary value problems, a study is conducted of differential equations of the form $\partial u/\partial t = Lu + f$, where for each t, u(t) belongs to a Hilbert space such that u satisfies homogeneous boundary conditions. For the sake of simplicity, it is assumed that L is an unbounded, time-independent linear operator. Attention is given to Fourier methods of both Galerkin and pseudospectral method types, the Galerkin method, the pseudospectral Chebyshev and Legendre methods, the error equation, hyperbolic partial differential equations, and time discretization and iterative methods.
Densities of L-Glutamic Acid HCl Drug in Aqueous NaCl and KCl Solutions at Different Temperatures
NASA Astrophysics Data System (ADS)
Ryshetti, Suresh; Raghuram, Noothi; Rani, Emmadi Jayanthi; Tangeda, Savitha Jyostna
2016-04-01
Densities ($\rho$) of (0.01 to 0.07) mol·kg⁻¹ L-Glutamic acid HCl (L-HCl) drug in water, and in aqueous NaCl and KCl (0.5 and 1.0) mol·kg⁻¹ solutions, have been reported as a function of temperature at T = (298.15, 303.15, 308.15, and 313.15) K and atmospheric pressure. The accurate density ($\rho$) values are used to estimate various parameters such as the apparent molar volume ($V_{2,\phi}$), the partial molar volume ($V_2^{\infty}$), the isobaric thermal expansion coefficient ($\alpha_2$), the partial molar expansion ($E_2^{\infty}$), and Hepler's constant ($(\partial^2 V_2^{\infty}/\partial T^2)_P$). The cosphere overlap model is used to understand the solute-solvent interactions in the ternary mixture (L-HCl drug + NaCl or KCl + water). Hepler's constant $(\partial^2 V_2^{\infty}/\partial T^2)_P$ is utilized to interpret the structure-making or -breaking ability of the L-HCl drug in aqueous NaCl and KCl solutions, and the results indicate that the L-HCl drug acts as a structure maker, i.e., kosmotrope, in aqueous NaCl solutions and as a structure breaker, i.e., chaotrope, in aqueous KCl solutions.
Absorbance and fluorometric sensing with capillary wells microplates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Han Yen; Cheong, Brandon Huey-Ping; Neild, Adrian
2010-12-15
Detection and readout from small volume assays in microplates are a challenge. The capillary wells microplate approach [Ng et al., Appl. Phys. Lett. 93, 174105 (2008)] offers strong advantages in small liquid volume management. An adapted design is described and shown here to be able to detect, in a nonimaging manner, fluorescence and absorbance assays without the error often associated with the meniscus forming at the air-liquid interface. The presence of bubbles in liquid samples residing in microplate wells can cause inaccuracies. Pipetting errors, if not adequately managed, can result in misleading data and wrong interpretations of assay results, particularly in the context of high-throughput screening. We show that the adapted design is also able to detect bubbles and pipetting errors during actual assay runs to ensure accuracy in screening.
Forecasting of monsoon heavy rains: challenges in NWP
NASA Astrophysics Data System (ADS)
Sharma, Kuldeep; Ashrit, Raghavendra; Iyengar, Gopal; Bhatla, R.; Rajagopal, E. N.
2016-05-01
The last decade has seen a tremendous improvement in the forecasting skill of numerical weather prediction (NWP) models. This is attributed to increased sophistication of NWP models, which resolve complex physical processes, advanced data assimilation, increased grid resolution and satellite observations. However, prediction of heavy rain is still a challenge, since the models exhibit large errors in rainfall amounts as well as in spatial and temporal distribution. Two state-of-the-art NWP models have been investigated over the Indian monsoon region to assess their ability to predict heavy rainfall events: the unified model operational at the National Center for Medium Range Weather Forecasting (NCUM) and the unified model operational at the Australian Bureau of Meteorology (Australian Community Climate and Earth-System Simulator -- Global (ACCESS-G)). The recent (JJAS 2015) Indian monsoon season witnessed 6 depressions and 2 cyclonic storms, which resulted in heavy rains and flooding. The CRA method of verification allows the decomposition of forecast errors into errors in rainfall volume, pattern and location. The case-by-case study using the CRA technique shows that the contributions to the rainfall errors from pattern and displacement are large, while the contribution from errors in the predicted rainfall volume is smallest.
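The CRA (contiguous rain area) verification referred to above is usually described as translating the forecast rain entity to best match the observed one and then splitting the total mean squared error into displacement, volume, and pattern components. The sketch below follows that commonly quoted decomposition as an assumption; the exact implementation used in the study is not reproduced.

import numpy as np

def cra_decomposition(obs, fct, fct_shifted):
    # obs, fct    : observed and forecast rainfall over the CRA (same grid)
    # fct_shifted : forecast after the best-match translation
    mse_total = np.mean((fct - obs) ** 2)
    mse_shifted = np.mean((fct_shifted - obs) ** 2)
    mse_displacement = mse_total - mse_shifted            # error removed by shifting
    mse_volume = (fct_shifted.mean() - obs.mean()) ** 2   # bias in rain volume
    mse_pattern = mse_shifted - mse_volume                # residual fine-scale error
    return mse_displacement, mse_volume, mse_pattern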
Testa, A C; Ferrandina, G; Moro, F; Pasciuto, T; Moruzzi, M C; De Blasis, I; Mascilini, F; Foti, E; Autorino, R; Collarino, A; Gui, B; Zannoni, G F; Gambacorta, M A; Valentini, A L; Rufini, V; Scambia, G
2018-05-01
Chemoradiation-based neoadjuvant treatment followed by radical surgery is an alternative therapeutic strategy for locally advanced cervical cancer (LACC), but ultrasound variables used to predict partial response to neoadjuvant treatment are not well defined. Our goal was to analyze prospectively the potential role of transvaginal ultrasound in early prediction of partial pathological response, assessed in terms of residual disease at histology, in a large, single-institution series of LACC patients triaged to neoadjuvant treatment followed by radical surgery. Between October 2010 and June 2014, we screened 108 women with histologically documented LACC Stage IB2-IVA, of whom 88 were included in the final analysis. Tumor volume, three-dimensional (3D) power Doppler indices and contrast parameters were obtained before (baseline examination) and after 2 weeks of treatment. The pathological response was defined as complete (absence of any residual tumor after treatment) or partial (microscopic and/or macroscopic residual tumor at pathological examination). Complete-response and partial-response groups were compared and receiver-operating characteristics (ROC) curves were generated for ultrasound variables that were statistically significant on univariate analysis to evaluate their diagnostic ability to predict partial pathological response. There was a complete pathological response to neoadjuvant therapy in 40 (45.5%) patients and a partial response in 48 (54.5%). At baseline examination, tumor volume did not differ between the two groups. However, after 2 weeks of neoadjuvant treatment, the tumor volume was significantly greater in patients with partial response than it was in those with complete response (P = 0.019). Among the 3D vascular indices, the vascularization index (VI) was significantly lower in the partial-response compared with the complete-response group, both before and after 2 weeks of treatment (P = 0.037 and P = 0.024, respectively). At baseline examination in the contrast analysis, women with partial response had lower tumor peak enhancement (PE) as well as lower tumor wash-in rate (WiR) and longer tumor rise time (RT) compared with complete responders (P = 0.006, P = 0.003, P = 0.038, respectively). There was no difference in terms of contrast parameters after 2 weeks of treatment. ROC-curve analysis of baseline parameters showed that the best cut-offs for predicting partial pathological response were 41.5% for VI (sensitivity, 63.6%; specificity, 66.7%); 16123.5 auxiliary units for tumor PE (sensitivity, 47.9%; specificity, 84.2%); 7.8 s for tumor RT (sensitivity, 68.8%; specificity, 57.9%); and 4902 for tumor WiR (sensitivity, 77.1%; specificity, 60.5%). ROC curves of parameters after 2 weeks of treatment showed that the best cut-off for predicting partial pathological response was 18.1 cm³ for tumor volume (sensitivity, 70.8%; specificity, 60.0%) and 39.5% for VI (sensitivity, 62.5%; specificity, 73.5%). Ultrasound and contrast parameters differ between LACC patients with complete response and those with partial response before and after 2 weeks of neoadjuvant treatment. However, neither ultrasound parameters before treatment nor those after 2 weeks of treatment had cut-off values with acceptable sensitivity and specificity for predicting partial pathological response to neoadjuvant therapy. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of the Viterbi decoders in bursty channels was carried out and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random errors to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random error channel, where the Viterbi decoder made one fewer bit error.
1987-10-15
Guardiani, R. Strane, J. Profeta, Contraves Goerz Corporation, 610 Epsilon Dr., Pittsburg PA S04A "The Global Positioning System as an Aid to the Testing...errors. The weights defining the current error state as a linear combination of the gravity errors at the previous vehicle locations are maintained and...updated at each time step. These weights can also be used to compute the cross-correlation of the system errors with measured gravity quantities for use
Strauss, Rupert W; Muñoz, Beatriz; Wolfson, Yulia; Sophie, Raafay; Fletcher, Emily; Bittencourt, Millena G; Scholl, Hendrik P N
2016-01-01
Aims To estimate disease progression based on analysis of macular volume measured by spectral-domain optical coherence tomography (SD-OCT) in patients affected by Stargardt macular dystrophy (STGD1) and to evaluate the influence of software errors on these measurements. Methods 58 eyes of 29 STGD1 patients were included. Numbers and types of algorithm errors were recorded and manually corrected. In a subgroup of 36 eyes of 18 patients with at least two examinations over time, total macular volume (TMV) and volumes of all nine Early Treatment of Diabetic Retinopathy Study (ETDRS) subfields were obtained. Random effects models were used to estimate the rate of change per year for the population, and empirical Bayes slopes were used to estimate yearly decline in TMV for individual eyes. Results 6958 single B-scans from 190 macular cube scans were analysed. 2360 (33.9%) showed algorithm errors. Mean observation period for follow-up data was 15 months (range 3–40). The median (IQR) change in TMV using the empirical Bayes estimates for the individual eyes was −0.103 (−0.145, −0.059) mm3 per year. The mean (±SD) TMV was 6.321±1.000 mm3 at baseline, and rate of decline was −0.118 mm3 per year (p=0.003). Yearly mean volume change was −0.004 mm3 in the central subfield (mean baseline=0.128 mm3), −0.032 mm3 in the inner (mean baseline=1.484 mm3) and −0.079 mm3 in the outer ETDRS subfields (mean baseline=5.206 mm3). Conclusions SD-OCT measurements allow monitoring the decline in retinal volume in STGD1; however, they require significant manual correction of software errors. PMID:26568636
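A minimal sketch of the kind of random effects model described above, fitted to a hypothetical long-format table with columns tmv, years and eye_id (synthetic data here); the fixed effect of years plays the role of the population rate of TMV decline, and the per-eye random slopes correspond to the empirical Bayes estimates. The study's exact model specification may differ.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for i in range(10):                              # ten hypothetical eyes
    baseline = rng.normal(6.3, 0.8)              # baseline TMV, mm^3
    slope = rng.normal(-0.12, 0.04)              # per-eye decline, mm^3/year
    for t in (0.0, 0.5, 1.0, 1.5):
        rows.append({"eye_id": f"eye{i}", "years": t,
                     "tmv": baseline + slope * t + rng.normal(0, 0.03)})
df = pd.DataFrame(rows)

# Random intercept and slope per eye; the fixed "years" coefficient estimates
# the population rate of TMV change (analogous to the -0.118 mm^3/year above).
fit = smf.mixedlm("tmv ~ years", df, groups=df["eye_id"], re_formula="~years").fit()
print(fit.params["years"])
# fit.random_effects holds the per-eye (empirical Bayes) deviations from that slope.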
Clinical laboratory: bigger is not always better.
Plebani, Mario
2018-06-27
Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations to improve efficiency, increase volumes and reduce the cost per test. However, the relationship between volume and costs is not linear, and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not hold across the entire spectrum of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating a competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.
At the cross-roads: an on-road examination of driving errors at intersections.
Young, Kristie L; Salmon, Paul M; Lenné, Michael G
2013-09-01
A significant proportion of road trauma occurs at intersections. Understanding the nature of driving errors at intersections therefore has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, these errors are compared to errors made mid-block, and the role of wider systems failures in intersection error causation is investigated in an on-road study. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants were found to make 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections compared to mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration was found to play a key role in intersection error causation, with drivers making more errors at partially signalised compared to fully signalised intersections. Copyright © 2012 Elsevier Ltd. All rights reserved.
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracy on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.
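The convergence rates discussed above are typically quantified by the observed order of accuracy obtained from errors on two systematically refined grids; a minimal sketch with illustrative numbers:

import math

def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
    # Observed order p from errors on grids with mesh sizes h and h/refinement_ratio:
    # p = log(E_coarse / E_fine) / log(refinement_ratio).
    return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

# Discretization errors of 4.0e-3 and 1.0e-3 on grids refined by a factor of 2
print(observed_order(4.0e-3, 1.0e-3))   # 2.0, i.e. second-order convergence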
Frequency-domain optical absorption spectroscopy of finite tissue volumes using diffusion theory.
Pogue, B W; Patterson, M S
1994-07-01
The goal of frequency-domain optical absorption spectroscopy is the non-invasive determination of the absorption coefficient of a specific tissue volume. Since this allows the concentration of endogenous and exogenous chromophores to be calculated, there is considerable potential for clinical application. The technique relies on the measurement of the phase and modulation of light, which is diffusely reflected or transmitted by the tissue when it is illuminated by an intensity-modulated source. A model of light propagation must then be used to deduce the absorption coefficient. For simplicity, it is usual to assume the tissue is either infinite in extent (for transmission measurements) or semi-infinite (for reflectance measurements). The goal of this paper is to examine the errors introduced by these assumptions when measurements are actually performed on finite volumes. Diffusion-theory calculations and experimental measurements were performed for slabs, cylinders and spheres with optical properties characteristic of soft tissues in the near infrared. The error in absorption coefficient is presented as a function of object size as a guideline to when the simple models may be used. For transmission measurements, the error is almost independent of the true absorption coefficient, which allows absolute changes in absorption to be measured accurately. The implications of these errors in absorption coefficient for two clinical problems--quantitation of an exogenous photosensitizer and measurement of haemoglobin oxygenation--are presented and discussed.
Oh-Oka, Hitoshi; Nose, Ryuichiro
2005-09-01
Using a portable three-dimensional ultrasound scanning device (The Bladder Scan BVI6100, Diagnostic Ultrasound Corporation), we examined measured values of bladder volume, especially focusing on volumes lower than 100 ml. A total of 100 patients (male: 66, female: 34) were enrolled in the study. We made a comparison study between the measured value (the average of three measurements of bladder urine volume after a trial in male and female modes) using BVI6100, and the actual measured value of the sample obtained by urethral catheterization in each patient. We examined the factors which could increase the error rate. We also introduced effective techniques to reduce measurement errors. The actual measured values in all patients correlated well with the average value of three measurements after a trial in a male mode of the BVI6100. The correlation coefficient was 0.887, the error rate was -4.6 ± 24.5%, and the average coefficient of variation was 15.2. It was observed that the measurement result using the BVI6100 is influenced by patient-side factors (extracted edges between bladder wall and urine, thickened bladder wall, irregular bladder wall, flattened rate of bladder, mistaking prostate for bladder in male, mistaking bladder for uterus in a female mode, etc.) or examiner-side factors (angle between BVI and abdominal wall, compatibility between abdominal wall and ultrasound probe, controlling deflection while using probe, etc). When appropriate patients are chosen and proper measurement is performed, BVI6100 provides significantly higher accuracy in determining bladder volume, compared with existing abdominal ultrasound methods. BVI6100 is a convenient and extremely effective device also for the measurement of bladder urine over 100 ml.
Kofman, Rianne; Beekman, Anna M; Emmelot, Cornelis H; Geertzen, Jan H B; Dijkstra, Pieter U
2018-06-01
Non-contact scanners may have potential for measurement of residual limb volume. Different non-contact scanners have been introduced during the last decades. Reliability and usability (practicality and user friendliness) should be assessed before introducing these systems into clinical practice. The aim of this study was to analyze the measurement properties and usability of four non-contact scanners (TT Design, Omega Scanner, BioSculptor Bioscanner, and Rodin4D Scanner). Quasi-experimental design. Nine (geometric and residual limb) models were measured on two occasions, each consisting of two sessions, thus four sessions in total. In each session, four observers used the four systems for volume measurement. The mean for each model, repeatability coefficients for each system, variance components, and their two-way interactions of measurement conditions were calculated. User satisfaction was evaluated with the Post-Study System Usability Questionnaire. Systematic differences between the systems were found in volume measurements. Most of the variance was explained by the model (97%), while error variance was 3%. Measurement system and the interaction between system and model explained 44% of the error variance. Repeatability coefficients of the systems ranged from 0.101 (Omega Scanner) to 0.131 L (Rodin4D). Differences in Post-Study System Usability Questionnaire scores between the systems were small and not significant. The systems were reliable in determining residual limb volume. Measurement systems and the interaction between system and residual limb model explained most of the error variance. The differences in repeatability coefficient and usability between the four CAD/CAM systems were small. Clinical relevance: If accurate measurements of residual limb volume are required (in the case of research), modern non-contact scanners should be taken into consideration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; Kiess, A
2015-06-15
Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs for each case, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean±std in cc) between the average manual segmentation and four automatic segmentations are 3.70±2.30(B-spline), 1.25±1.78(demons), 0.93±1.14(optical flow), and 4.39±3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9%(B-spline), 38%(demons), and 51%(optical flow) over the conventional mutual information based method (VelocityAI). Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction can accurately predict the tumor volume change with reduced errors. Although demonstrated only on HN nodal GTVs, the results imply improved accuracy for other critical structures. This work was supported by NIH/NCI under grant R42CA137886.
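A minimal sketch of the slice-based intensity correction idea described above, using histogram matching of each axial CBCT slice to the corresponding planning-CT slice; the study's iterative in-registration correction and GPU implementation are not reproduced, and the array layout is an assumption.

import numpy as np
from skimage.exposure import match_histograms

def correct_cbct_intensities(cbct, planning_ct):
    # Match each axial CBCT slice's histogram to the corresponding CT slice.
    # cbct, planning_ct : 3-D arrays of identical shape (slices, rows, cols).
    corrected = np.empty_like(cbct, dtype=float)
    for k in range(cbct.shape[0]):
        corrected[k] = match_histograms(cbct[k].astype(float),
                                        planning_ct[k].astype(float))
    return corrected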
Online measurement of urea concentration in spent dialysate during hemodialysis.
Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J
2004-01-01
We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
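A minimal sketch of the processing chain described above (Savitzky-Golay filtering of the spectra followed by a partial least-squares calibration); the filter window, polynomial order, derivative order, and number of latent variables are placeholders, not the settings used in the study.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def calibrate_and_predict(spectra_train, urea_train, spectra_new,
                          window=15, polyorder=2, n_components=8):
    # Derivative filtering suppresses slowly varying (temperature-driven) baselines;
    # PLS then maps the filtered spectra to urea concentration (mmol/L).
    x_train = savgol_filter(spectra_train, window, polyorder, deriv=1, axis=1)
    x_new = savgol_filter(spectra_new, window, polyorder, deriv=1, axis=1)
    pls = PLSRegression(n_components=n_components)
    pls.fit(x_train, urea_train)
    return pls.predict(x_new).ravel()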
Can partial coherence interferometry be used to determine retinal shape?
Atchison, David A; Charman, W Neil
2011-05-01
To determine likely errors in estimating retinal shape using partial coherence interferometric instruments when no allowance is made for optical distortion. Errors were estimated using Gullstrand no. 1 schematic eye and variants which included a 10 diopter (D) axial myopic eye, an emmetropic eye with a gradient-index lens, and a 10.9 D accommodating eye with a gradient-index lens. Performance was simulated for two commercial instruments, the IOLMaster (Carl Zeiss Meditec) and the Lenstar LS 900 (Haag-Streit AG). The incident beam was directed toward either the center of curvature of the anterior cornea (corneal-direction method) or the center of the entrance pupil (pupil-direction method). Simple trigonometry was used with the corneal intercept and the incident beam angle to estimate retinal contour. Conics were fitted to the estimated contours. The pupil-direction method gave estimates of retinal contour that were much too flat. The cornea-direction method gave similar results for IOLMaster and Lenstar approaches. The steepness of the retinal contour was slightly overestimated, the exact effects varying with the refractive error, gradient index, and accommodation. These theoretical results suggest that, for field angles ≤30°, partial coherence interferometric instruments are of use in estimating retinal shape by the corneal-direction method with the assumptions of a regular retinal shape and no optical distortion. It may be possible to improve on these estimates out to larger field angles by using optical modeling to correct for distortion.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
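A minimal sketch of the type of simulation described above: events are laid out on a fine time grid and the true proportion of time occupied is compared with the estimates produced by momentary time sampling, partial-interval recording, and whole-interval recording. Interval length, event duration, and event count are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)

def simulate(obs_s=3600, interval_s=30, n_events=40, event_s=20):
    # Compare interval sampling estimates with the true event duration proportion.
    occupied = np.zeros(obs_s, dtype=bool)            # 1-second resolution occupancy
    starts = rng.integers(0, obs_s - event_s, n_events)
    for s in starts:
        occupied[s:s + event_s] = True
    true_prop = occupied.mean()
    intervals = occupied[: (obs_s // interval_s) * interval_s]
    intervals = intervals.reshape(-1, interval_s)
    mts = intervals[:, -1].mean()                     # momentary time sampling
    pir = intervals.any(axis=1).mean()                # partial-interval recording
    wir = intervals.all(axis=1).mean()                # whole-interval recording
    return true_prop, mts, pir, wir

print(simulate())   # PIR tends to overestimate and WIR to underestimate the true proportion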
Phillip, Veit; Zahel, Tina; Danninger, Assiye; Erkan, Mert; Dobritz, Martin; Steiner, Jörg M; Kleeff, Jörg; Schmid, Roland M; Algül, Hana
2015-01-01
Regeneration of the pancreas has been well characterized in animal models. However, there are conflicting data on the regenerative capacity of the human pancreas. The aim of the present study was to assess the regenerative capacity of the human pancreas. In a retrospective study, data from patients undergoing left partial pancreatic resection at a single center were eligible for inclusion (n = 185). Volumetry was performed based on 5 mm CT-scans acquired through a 256-slice CT-scanner using a semi-automated software. Data from 24 patients (15 males/9 females) were included. Mean ± SD age was 68 ± 11 years (range, 40-85 years). Median time between surgery and the 1st postoperative CT was 9 days (range, 0-27 days; IQR, 7-13), 55 days (range, 21-141 days; IQR, 34-105) until the 2nd CT, and 191 days (range, 62-1902; IQR, 156-347) until the 3rd CT. The pancreatic volumes differed significantly between the first and the second postoperative CT scans (median volume 25.6 mL and 30.6 mL, respectively; p = 0.008) and had significantly increased further by the 3rd CT scan (median volume 37.9 mL; p = 0.001 for comparison with 1st CT scan and p = 0.003 for comparison with 2nd CT scan). The human pancreas shows a measurable and considerable potential of volumetric gain after partial resection. Multidetector-CT based semi-automated volume analysis is a feasible method for follow-up of the volume of the remaining pancreatic parenchyma after partial pancreatectomy. Effects on exocrine and endocrine pancreatic function have to be evaluated in a prospective manner. Copyright © 2015 IAP and EPC. Published by Elsevier B.V. All rights reserved.
Kaufmann, Lisa-Katrin; Baur, Volker; Hänggi, Jürgen; Jäncke, Lutz; Piccirelli, Marco; Kollias, Spyros; Schnyder, Ulrich; Pasternak, Ofer; Martin-Soelch, Chantal; Milos, Gabriella
2017-07-01
Acute anorexia nervosa (AN) is characterized by reduced brain mass and corresponding increased sulcal and ventricular cerebrospinal fluid. Recent studies of white matter using diffusion tensor imaging consistently identified alterations in the fornix, such as reduced fractional anisotropy (FA). However, because the fornix penetrates the ventricles, it is prone to cerebrospinal fluid-induced partial volume effects that interfere with a valid assessment of FA. We investigated the hypothesis that in the acute stage of AN, FA of the fornix is markedly affected by ventricular volumes. First, using diffusion tensor imaging data we established the inverse associations between forniceal FA and volumes of the third and lateral ventricles in a prestudy with 32 healthy subjects to demonstrate the strength of ventricular influence on forniceal FA independent of AN. Second, we investigated a sample of 25 acute AN patients and 25 healthy control subjects. Using ventricular volumes as covariates markedly reduced the group effect of forniceal FA, even with tract-based spatial statistics focusing only on the center of the fornix. In addition, after correcting for free water on voxel level, the group differences in forniceal FA between AN patients and controls disappeared completely. It is unlikely that microstructural changes affecting FA occurred in the fornix of AN patients. Previously identified alterations in acute AN may have been biased by partial volume effects and the proposed central role of this structure in the pathophysiology may need to be reconsidered. Future studies on white matter alterations in AN should carefully deal with partial volume effects. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Anatomic comparison of traditional and enucleation partial nephrectomy specimens.
Calaway, Adam C; Gondim, Dibson D; Flack, Chandra K; Jacob, Joseph M; Idrees, Muhammad T; Boris, Ronald S
2017-05-01
To compare pseudocapsule (PC) properties of clear cell renal cell carcinoma tumors removed via both traditional partial nephrectomy (PNx) and enucleative techniques as well as quantify the difference in volume of normal renal parenchyma removed between groups. A retrospective review of clear cell PNx specimens between 2011 and 2014 was performed. All patients undergoing tumor enucleation (TE) were included. A single pathologist reviewed the pathological specimens. This cohort was compared with a previously collected clear cell traditional PNx database. A total of 47 clear cell partial nephrectomies were reviewed (34 PNx and 13 TE). Invasion of tumor completely through the PC and positive surgical margins were seen in 2 (5.8%) and 1 (7.7%) of traditional and TE specimens, respectively (P = 0.82). PC mean (0.63 vs. 0.52 mm), maximum (1.39 vs. 1.65 mm), and minimum thickness (0.27 vs. 0.19 mm) were similar between cohorts (P = 0.29, P = 0.36, and P = 0.44). Gross specimen volume varied considerably between the 2 groups (35.6 vs. 17.9 cm³, P ≤ 0.05) although tumor volume did not (12 vs. 14.2 cm³, P = 0.64). The renal tumor consisted of only 37% of the total volume of the traditional PNx specimens compared to 80% of the volume in TEs (P < 0.01). Four TE specimens (31%) were "true" TEs (no additional parenchyma identified outside of the PC). PC properties appear independent of surgical technique. True TEs are uncommon. Regardless, there is considerable volume discrepancy of normal renal parenchyma removed between enucleative and nonenucleative PNx groups. Copyright © 2017 Elsevier Inc. All rights reserved.
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims to validate the left ventricular (LV) volume estimates obtained by processing volumetric data utilizing a segmentation model based on the level set technique. The validation has been performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol has been defined. The validation protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects, who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with volume measured by utilizing the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable on human hearts in clinical practice.
NASA Astrophysics Data System (ADS)
2013-01-01
Due to a production error, the article 'Corrigendum: Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard' by Abhinav K Jha, Matthew A Kupinski, Jeffrey J Rodriguez, Renu M Stephen and Alison T Stopeck was duplicated and the article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters was omitted in the print version of Physics in Medicine & Biology, volume 58, issue 1. The online versions of both articles are not affected. The article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters will be included in the print version of this issue (Physics in Medicine & Biology, volume 58, issue 2.) We apologise unreservedly for this error. Jon Ruffle Publisher
Correlations of π N partial waves for multireaction analyses
Doring, M.; Revier, J.; Ronchen, D.; ...
2016-06-15
In the search for missing baryonic resonances, many analyses include data from a variety of pion- and photon-induced reactions. For elastic πN scattering, however, usually the partial waves of the SAID (Scattering Analysis Interactive Database) or other groups are fitted, instead of data. We provide the partial-wave covariance matrices needed to perform correlated χ² fits, in which the obtained χ² equals the actual χ² up to nonlinear and normalization corrections. For any analysis relying on partial waves extracted from elastic pion scattering, this is a prerequisite to assess the significance of resonance signals and to assign any uncertainty to the results. Lastly, the influence of systematic errors is also considered.
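A correlated χ² fit of this kind evaluates the residuals between the extracted partial-wave points and the model with the full covariance matrix rather than with independent errors; a minimal sketch:

import numpy as np

def correlated_chi2(data, model, covariance):
    # chi^2 = r^T C^{-1} r, with r the residual vector and C the covariance
    # matrix of the partial-wave points (solved rather than explicitly inverted).
    r = np.asarray(data, dtype=float) - np.asarray(model, dtype=float)
    return float(r @ np.linalg.solve(np.asarray(covariance, dtype=float), r))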
Li, Xia; Dawant, Benoit M.; Welch, E. Brian; Chakravarthy, A. Bapsi; Xu, Lei; Mayer, Ingrid; Kelley, Mark; Meszoely, Ingrid; Means-Powell, Julie; Gore, John C.; Yankeelov, Thomas E.
2010-01-01
Purpose: The authors present a method to validate coregistration of breast magnetic resonance images obtained at multiple time points during the course of treatment. In performing sequential registration of breast images, the effects of patient repositioning, as well as possible changes in tumor shape and volume, must be considered. The authors accomplish this by extending the adaptive bases algorithm (ABA) to include a tumor-volume preserving constraint in the cost function. In this study, the authors evaluate this approach using a novel validation method that simulates not only the bulk deformation associated with breast MR images obtained at different time points, but also the reduction in tumor volume typically observed as a response to neoadjuvant chemotherapy. Methods: For each of the six patients, high-resolution 3D contrast enhanced T1-weighted images were obtained before treatment, after one cycle of chemotherapy and at the conclusion of chemotherapy. To evaluate the effects of decreasing tumor size during the course of therapy, simulations were run in which the tumor in the original images was contracted by 25%, 50%, 75%, and 95%, respectively. The contracted area was then filled using texture from local healthy appearing tissue. Next, to simulate the post-treatment data, the simulated (i.e., contracted tumor) images were coregistered to the experimentally measured post-treatment images using a surface registration. By comparing the deformations generated by the constrained and unconstrained version of ABA, the authors assessed the accuracy of the registration algorithms. The authors also applied the two algorithms on experimental data to study the tumor volume changes, the value of the constraint, and the smoothness of transformations. Results: For the six patient data sets, the average voxel shift error (mean±standard deviation) for the ABA with constraint was 0.45±0.37, 0.97±0.83, 1.43±0.96, and 1.80±1.17 mm for the 25%, 50%, 75%, and 95% contraction simulations, respectively. In comparison, the average voxel shift error for the unconstrained ABA was 0.46±0.29, 1.13±1.17, 2.40±2.04, and 3.53±2.89 mm, respectively. These voxel shift errors translate into compression of the tumor volume: The ABA with constraint returned volumetric errors of 2.70±4.08%, 7.31±4.52%, 9.28±5.55%, and 13.19±6.73% for the 25%, 50%, 75%, and 95% contraction simulations, respectively, whereas the unconstrained ABA returned volumetric errors of 4.00±4.46%, 9.93±4.83%, 19.78±5.657%, and 29.75±15.18%. The ABA with constraint yields a smaller mean shift error, as well as a smaller volume error (p=0.03125 for the 75% and 95% contractions), than the unconstrained ABA for the simulated sets. Visual and quantitative assessments on experimental data also indicate a good performance of the proposed algorithm. Conclusions: The ABA with constraint can successfully register breast MR images acquired at different time points with reasonable error. To the best of the authors’ knowledge, this is the first report of an attempt to quantitatively assess in both phantoms and a set of patients the accuracy of a registration algorithm for this purpose. PMID:20632566
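The tumor-volume-preserving idea can be illustrated as a penalty on the Jacobian determinant of the deformation inside the tumor mask, added to the image-similarity term of the registration cost; this is only a conceptual sketch under assumptions (dense displacement field, numpy, an arbitrary weight), not the adaptive bases algorithm itself.

import numpy as np

def volume_preservation_penalty(disp, tumor_mask, spacing=(1.0, 1.0, 1.0)):
    """Mean |det(J) - 1| of a dense 3D displacement field disp[3, X, Y, Z] (mm)
    evaluated inside the tumor mask; det(J) near 1 means local volume is preserved."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]        # dU_i/dx_j
    J = np.stack([np.stack(g, axis=0) for g in grads], axis=0)        # [3, 3, X, Y, Z]
    J = J + np.eye(3)[:, :, None, None, None]                         # J = I + grad(u)
    detJ = np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))            # [X, Y, Z]
    return np.abs(detJ[tumor_mask > 0] - 1.0).mean()

# total_cost = similarity(fixed, warped) + lam * volume_preservation_penalty(u, mask)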
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators can minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented from April 2015 in the Paediatric Critical Care Unit. A prospective study, five months before and five months after implementation of the electronic infusion calculator, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to electronic infusion calculator implementation and 119 electronic infusion calculator prescriptions were reviewed afterwards. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%) (p < 0.001). Electronic infusion calculator prescriptions had no errors in dose, volume or rate calculation compared with handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescriptions significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
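A minimal sketch of the weight-based arithmetic such a calculator automates, using the common "1 ml/h delivers the prescribed dose" dilution convention; the convention, syringe volume and function name are assumptions for illustration, not the hospital's actual formulary rules.

def infusion_prescription(weight_kg, dose_mcg_kg_min, syringe_volume_ml=50.0):
    """Drug amount (mg) to dilute to the syringe volume so that an infusion
    rate of 1 ml/h delivers the prescribed dose in mcg/kg/min."""
    mcg_per_hour = dose_mcg_kg_min * weight_kg * 60.0      # dose delivered per hour
    concentration_mcg_ml = mcg_per_hour / 1.0              # 1 ml/h convention
    drug_mg = concentration_mcg_ml * syringe_volume_ml / 1000.0
    return drug_mg, concentration_mcg_ml, 1.0              # mg to add, mcg/ml, ml/h

# e.g. a 3.5 kg infant at 0.1 mcg/kg/min -> 1.05 mg diluted to 50 ml, run at 1 ml/h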
Matias-Guiu, Pau; Rodríguez-Bencomo, Juan José; Pérez-Correa, José R; López, Francisco
2018-04-15
Developing new distillation strategies can help the spirits industry to improve quality, safety and process efficiency. Batch stills equipped with a packed column and an internal partial condenser are an innovative experimental system, allowing a fast and flexible management of the rectification. In this study, the impact of four factors (heart-cut volume, head-cut volume, pH and cooling flow rate of the internal partial condenser during the head-cut fraction) on 18 major volatile compounds of Muscat spirits was optimized using response surface methodology and desirability function approaches. Results have shown that high rectification at the beginning of the heart-cut enhances the overall positive aroma compounds of the product, reducing off-flavor compounds. In contrast, optimum levels of heart-cut volume, head-cut volume and pH factors varied depending on the process goal. Finally, three optimal operational conditions (head off-flavors reduction, flowery terpenic enhancement and fruity ester enhancement) were evaluated by chemical and sensory analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ning, Nathan S P; Watkins, Susanne C; Gawne, Ben; Nielsen, Daryl L
2012-09-30
Water sharing to meet both agricultural and environmental demands is a critical issue affecting the health of many floodplain river systems around the world. This study explored the potential for using wetlands as temporary off-river storages to conjunctively maintain ecological values and support agricultural demands by assessing the effects of artificial drawdown on wetland aquatic plant communities. An initial experiment was undertaken in outdoor mesocosms in which four different treatments were compared over 131 days: (1) natural drawdown, where the water was left to draw down naturally via evaporation; (2) partial drawdown, where approximately half of the volume of water was pumped out after 42 days; (3) stepped drawdown, where approximately half of the volume of water was pumped out after 42 days and the remaining volume was pumped out after 117 days; and (4) total drawdown, where all of the water was pumped out after 117 days. A complementary field study was subsequently undertaken in which two wetlands were left to draw down naturally and two were partially drawn down artificially (i.e. had approximately half of their volume removed by pumping). Results from both of these studies indicated that neither aquatic plant abundance nor taxon richness was adversely affected by partial drawdown. Rather, both studies showed that aquatic plant communities subjected to a partial drawdown treatment became more species rich and diverse than communities subjected to a natural drawdown treatment. This suggests that it may be possible to use wetlands as intermediary storages for the dual purposes of maintaining ecological values and supporting agricultural demands. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Inoue, Kentaro; Ito, Hiroshi; Goto, Ryoi; Nakagawa, Manabu; Kinomura, Shigeo; Sato, Tachio; Sato, Kazunori; Fukuda, Hiroshi
2005-06-01
Several studies using single photon emission tomography (SPECT) have shown changes in cerebral blood flow (CBF) with age, which were associated with partial volume effects by some authors. Some studies have also demonstrated gender-related differences in CBF. The present study aimed to examine age and gender effects on CBF SPECT images obtained using the 99mTc-ethyl cysteinate dimer and a SPECT scanner, before and after partial volume correction (PVC) using magnetic resonance (MR) imaging. Forty-four healthy subjects (29 males and 15 females; age range, 27-64 y; mean age, 50.0 +/- 9.8 y) participated. Each MR image was segmented to yield grey and white matter images and coregistered to a corresponding SPECT image, followed by convolution to approximate the SPECT spatial resolution. PVC-SPECT images were produced using the convoluted grey matter MR (GM-MR) and white matter MR images. The age and gender effects were assessed using SPM99. Decreases with age were detected in the anterolateral prefrontal cortex and in areas along the lateral sulcus and the lateral ventricle, bilaterally, in the GM-MR images and the SPECT images. In the PVC-SPECT images, decreases in CBF in the lateral prefrontal cortex lost their statistical significance. Decreases in CBF with age found along the lateral sulcus and the lateral ventricle, on the other hand, remained statistically significant, but observation of the spatially normalized MR images suggests that these findings are associated with the dilatation of the lateral sulcus and lateral ventricle, which was not completely compensated for by the spatial normalization procedure. Our present study demonstrated that age effects on CBF in healthy subjects could reflect morphological differences with age in grey matter.
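The grey-matter-based correction step can be sketched as a two-compartment (Mueller-Gaertner-style) operation: smooth the segmented maps to SPECT resolution, subtract the estimated white-matter contribution, and divide by the smoothed grey-matter map. This is a generic illustration under assumptions (scipy Gaussian PSF, a fixed grey-matter threshold), not the authors' exact pipeline.

import numpy as np
from scipy.ndimage import gaussian_filter

def pvc_gm(spect, gm_prob, wm_prob, wm_activity, fwhm_mm, voxel_mm):
    """Two-compartment partial volume correction of a SPECT volume using
    coregistered grey/white matter probability maps convolved to SPECT resolution."""
    sigma = (fwhm_mm / 2.355) / np.asarray(voxel_mm, dtype=float)  # FWHM -> Gaussian sigma per axis
    gm_conv = gaussian_filter(gm_prob, sigma)
    wm_conv = gaussian_filter(wm_prob, sigma)
    corrected = (spect - wm_activity * wm_conv) / np.clip(gm_conv, 0.1, None)
    corrected[gm_conv < 0.1] = 0.0   # suppress voxels with negligible grey matter
    return corrected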
Accurate Measurement of Small Airways on Low-Dose Thoracic CT Scans in Smokers
Conradi, Susan H.; Atkinson, Jeffrey J.; Zheng, Jie; Schechtman, Kenneth B.; Senior, Robert M.; Gierada, David S.
2013-01-01
Background: Partial volume averaging and tilt relative to the scan plane on transverse images limit the accuracy of airway wall thickness measurements on CT scan, confounding assessment of the relationship between airway remodeling and clinical status in COPD. The purpose of this study was to assess the effect of partial volume averaging and tilt corrections on airway wall thickness measurement accuracy and on relationships between airway wall thickening and clinical status in COPD. Methods: Airway wall thickness measurements in 80 heavy smokers were obtained on transverse images from low-dose CT scan using the open-source program Airway Inspector. Measurements were corrected for partial volume averaging and tilt effects using an attenuation- and geometry-based algorithm and compared with functional status. Results: The algorithm reduced wall thickness measurements of smaller airways to a greater degree than larger airways, increasing the overall range. When restricted to analyses of airways with an inner diameter < 3.0 mm, for a theoretical airway of 2.0 mm inner diameter, the wall thickness decreased from 1.07 ± 0.07 to 0.29 ± 0.10 mm, and the square root of the wall area decreased from 3.34 ± 0.15 to 1.58 ± 0.29 mm, comparable to histologic measurement studies. Corrected measurements had higher correlation with FEV1, differed more between BMI, airflow obstruction, dyspnea, and exercise capacity (BODE) index scores, and explained a greater proportion of FEV1 variability in multivariate models. Conclusions: Correcting for partial volume averaging improves accuracy of airway wall thickness estimation, allowing direct measurement of the small airways to better define their role in COPD. PMID:23172175
Su, Yi; Blazey, Tyler M; Owen, Christopher J; Christensen, Jon J; Friedrichsen, Karl; Joseph-Mathurin, Nelly; Wang, Qing; Hornbeck, Russ C; Ances, Beau M; Snyder, Abraham Z; Cash, Lisa A; Koeppe, Robert A; Klunk, William E; Galasko, Douglas; Brickman, Adam M; McDade, Eric; Ringman, John M; Thompson, Paul M; Saykin, Andrew J; Ghetti, Bernardino; Sperling, Reisa A; Johnson, Keith A; Salloway, Stephen P; Schofield, Peter R; Masters, Colin L; Villemagne, Victor L; Fox, Nick C; Förster, Stefan; Chen, Kewei; Reiman, Eric M; Xiong, Chengjie; Marcus, Daniel S; Weiner, Michael W; Morris, John C; Bateman, Randall J; Benzinger, Tammie L S
2016-01-01
Amyloid imaging plays an important role in the research and diagnosis of dementing disorders. Substantial variation exists in the field in the quantitative methods used to measure brain amyloid burden. The aim of this work is to investigate the impact of methodological variations on the quantification of amyloid burden using data from the Dominantly Inherited Alzheimer's Network (DIAN), an autosomal dominant Alzheimer's disease population. Cross-sectional and longitudinal [11C]-Pittsburgh Compound B (PiB) PET imaging data from the DIAN study were analyzed. Four candidate reference regions were investigated for estimation of brain amyloid burden. A regional spread function based technique was also investigated for the correction of partial volume effects. Cerebellar cortex, brainstem, and white matter regions all had stable tracer retention during the course of disease. Partial volume correction consistently improved sensitivity to group differences and longitudinal changes over time. White matter referencing improved statistical power for detecting longitudinal changes in relative tracer retention; however, the reason for this improvement is unclear and requires further investigation. Full dynamic acquisition and kinetic modeling improved statistical power, although it may add cost and time. Several technical variations to amyloid burden quantification were examined in this study. Partial volume correction emerged as the strategy that most consistently improved statistical power for the detection of both longitudinal changes and across-group differences. For the autosomal dominant Alzheimer's disease population with PiB imaging, utilizing brainstem as a reference region with partial volume correction may be optimal for current interventional trials. Further investigation of technical issues in quantitative amyloid imaging in different study populations using different amyloid imaging tracers is warranted.
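The reference-region quantification step reduces to a standardized uptake value ratio (SUVR); a minimal sketch is given below with the masks as inputs. The regional-spread-function partial volume correction and kinetic modelling used in the study are not reproduced here, and the function name is an assumption.

import numpy as np

def suvr(pet_img, target_mask, reference_mask):
    """Mean tracer uptake in the target region divided by mean uptake in the
    reference region (e.g. cerebellar cortex, brainstem, or white matter)."""
    return pet_img[target_mask > 0].mean() / pet_img[reference_mask > 0].mean()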
Dunster, Kimble R; Davies, Mark W; Fraser, John F
2007-01-01
Background: Perfluorocarbon (PFC) vapour in the expired gases during partial liquid ventilation should be prevented from entering the atmosphere and recovered for potential reuse. This study aimed to determine how much PFC liquid could be recovered using a conventional humidified neonatal ventilator with chilled condensers in place of the usual expiratory ventilator circuit, and whether PFC liquid could be recovered when the chilled condensers were placed at the ventilator exhaust outlet. Methods: Using a model lung, perfluorocarbon vapour loss during humidified partial liquid ventilation of a 3.5 kg infant was approximated. For each test, 30 mL of FC-77 was infused into the model lung. Condensers were placed in the expiratory limb of the ventilator circuit and the amounts of PFC (FC-77) and water recovered were measured five times. This was repeated with the condensers placed at the ventilator exhaust outlet. Results: When the condensers were used as the expiratory limb, the mean (± SD) volume of FC-77 recovered was 16.4 mL (± 0.18 mL). When the condensers were connected to the ventilator exhaust outlet, the mean (± SD) volume of FC-77 recovered was 7.6 mL (± 1.14 mL). The volume of FC-77 recovered was significantly higher when the condensers were used as an expiratory limb. Conclusion: Using two series-connected condensers in the ventilator expiratory line, 55% of the PFC liquid (FC-77) can be recovered during partial liquid ventilation without altering the function of the ventilator circuit. This volume of PFC recovered was just over twice that recovered with the condensers connected to the ventilator exhaust outlet. PMID:17537270
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine the dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. Different rotational setup errors around the three Cartesian axes were simulated for five prostate patients, and dosimetric indices, such as the dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of the treatment beams, and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased by about 1%. For the seminal vesicles, slightly larger influences were observed; however, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as the rectum and bladder, is also negligible. This study demonstrates that rotational setup errors degrade the dosimetric coverage of the target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
TED: A Tolerant Edit Distance for segmentation evaluation.
Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew
2017-02-15
In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call the Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like the Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
Khandwala, Yash S; Jeong, In Gab; Kim, Jae Heon; Han, Deok Hyun; Li, Shufeng; Wang, Ye; Chang, Steven L; Chung, Benjamin I
2017-09-01
Little is known about the impact of surgeon volume on the success of robot-assisted partial nephrectomy (RAPN). The objective of this study was to compare the perioperative outcomes and costs related to RAPN by annual surgeon volume. Using the Premier Hospital Database, we retrospectively analyzed 39,773 patients who underwent RAPN between 2003 and 2015 in the United States. Surgeons for each index case were grouped into quintiles for each respective year. Outcomes were 90-day postoperative complications, operating room time (ORT), blood transfusion, length of stay, and direct hospital costs. Logistic regression and generalized linear models were used to identify factors predicting complications and cost. After accounting for patient and hospital demographics, high- and very high-volume surgeons had 40% and 42% decreased odds of major complications (p = 0.045 and p = 0.027, respectively). Surgeons with higher volumes had lower odds of prolonged ORT (0.68 for low, 0.72 for intermediate, 0.56 for high, 0.44 for very high volume, all p < 0.05) and prolonged length of hospital stay (0.67 for intermediate, 0.51 for high, 0.45 for very high volume, all p < 0.01) compared with very low-volume surgeons. The 90-day hospital cost was also significantly lower for surgeons with higher volumes, but the statistical significance diminished after accounting for hospital clustering. Surgeons with very high RAPN volumes were found to have superior perioperative outcomes. Although cost of care appeared to correlate with surgeon volume, there may be other more influential factors predicting cost.
Water Resources Data, New Jersey, Water Year 2002, Volume 1. Surface-Water Data
Reed, T.J.; White, B.T.; Centinaro, G.L.; Dudek, J.F.; Spehar, A.B.; Protz, A.R.; Shvanda, J.C.; Watson, A.F.; Holzer, G.K.
2003-01-01
Water-resources data for the 2002 Water Year for New Jersey are presented in three volumes and consist of records of stage, discharge, and water quality of streams; stage and contents of lakes and reservoirs; and water levels and water quality of ground water. Volume 1 contains discharge records for 93 gaging stations; tide summaries at 31 gaging stations; and stage and contents at 39 lakes and reservoirs. Also included are stage and discharge for 104 crest-stage partial-record stations and stage only at 31 tidal crest-stage gages. Locations of these sites are shown in figures 8-11. Additional water data were collected at various sites that are not part of the systematic data-collection program. Discharge measurements were made at 201 low-flow partial-record stations and 121 miscellaneous sites.
Partial molar volume of anionic polyelectrolytes in aqueous solution.
Salamanca, Constain; Contreras, Martín; Gamboa, Consuelo
2007-05-15
In this work the partial molar volumes (V) of different anionic polyelectrolytes and hydrophobically modified polyelectrolytes (PHM) were measured. Polymers such as polymaleic acid-co-styrene, polymaleic acid-co-1-olefin, polymaleic acid-co-vinyl-2-pyrrolidone, and polyacrylic acid (abbreviated as MAS-n, PA-n-K2, AMVP, and PAA, respectively) were employed. These materials were investigated by density measurements in highly dilute aqueous solutions. The molar volume results allow us to discuss the effect of the carboxylic groups and the contributions from the comonomeric principal chain. PAA presents the smallest V, while the largest V value was found for AMVP. The V of the PHM shows a linear relationship with the number of methylene groups in the lateral chain. The magnitude of the contribution per methylene group is found to decrease as the hydrophobic character of the environment increases.
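The density-to-volume step behind such measurements is usually the apparent molar volume expression, extrapolated to infinite dilution to obtain the partial molar volume; the molality-based form and function below are a generic sketch, not the authors' reported procedure.

def apparent_molar_volume(density, density_solvent, molality, molar_mass):
    """Apparent molar volume V_phi (cm^3/mol) from solution density (g/cm^3),
    solvent density (g/cm^3), molality (mol/kg) and solute molar mass (g/mol).
    Extrapolating V_phi measured at several low concentrations to m -> 0
    gives the partial molar volume at infinite dilution."""
    return (molar_mass / density
            - 1000.0 * (density - density_solvent) / (molality * density * density_solvent))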
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, M; Yu, N; Joshi, N
Purpose: To dosimetrically evaluate the importance of timely review of daily CBCTs for patients with head and neck cancer. Methods: After each fraction, daily cone-beam CTs (CBCTs) for head and neck patients are reviewed by physicians prior to the next treatment. Physician-rejected image registrations of CBCT were identified and analyzed for 17 patients. These CBCT images were rigidly fused with the planning CT images and the contours from the planning CT were transferred to the CBCTs. Because of the limited extension in the superior-inferior dimension, contours with partial volumes in the CBCTs were discarded. The treatment isocenter was placed by applying the clinically recorded shifts to the volume isocenter of the CBCT. Dose was recalculated at the shifted isocenter using a homogeneous dose calculation algorithm. Dosimetrically relevant changes, defined as greater than 5% deviation from the clinically accepted plans recalculated with the same homogeneous dose algorithm, were evaluated for the high dose (HD), intermediate dose (ID), and low dose (LD) CTVs, spinal cord, larynx, oropharynx, parotids, and submandibular glands. Results: Among the seventeen rejected CBCTs, the HD-CTVs, ID-CTVs, and LD-CTVs were completely included in the CBCTs for 17, 1, and 15 patients, respectively. The prescription doses to the HD-CTV, ID-CTV, and LD-CTV were received by <95% of the CTV volumes in 5/17, 1/1, and 5/15 patients, respectively. For the spinal cord, the maximum doses (D0.03cc) were increased >5% in 13 of 17 patients. For the oropharynx, larynx, parotid, and submandibular glands, the mean dose was increased >5% in 7/17, 8/12, 11/16 and 6/16 patients, respectively. Conclusion: Timely review of daily CBCTs for head and neck patients under daily CBCT guidance is important; uncorrected setup errors can translate into dosimetrically relevant dose increases in organs-at-risk and dose decreases in the clinical target volumes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Ahunbay, E; Li, X
Purpose: With the introduction of high-quality treatment imaging during radiation therapy (RT) delivery, e.g., MR-Linac, adaptive replanning, either online or offline, becomes appealing. Dose accumulation of delivered fractions, a prerequisite for adaptive replanning, can be cumbersome and inaccurate. The purpose of this work is to develop an automated process to accumulate daily doses and to assess the dose accumulation accuracy voxel-by-voxel for adaptive replanning. Methods: The process includes the following main steps: 1) reconstructing the daily dose for each delivered fraction with a treatment planning system (Monaco, Elekta) based on the daily images, using the machine delivery log file and considering patient repositioning if applicable; 2) overlaying the daily dose onto the planning image based on deformable image registration (DIR) (ADMIRE, Elekta); 3) assessing voxel dose deformation accuracy based on the deformation field using predetermined criteria; and 4) outputting the accumulated dose and dose-accuracy volume histograms and parameters. Daily CTs acquired using a CT-on-rails during routine CT-guided RT for sample patients with head and neck and prostate cancers were used to test the process. Results: Daily and accumulated doses (dose-volume histograms, etc.) along with their accuracies (dose-accuracy volume histogram) can be robustly generated using the proposed process. The test data for a head and neck cancer case show that the gross tumor volume decreased by 20% towards the end of the treatment course and the parotid gland mean dose increased by 10%. Such information would trigger adaptive replanning for the subsequent fractions. The voxel-based accuracy of the accumulated dose showed that errors in accumulated dose near rigid structures were small. Conclusion: A procedure, as well as the necessary tools, to automatically accumulate daily dose and assess dose accumulation accuracy was developed and is useful for adaptive replanning. Partially supported by Elekta, Inc.
Evaluation of 4D-CT lung registration.
Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; Lorenz, Cristian; Pluim, Josien P W
2009-01-01
Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.
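Landmark-based accuracy assessment of this kind reduces to the target registration error, the residual distance between corresponding landmarks after applying the estimated transform; a minimal numpy sketch, with the callable transform and landmark arrays as assumed inputs.

import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving, transform):
    """Euclidean distance (mm) between each fixed-image landmark mapped through
    the estimated transform and the corresponding moving-image landmark."""
    mapped = np.array([transform(p) for p in landmarks_fixed])
    return np.linalg.norm(mapped - np.asarray(landmarks_moving), axis=1)

# Mean/max TRE can then be stratified by region (e.g. near-pleural vs. near-mediastinal),
# since the distribution of landmarks strongly influences the reported error.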
ERIC Educational Resources Information Center
Jones, Katherine J.; Cochran, Gary; Hicks, Rodney W.; Mueller, Keith J.
2004-01-01
Context:Low service volume, insufficient information technology, and limited human resources are barriers to learning about and correcting system failures in small rural hospitals. This paper describes the implementation of and initial findings from a voluntary medication error reporting program developed by the Nebraska Center for Rural Health…
The Effects of Observation Errors on the Attack Vulnerability of Complex Networks
2012-11-01
In more detail, to construct a true network a topology is selected: Erdős-Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), or small-world. [The remainder of the excerpt is reference-list residue: error and attack tolerance of scale-free networks, Physica A, volume 320, pp. 622-642; Erdős, P. & Rényi, A., 1959, On Random Graphs, I.]
Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.
2016-01-01
Purpose Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. PMID:27084641
NASA Astrophysics Data System (ADS)
Hao, Qun; Li, Tengfei; Hu, Yao
2018-01-01
Surface parameters are the properties that describe the shape of an aspheric surface; they mainly include the vertex radius of curvature (VROC) and the conic constant (CC). The VROC affects basic properties such as the focal length of an aspheric surface, while the CC is the basis for classifying aspheric surfaces. The deviations of these two parameters are defined as surface parameter error (SPE). Precisely measuring SPE is critical for manufacturing and aligning aspheric surfaces. Generally, the SPE of an aspheric surface is measured directly by curvature fitting on absolute profile measurement data from contact or non-contact testing, and most interferometry-based methods adopt null compensators or null computer-generated holograms to measure SPE. To our knowledge, there is no effective way to measure the SPE of high-order aspheric surfaces with non-null interferometry. In this paper, based on the theory of slope asphericity and the best compensation distance (BCD) established in our previous work, we propose an SPE measurement method for high-order aspheric surfaces in a partial compensation interferometry (PCI) system. In the procedure, we first establish a system of two equations in two unknowns by utilizing the SPE-caused BCD change and surface shape change. We can then simultaneously obtain the VROC error and CC error in the PCI system by solving the equations. Simulations were made to verify the method, and the results show a high relative accuracy.
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss the physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
Relative Loading on Biplane Wings
1933-01-01
[OCR fragment] ... 1.00, for which F = 0.675 from figure 6 ... partially to experimental errors and partially to ... The ratios are then multiplied by ... as required by ... plane designers. The definitions have been based on geometrical angles, which may be misleading ... Figure 13 indicates ... [The remainder is residue from a nomenclature table of axes, moments about axes, angles, velocities, forces, designations and symbols.]
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume
1998-01-01
We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.
Acker, James G.; Byrne, R.H.
1989-01-01
Uses several realistic partial molar volume changes (ΔV) for aragonite dissolution in seawater. Indicates that the molar volume change for aragonite dissolution is within the bounds -37 cm³/mole ≥ ΔV ≥ -39.5 cm³/mole. -from Authors
Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing
2015-04-26
To improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues by using daily kilovoltage cone beam computed tomography (kV-CBCT) to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained in all six planes. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively), the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively) and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.
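For context, planning target volume margins are commonly derived from the systematic (Sigma) and random (sigma) components of the measured setup errors; the van Herk recipe below is one widely used choice, shown purely as an illustration since the abstract does not state the exact margin formula used.

import numpy as np

def van_herk_margin(systematic_sd_mm, random_sd_mm):
    """Population-based PTV margin per axis (mm): M = 2.5*Sigma + 0.7*sigma, where
    Sigma is the SD of per-patient mean setup errors (systematic) and sigma the SD
    of the daily residual errors (random)."""
    return 2.5 * np.asarray(systematic_sd_mm) + 0.7 * np.asarray(random_sd_mm)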
Trempler, Ima; Binder, Ellen; El-Sourani, Nadiya; Schiffler, Patrick; Tenberge, Jan-Gerd; Schiffer, Anne-Marike; Fink, Gereon R; Schubotz, Ricarda I
2018-06-01
Parkinson's disease (PD), which is caused by degeneration of dopaminergic neurons in the midbrain, results in a heterogeneous clinical picture including cognitive decline. Since the phasic signal of dopamine neurons is proposed to guide learning by signifying mismatches between subjects' expectations and external events, we here investigated whether akinetic-rigid PD patients without mild cognitive impairment exhibit difficulties in dealing with either relevant (requiring flexibility) or irrelevant (requiring stability) prediction errors. Following our previous study on flexibility and stability in prediction (Trempler et al. J Cogn Neurosci 29(2):298-309, 2017), we then assessed whether deficits would correspond with specific structural alterations in dopaminergic regions as well as in inferior frontal cortex, medial prefrontal cortex, and the hippocampus. Twenty-one healthy controls and twenty-one akinetic-rigid PD patients on and off medication performed a task which required to serially predict upcoming items. Switches between predictable sequences had to be indicated via button press, whereas sequence omissions had to be ignored. Independent of the disease, midbrain volume was related to a general response bias to unexpected events, whereas right putamen volume correlated with the ability to discriminate between relevant and irrelevant prediction errors. However, patients compared with healthy participants showed deficits in stabilisation against irrelevant prediction errors, associated with thickness of right inferior frontal gyrus and left medial prefrontal cortex. Flexible updating due to relevant prediction errors was also affected in patients compared with controls and associated with right hippocampus volume. Dopaminergic medication influenced behavioural performance across, but not within the patients. Our exploratory study warrants further research on deficient prediction error processing and its structural correlates as a core of cognitive symptoms occurring already in early stages of the disease.
Gopal, S; Do, T; Pooni, J S; Martinelli, G
2014-03-01
The Mostcare monitor is a non-invasive cardiac output monitor. It has been well validated in cardiac surgical patients, but there is limited evidence on its use in patients with severe sepsis and septic shock. The study included the first 22 consecutive patients with severe sepsis and septic shock in whom flotation of a pulmonary artery catheter was deemed necessary to guide clinical management. Cardiac output, cardiac index and stroke volume were simultaneously calculated and recorded from a thermodilution pulmonary artery catheter and from the Mostcare monitor. The two methods of measuring cardiac output were compared by Bland-Altman statistics and linear regression analysis. A percentage error of less than 30% was defined as acceptable for this study. Bland-Altman analysis for cardiac output showed a bias of 0.31 L.min-1, a precision (SD) of 1.97 L.min-1 and a percentage error of 62.54%. For cardiac index the bias was 0.21 L.min-1.m-2, the precision 1.10 L.min-1.m-2 and the percentage error 64%. For stroke volume the bias was 5 mL, the precision 24.46 mL and the percentage error 70.21%. Linear regression produced correlation coefficients (r2) for cardiac output, cardiac index, and stroke volume of 0.403, 0.306, and 0.3, respectively. Compared with thermodilution, cardiac output measurements obtained from the Mostcare monitor had an unacceptably high percentage error. The Mostcare monitor proved to be an unreliable device for measuring cardiac output in patients with severe sepsis and septic shock in an intensive care unit.
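The agreement statistics quoted above follow the usual Bland-Altman recipe, with the percentage error taken as 1.96 times the SD of the paired differences divided by the mean cardiac output (the ~30% acceptability threshold of Critchley and Critchley); a numpy-based sketch under those assumptions.

import numpy as np

def bland_altman(reference, test):
    """Bias, precision (SD of differences) and percentage error for paired
    measurements, e.g. thermodilution vs. Mostcare cardiac output."""
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    diff = test - reference
    bias = diff.mean()
    precision = diff.std(ddof=1)
    percentage_error = 100.0 * 1.96 * precision / np.concatenate([reference, test]).mean()
    return bias, precision, percentage_error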
Improving the developability profile of pyrrolidine progesterone receptor partial agonists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallander, Lara S.; Washburn, David G.; Hoang, Tram H.
2010-09-17
The previously reported pyrrolidine class of progesterone receptor partial agonists demonstrated excellent potency but suffered from serious liabilities including hERG blockade and high volume of distribution in the rat. The basic pyrrolidine amine was intentionally converted to a sulfonamide, carbamate, or amide to address these liabilities. The evaluation of the degree of partial agonism for these non-basic pyrrolidine derivatives and demonstration of their efficacy in an in vivo model of endometriosis is disclosed herein.
Thermal equation of state of TiC: A synchrotron x-ray diffraction study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu Xiaohui; National Lab for Condensed Matter Physics, Institute of Physics, CAS, Beijing 100080; Department of Physics, University of Science and Technology of China, Hefei 230026
2010-06-15
The pressure-volume-temperature measurements were carried out for titanium carbide (TiC) at pressures and temperatures up to 8.1 GPa and 1273 K using energy-dispersive synchrotron x-ray diffraction. Thermoelastic parameters were derived for TiC based on a modified high-temperature Birch-Murnaghan equation of state and a thermal pressure approach. With the pressure derivative of the bulk modulus, K0′, fixed at 4.0, we obtain: the ambient bulk modulus K0 = 268(6) GPa, which is comparable to the previously reported value; the temperature derivative of the bulk modulus at constant pressure (∂K_T/∂T)_P = −0.026(9) GPa K⁻¹; the volumetric thermal expansivity α_T (K⁻¹) = a + bT, with a = 1.62(12) × 10⁻⁵ K⁻¹ and b = 1.07(17) × 10⁻⁸ K⁻²; the pressure derivative of thermal expansion (∂α/∂P)_T = (−3.62 ± 1.14) × 10⁻⁷ GPa⁻¹ K⁻¹; and the temperature derivative of the bulk modulus at constant volume (∂K_T/∂T)_V = −0.015(8) GPa K⁻¹. These results provide fundamental thermophysical properties for TiC for the first time and are important to theoretical and computational modeling of transition metal carbides.
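A sketch of the kind of modified high-temperature third-order Birch-Murnaghan form that such fits typically use, populated with the parameters quoted above; the ambient unit-cell volume V0 and the exact parameterisation chosen by the authors are assumptions here, not values taken from the abstract.

import numpy as np

def bm3_high_T_pressure(V, T, V0, K0=268.0, K0p=4.0, dKdT=-0.026, a=1.62e-5, b=1.07e-8, T0=300.0):
    """Pressure (GPa) at volume V and temperature T from a third-order Birch-Murnaghan
    isotherm whose zero-pressure volume and bulk modulus are carried to temperature T
    via alpha(T) = a + b*T and (dK_T/dT)_P."""
    V0T = V0 * np.exp(a * (T - T0) + 0.5 * b * (T**2 - T0**2))  # thermally expanded V0
    KT = K0 + dKdT * (T - T0)                                   # temperature-corrected bulk modulus
    x = (V0T / V) ** (1.0 / 3.0)
    return 1.5 * KT * (x**7 - x**5) * (1.0 + 0.75 * (K0p - 4.0) * (x**2 - 1.0))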
Thermal equation-of-state of TiC: a synchrotron x-ray diffraction study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Xiaohui; Lin, Zhijun; Zhang, Jianzhong
2009-01-01
The pressure (P)-volume (V)-temperature (T) measurements were carried out for titanium carbide at pressures and temperatures up to 8.1 GPa and 1273 K using energy-dispersive synchrotron x-ray diffraction. Thermoelastic parameters were derived for TiC based on a modified high-temperature Birch-Murnaghan equation of state and a thermal-pressure approach. With the pressure derivative of the bulk modulus, K0′, fixed at 4.0, we obtain: the ambient bulk modulus K0 = 268(6) GPa; the temperature derivative of the bulk modulus at constant pressure (∂K_T/∂T)_P = −0.026(9) GPa K⁻¹; the volumetric thermal expansivity α_T (K⁻¹) = a + bT, with a = 1.62(12) × 10⁻⁵ K⁻¹ and b = 1.07(17) × 10⁻⁸ K⁻²; the pressure derivative of thermal expansion (∂α/∂P)_T = (−3.62 ± 1.14) × 10⁻⁷ GPa⁻¹ K⁻¹; and the temperature derivative of the bulk modulus at constant volume (∂K_T/∂T)_V = −0.015(8) GPa K⁻¹. These results provide fundamental thermophysical properties for TiC and are important to theoretical and computational modeling of transition metal carbides.
Two-fold sustainability – Adobe with sawdust as partial sand replacement
NASA Astrophysics Data System (ADS)
Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen
2018-04-01
Adobe is a material that is economical, environmentally friendly, and provides better indoor air quality. The materials required for the preparation of adobe include clay, sand, and sometimes straw or other organic materials. These materials do not require industrial processing or transportation; however, sand mining has recently been posing a threat to the environment. Therefore, to enhance the existing sustainability of adobe, sand can be partially or fully replaced by other waste materials. This approach not only solves the problem of excessive sand mining, it also addresses the issue of waste management. Sawdust is one such waste material that can be used to partially replace sand in adobe. This paper presents the results of compressive and flexural tests carried out on adobe samples with partial sand replacement by sawdust. The results show that about 4% sand replacement by volume produces higher compressive strength, whereas the flexural strength reduces with the use of sawdust. However, since flexural strength is not a critical property for adobe, it is concluded that replacing about 4% of the sand by volume with sawdust is beneficial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durlin, R.R.; Schaffstall, W.P.
1997-02-01
This report, Volume 2, includes records from the Susquehanna and Potomac River Basins. Specifically, it contains: (1) discharge records for 90 continuous-record streamflow-gaging stations and 41 partial-record stations; (2) elevation and contents records for 12 lakes and reservoirs; (3) water-quality records for 13 streamflow-gaging stations and 189 partial-record and project stations; and (4) water-level records for 25 network observation wells. Site locations are shown in figures throughout the report. Additional water data collected at various sites not involved in the systematic data-collection program are also presented.
1988-11-16
...oxotremorine. Oxotremorine inhibited ACh release for a shorter duration. The ACh release inhibition induced by 50 ... was only acutely blocked ... and the partial agonist oxotremorine in the presence of ... of Li+. Carbachol showed a 10-fold lower EC50 value (3.7 µM) and oxotremorine a 3-fold ... however, only a decrease in the maximal response was seen. In both tissues, the maximal response of the partial agonist oxotremorine was suppressed
Operational considerations in monitoring oxygen levels at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Zalenski, M. A.; Rowe, E. L.; Mcphee, J. R.
1985-01-01
Laboratory monitoring of the level of oxygen in sample gas mixtures is a process which can be performed with accurate and repeatable results. Operations at the National Transonic Facility require the storage and pumping of large volumes of liquid nitrogen. To protect against the possibility of a fault resulting in a localized oxygen deficient atmosphere, the facility is equipped with a monitoring system with an array of sensors. During the early operational stages, the system produced recurrent alarms, none of which could be traced to a true oxygen deficiency. A thorough analysis of the system was undertaken with primary emphasis placed on the sensor units. These units sense the partial pressure of oxygen which, after signal conditioning, is presented as a % by volume indication at the system output. It was determined that many of the problems experienced were due to a lack of proper accounting for the partial pressure/% by volume relationship, with a secondary cause being premature sensor failure. Procedures were established to consider atmospherically induced partial pressure variations. Sensor rebuilding techniques were examined, and those elements contributing to premature sensor failure were identified. The system now operates with a high degree of confidence and reliability.
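The partial pressure/percent-by-volume relationship at the root of those alarms is just Dalton's law; the sketch below shows how an uncompensated sensor reading drifts with barometric pressure even when the true oxygen fraction is unchanged (the calibration pressure and alarm threshold are illustrative assumptions).

def percent_by_volume(p_o2_mmhg, p_barometric_mmhg):
    """Oxygen concentration (% v/v) from the sensed O2 partial pressure and the
    ambient barometric pressure."""
    return 100.0 * p_o2_mmhg / p_barometric_mmhg

# A sensor calibrated to read 20.9 % at 760 mmHg senses pO2 = 158.8 mmHg in normal air.
# If barometric pressure falls to 740 mmHg, the same air gives pO2 = 154.7 mmHg; a display
# that still divides by 760 reports ~20.4 % even though the true fraction is unchanged,
# which can trip a tightly set oxygen-deficiency alarm.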
Vapor condensation onto a non-volatile liquid drop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inci, Levent; Bowles, Richard K., E-mail: richard.bowles@usask.ca
2013-12-07
Molecular dynamics simulations of miscible and partially miscible binary Lennard–Jones mixtures are used to study the dynamics and thermodynamics of vapor condensation onto a non-volatile liquid drop in the canonical ensemble. When the system volume is large, the driving force for condensation is low and only a submonolayer of the solvent is adsorbed onto the liquid drop. A small degree of mixing of the solvent phase into the core of the particles occurs for the miscible system. At smaller volumes, complete film formation is observed and the dynamics of film growth are dominated by cluster-cluster coalescence. Mixing into the core of the droplet is also observed for partially miscible systems below an onset volume, suggesting the presence of a solubility transition. We also develop a non-volatile liquid drop model, based on the capillarity approximations, that exhibits a solubility transition between small and large drops for partially miscible mixtures and has a hysteresis loop similar to the one observed in the deliquescence of small soluble salt particles. The properties of the model are compared to our simulation results and the model is used to study the formulation of classical nucleation theory for systems with low free energy barriers.
NASA Astrophysics Data System (ADS)
Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom
2016-04-01
Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA optimization.
Sensitivity of geographic information system outputs to errors in remotely sensed data
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.
1981-01-01
The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
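The following Monte Carlo sketch illustrates the kind of compensation-by-aggregation described above; the per-pixel error rate, the class fraction, and the 25-pixels-per-cell aggregation are illustrative assumptions, not the paper's exact model:

```python
# Rough Monte Carlo sketch of how aggregating many RSD pixels into one GIS cell
# partially compensates random per-pixel classification errors.
import numpy as np

rng = np.random.default_rng(0)
p_error = 0.15          # assumed per-pixel classification error rate
pixels_per_cell = 25    # RSD pixels aggregated into each GIS cell
n_cells = 100_000
true_fraction = 0.6     # assumed true proportion of the target class in each cell

truth = rng.random((n_cells, pixels_per_cell)) < true_fraction
flips = rng.random((n_cells, pixels_per_cell)) < p_error
observed = np.where(flips, ~truth, truth)        # pixel labels, flipped with prob p_error

pixel_error = np.mean(observed != truth)                         # ~0.15
cell_error = np.mean(np.abs(observed.mean(1) - truth.mean(1)))   # error in the aggregated class fraction
print(pixel_error, cell_error)
# The cell-level fraction error is well below the per-pixel error rate, because
# the random component of the classification error averages out over the cell.
```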
Estimating merchantable tree volume in Oregon and Washington using stem profile models
Raymond L. Czaplewski; Amy S. Brown; Dale G. Guenther
1989-01-01
The profile model of Max and Burkhart was fit to eight tree species in the Pacific Northwest Region (Oregon and Washington) of the Forest Service. Most estimates of merchantable volume had an average error less than 10% when applied to independent test data for three national forests.
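For reference, the Max and Burkhart profile model is a segmented polynomial taper equation; a standard statement of it (an assumption here, since the abstract does not restate the model) is

\[
\frac{d^2}{D^2} = b_1(Z-1) + b_2(Z^2-1) + b_3(a_1-Z)^2 I_1 + b_4(a_2-Z)^2 I_2,
\qquad Z = \frac{h}{H},\quad
I_i = \begin{cases}1, & Z \le a_i\\ 0, & Z > a_i\end{cases}
\]

where d is stem diameter at height h, D is diameter at breast height, H is total height, the b_i are fitted coefficients and a_1, a_2 are the join points; merchantable volume then follows by integrating the cross-sectional area \(\pi d^2/4\) between the merchantability limits.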
Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-01-01
Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose–volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. Results: The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. Advances in knowledge: The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients. PMID:25882689
Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; China, Toshiyuki; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Horie, Shigeo
2015-05-01
To evaluate the feasibility and accuracy of virtual partial nephrectomy analysis, including a color-coded three-dimensional virtual surgical planning and a quantitative functional analysis, in predicting the surgical outcomes of robot-assisted partial nephrectomy. Between 2012 and 2014, 20 patients underwent virtual partial nephrectomy analysis before undergoing robot-assisted partial nephrectomy. Virtual partial nephrectomy analysis was carried out with the following steps: (i) evaluation of the arterial branch for selective clamping by showing the vascular-supplied area; (ii) simulation of the optimal surgical margin in precise segmented three-dimensional model for prediction of collecting system opening; and (iii) detailed volumetric analyses and estimates of postoperative renal function based on volumetric change. At operation, the surgeon identified the targeted artery and determined the surgical margin according to the virtual partial nephrectomy analysis. The surgical outcomes between the virtual partial nephrectomy analysis and the actual robot-assisted partial nephrectomy were compared. All 20 patients had negative cancer surgical margins and no urological complications. The tumor-specific renal arterial supply areas were shown in color-coded three-dimensional model visualization in all cases. The prediction value of collecting system opening was 85.7% for sensitivity and 100% for specificity. The predicted renal resection volume was significantly correlated with actual resected specimen volume (r(2) = 0.745, P < 0.001). The predicted estimated glomerular filtration rate was significantly correlated with actual postoperative estimated glomerular filtration rate (r(2) = 0.736, P < 0.001). Virtual partial nephrectomy analysis is able to provide the identification of tumor-specific renal arterial supply, prediction of collecting system opening and prediction of postoperative renal function. This technique might allow urologists to compare various arterial clamping methods and resection margins with surgical outcomes in a non-invasive manner. © 2015 The Japanese Urological Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Elizabeth S.; Prosnitz, Robert G.; Yu Xiaoli
2006-11-15
Purpose: The aim of this study was to assess the impact of patient-specific factors, left ventricle (LV) volume, and treatment set-up errors on the rate of perfusion defects 6 to 60 months post-radiation therapy (RT) in patients receiving tangential RT for left-sided breast cancer. Methods and Materials: Between 1998 and 2005, a total of 153 patients were enrolled onto an institutional review board-approved prospective study and had pre- and serial post-RT (6-60 months) cardiac perfusion scans to assess for perfusion defects. Of the patients, 108 had normal pre-RT perfusion scans and available follow-up data. The impact of patient-specific factors on the rate of perfusion defects was assessed at various time points using univariate and multivariate analysis. The impact of set-up errors on the rate of perfusion defects was also analyzed using a one-tailed Fisher's Exact test. Results: Consistent with our prior results, the volume of LV in the RT field was the most significant predictor of perfusion defects on both univariate (p = 0.0005 to 0.0058) and multivariate analysis (p = 0.0026 to 0.0029). Body mass index (BMI) was the only significant patient-specific factor on both univariate (p = 0.0005 to 0.022) and multivariate analysis (p = 0.0091 to 0.05). In patients with very small volumes of LV in the planned RT fields, the rate of perfusion defects was significantly higher when the fields were set up 'too deep' (83% vs. 30%, p = 0.059). The frequency of deep set-up errors was significantly higher among patients with BMI ≥25 kg/m² compared with patients of normal weight (47% vs. 28%, p = 0.068). Conclusions: BMI ≥25 kg/m² may be a significant risk factor for cardiac toxicity after RT for left-sided breast cancer, possibly because of more frequent deep set-up errors resulting in the inclusion of additional heart in the RT fields. Further study is necessary to better understand the impact of patient-specific factors and set-up errors on the development of RT-induced perfusion defects.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
Validity of Three-Dimensional Photonic Scanning Technique for Estimating Percent Body Fat.
Shitara, K; Kanehisa, H; Fukunaga, T; Yanai, T; Kawakami, Y
2013-01-01
Three-dimensional photonic scanning (3DPS) was recently developed to measure dimensions of a human body surface. The purpose of this study was to explore the validity of body volume measured by 3DPS for estimating the percent body fat (%fat). Design, setting, participants, and measurement: The body volumes were determined by 3DPS in 52 women. The body volume was corrected for residual lung volume. The %fat was estimated from body density and compared with the corresponding reference value determined by the dual-energy x-ray absorptiometry (DXA). No significant difference was found for the mean values of %fat obtained by 3DPS (22.2 ± 7.6%) and DXA (23.5 ± 4.9%). The root mean square error of %fat between 3DPS and reference technique was 6.0%. For each body segment, there was a significant positive correlation between 3DPS- and DXA-values, although the corresponding value for the head was slightly larger in 3DPS than in DXA. Residual lung volume was negatively correlated with the estimated error in %fat. The body volume determined with 3DPS is potentially useful for estimating %fat. A possible strategy for enhancing the measurement accuracy of %fat might be to refine the protocol for preparing the subject's hair prior to scanning and to improve the accuracy in the measurement of residual lung volume.
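A hedged sketch of the densitometric chain implied above (body volume corrected for residual lung volume, then body density, then %fat). The Siri two-compartment equation is assumed here for illustration; the study may have used a different conversion, and the numbers are hypothetical:

```python
# Body volume (L) corrected for residual lung volume -> body density -> %fat.

def percent_fat_from_volume(mass_kg: float, body_volume_l: float, residual_lung_volume_l: float) -> float:
    corrected_volume_l = body_volume_l - residual_lung_volume_l
    density = mass_kg / corrected_volume_l        # kg/L, numerically equal to g/cm^3
    return (4.95 / density - 4.50) * 100.0        # Siri (1961) equation, assumed

print(round(percent_fat_from_volume(mass_kg=58.0, body_volume_l=57.5, residual_lung_volume_l=1.3), 1))
# An error in the residual lung volume propagates directly into the corrected
# volume, which is consistent with the negative correlation reported above.
```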
Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark
2018-01-01
To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.
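A minimal sketch of the Bland-Altman comparison used above, assuming two arrays of stroke volumes (mL) from different methods on the same subjects; the simulated offset and scatter roughly mimic the reported ATMT-PVM values:

```python
# Bland-Altman bias, SD of differences, and 95% limits of agreement.
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    diff = a - b
    bias = diff.mean()                      # systematic offset between methods
    sd = diff.std(ddof=1)                   # SD of variation
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa

rng = np.random.default_rng(1)
sv_atmt = rng.normal(80, 15, 40)                      # hypothetical ATMT stroke volumes
sv_pvm = sv_atmt + rng.normal(1.0, 7.65, 40)          # hypothetical PVM values
print(bland_altman(sv_atmt, sv_pvm))
```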
Precast concrete deck panel performance on long span, high traffic volume bridges : final report.
DOT National Transportation Integrated Search
2006-02-01
The NHDOT prohibited the use of partial depth precast deck panels on its long span, high traffic volume bridges until it could investigate if the precast slabs and the concrete overpour were acting in a composite manner. The NHDOT also wanted to en...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, JY; Hong, DL
Purpose: The purpose of this study is to investigate the patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers with skin markers. Three-dimensional CBCT projections were acquired by the Varian Truebeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of the CBCT-CTVs was analyzed. Results: A total of 152 CBCT scans were acquired from the twenty patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74 and 2.50 mm respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44 and 0.97 mm. For the target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verification to measure patient set-up errors can improve treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR) with an objective to evaluate the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
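A hedged sketch of the PLSR workflow described above (not the authors' code): the number of latent variables is chosen by cross-validated prediction error on pre-treated, truncated spectra. The synthetic data and the scikit-learn usage are assumptions for illustration:

```python
# Choose the number of PLS components by minimizing RMSECV.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # stand-in for normalized, truncated LIBS spectra
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=60)   # stand-in concentrations

best = None
for n in range(1, 16):                         # scan candidate numbers of latent variables
    y_cv = cross_val_predict(PLSRegression(n_components=n), X, y, cv=10).ravel()
    rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
    if best is None or rmsecv < best[1]:
        best = (n, rmsecv)
print("optimal components:", best[0], "RMSECV:", round(best[1], 3))
```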
NASA Astrophysics Data System (ADS)
Xu, Liangfei; Hu, Junming; Cheng, Siliang; Fang, Chuan; Li, Jianqiu; Ouyang, Minggao; Lehnert, Werner
2017-07-01
A scheme for designing a second-order sliding-mode (SOSM) observer that estimates critical internal states on the cathode side of a polymer electrolyte membrane (PEM) fuel cell system is presented. A nonlinear, isothermal dynamic model for the cathode side and a membrane electrolyte assembly is first described. A nonlinear observer topology based on an SOSM algorithm is then introduced, and equations for the SOSM observer are deduced. Online calculation of the inverse matrix produces numerical errors, so a modified matrix is introduced to eliminate their negative effects on the observer. The simulation results indicate that the SOSM observer performs well for the gas partial pressures and air stoichiometry. The estimation results follow the simulated values in the model with relative errors within ±2% under steady-state conditions. Larger errors occur during fast dynamic processes (<1 s). Moreover, the nonlinear observer shows good robustness against variations in the initial values of the internal states, but less robustness against variations in system parameters. The partial pressures are more sensitive than the air stoichiometry to system parameters. Finally, the ranking of the effects of parameter uncertainties on the estimation results is outlined and analyzed.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditionally on the previous model state during the minimization process, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
Reproducible MRI Measurement of Adipose Tissue Volumes in Genetic and Dietary Rodent Obesity Models
Johnson, David H.; Flask, Chris A.; Ernsberger, Paul R.; Wong, Wilbur C. K.; Wilson, David L.
2010-01-01
Purpose To develop ratio MRI [lipid/(lipid+water)] methods for assessing lipid depots and compare measurement variability to biological differences in lean controls (spontaneously hypertensive rats, SHRs), dietary obese (SHR-DO), and genetic/dietary obese (SHROBs) animals. Materials and Methods Images with and without CHESS water-suppression were processed using a semi-automatic method accounting for relaxometry, chemical shift, receive coil sensitivity, and partial volume. Results Partial volume correction improved results by 10–15%. Over six operators, volume variation was reduced to 1.9 ml from 30.6 ml for single-image-analysis with intensity inhomogeneity. For three acquisitions on the same animal, volume reproducibility was <1%. SHROBs had 6X the visceral and 8X the subcutaneous adipose tissue of SHRs. SHR-DOs had enlarged visceral depots (3X those of SHRs). SHROBs had significantly more subcutaneous adipose tissue, indicating a strong genetic component to this fat depot. Liver ratios in SHR-DO and SHROB were higher than in SHR, indicating elevated fat content. Among SHROBs, evidence suggested a phenotype SHROB* having elevated liver ratios and visceral adipose tissue volumes. Conclusion Effects of diet and genetics on obesity were significantly larger than variations due to image acquisition and analysis, indicating that these methods can be used to assess accumulation/depletion of lipid depots in animal models of obesity. PMID:18821617
TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, E; Phillips, M; Bojechko, C
Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97 scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry in detecting variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
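A minimal sketch of the ROC analysis described above, assuming hypothetical gamma pass rates for error-free and error-introduced deliveries; a lower pass rate is scored as evidence of an error, and the Youden index gives an optimal pass-rate threshold:

```python
# ROC AUC and optimal gamma pass-rate threshold for error detection.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

pass_rate_no_error = np.array([97.0, 95.5, 96.2, 98.1, 94.8, 97.5])   # hypothetical
pass_rate_with_error = np.array([91.0, 88.5, 93.0, 86.2, 90.4, 92.8]) # hypothetical

scores = np.concatenate([pass_rate_no_error, pass_rate_with_error])
labels = np.concatenate([np.zeros_like(pass_rate_no_error), np.ones_like(pass_rate_with_error)])

auc = roc_auc_score(labels, -scores)                     # low pass rate => error
fpr, tpr, thresholds = roc_curve(labels, -scores)
optimal_threshold = -thresholds[np.argmax(tpr - fpr)]    # Youden index, back on the pass-rate scale
print(round(auc, 2), round(optimal_threshold, 1))
```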
3D early embryogenesis image filtering by nonlinear partial differential equations.
Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O
2010-08-01
We present nonlinear diffusion equations, numerical schemes to solve them and their application for filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with a goal to identify the optimal filtering method and its parameters. In the large scale applications dealing with analysis of 3D+time embryogenesis images, an important objective is a correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. The filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. finite volume method in space and semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared first using the Mean Hausdorff distance between a gold standard and different isosurfaces of original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original and filtered data is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of the edge-preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons concern the ability to split very close objects that are artificially connected due to acquisition errors intrinsically linked to the physics of LSM. In all studied aspects it turned out that the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) has the best performance. Copyright 2010 Elsevier B.V. All rights reserved.
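For reference, the regularized Perona-Malik model and the geodesic mean curvature flow studied above are commonly written as follows (standard formulations; the paper's exact parameter choices are not restated here):

\[
u_t = \nabla\cdot\big(g(|\nabla G_\sigma * u|)\,\nabla u\big),
\qquad
u_t = |\nabla u|\,\nabla\cdot\Big(g(|\nabla G_\sigma * u|)\,\frac{\nabla u}{|\nabla u|}\Big),
\qquad
g(s) = \frac{1}{1+(s/K)^2},
\]

where u is the image intensity, G_\sigma a Gaussian smoothing kernel providing the regularization, and K an edge-sensitivity constant; the finite volume space discretization and semi-implicit time stepping mentioned above are applied to these equations.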
Thayer, Edward C.; Olson, Maynard V.; Karp, Richard M.
1999-01-01
Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple–complete–digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman–Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman–Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map. PMID:9927487
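The coordinate-assignment idea above, a system of linear (difference) constraints solved with the Bellman-Ford algorithm, can be sketched as follows; the constraint values are hypothetical and this is not the authors' software:

```python
# A system of difference constraints x_j - x_i <= w is equivalent to shortest
# paths in a constraint graph, solvable by Bellman-Ford; a negative cycle means
# the constraints are inconsistent and must be perturbed.

def solve_difference_constraints(n, constraints):
    """constraints: list of (i, j, w) meaning x_j - x_i <= w. Returns coordinates or None."""
    INF = float("inf")
    # Virtual source (node n) with 0-weight edges to every map object.
    edges = [(n, v, 0.0) for v in range(n)] + [(i, j, w) for (i, j, w) in constraints]
    dist = [INF] * n + [0.0]
    for _ in range(n):                      # relax edges repeatedly
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                   # any further relaxation => inconsistent system
        if dist[u] + w < dist[v]:
            return None
    return dist[:n]

# Toy map: 3 <= x1 - x0 <= 5 and x2 - x1 <= 4 (lengths in arbitrary units).
print(solve_difference_constraints(3, [(0, 1, 5.0), (1, 0, -3.0), (1, 2, 4.0)]))
```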
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
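A hedged sketch of the shrinkage idea described above (not the paper's exact measurement error model): each subject's noisy FC estimate is pulled toward the group mean, with more shrinkage when the within-subject sampling variance is large relative to the between-subject variance:

```python
# Crude empirical Bayes shrinkage of subject-level connectivity toward the group mean.
import numpy as np

def shrink_fc(subject_fc, sampling_var):
    """subject_fc: (n_subjects, n_connections); sampling_var: noise variance of each estimate."""
    group_mean = subject_fc.mean(axis=0)
    total_var = subject_fc.var(axis=0, ddof=1)
    between_var = np.maximum(total_var - sampling_var.mean(axis=0), 1e-12)  # crude decomposition
    weight = sampling_var / (sampling_var + between_var)   # weight on the group mean, in [0, 1)
    return weight * group_mean + (1.0 - weight) * subject_fc

rng = np.random.default_rng(0)
true_fc = rng.normal(0.3, 0.1, size=(50, 1))               # hypothetical true connectivity values
noisy_fc = true_fc + rng.normal(0, 0.15, size=(50, 1))     # short-scan estimates
shrunk = shrink_fc(noisy_fc, np.full_like(noisy_fc, 0.15**2))
print(np.abs(noisy_fc - true_fc).mean(), np.abs(shrunk - true_fc).mean())  # shrinkage reduces error
```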
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holroyd, R.A.; Schwarz, H.A.; Stradowska, E.
The rate constants for attachment of excess electrons to 1,3-butadiene (k_a) and detachment from the butadiene anion (k_d) in n-hexane are reported. The equilibrium constant, K_eq = k_a/k_d, increases rapidly with pressure and decreases as the temperature increases. At -7 °C attachment is observed at 1 bar. At high pressures the attachment rate is diffusion controlled. The activation energy for detachment is about 21 kcal/mol; detachment is facilitated by the large entropy of activation. The reaction volumes for attachment range from -181 cm³/mol at 400 bar to -122 cm³/mol at 1500 bar and are largely attributed to the electrostriction volume of the butadiene anion (ΔV̄_el). Values of ΔV̄_el calculated by a model, which includes a glassy shell of solvent molecules around the ion, are in agreement with experimental reaction volumes. The analysis indicates the partial molar volume of the electron in hexane is small and probably negative. It is shown that the entropies of reaction are closely related to the partial molar volumes of reaction. 22 refs., 5 figs., 5 tabs.
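For reference, the reported reaction volumes follow from the standard pressure dependence of the equilibrium constant (the decomposition into partial molar volumes is approximate):

\[
\Delta\bar{V}_r = -RT\left(\frac{\partial \ln K_{\mathrm{eq}}}{\partial P}\right)_T,
\qquad
\Delta\bar{V}_r \approx \Delta\bar{V}_{\mathrm{el}} + \bar{V}(\mathrm{C_4H_6^{-}}) - \bar{V}(\mathrm{C_4H_6}) - \bar{V}(e^{-}).
\]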
[NIR Assignment of Magnolol by 2D-COS Technology and Model Application in Huoxiangzhengqi Oral Liquid].
Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang
2015-08-01
Near infrared (NIR) spectroscopic assignment of magnolol was performed using deuterated chloroform as the solvent and two-dimensional correlation spectroscopy (2D-COS) technology. According to the synchronous spectra of deuterated chloroform and magnolol, 1365~1455, 1600~1720, 2000~2181 and 2275~2465 nm were the characteristic absorption regions of magnolol. Connected with the structure of magnolol, 1440 nm was assigned to the stretching vibration of the phenolic O-H group, 1679 nm to the stretching vibrations of aryl C-H and of methyl groups attached to the aryl ring, 2117, 2304, 2339 and 2370 nm to combinations of the stretching, bending and deformation vibrations of aryl C-H, and 2445 nm to the bending vibration of methyl groups linked to the aryl ring; these bands constitute the characteristic absorptions of magnolol. Huoxiangzhengqi Oral Liquid was then used to study magnolol: the characteristic band obtained by spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models. The coefficients of determination R²cal and R²pre were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV) and Root Mean Square Error of Prediction (RMSEP) were very small. This indicates that the characteristic band from spectral assignment gives the same results as the chemometric band selection in the PLS model. It provides a reference for NIR spectral assignment of chemical compositions in Chinese Materia Medica and for the interpretation of NIR band selection.
Pedagogical Materials 1. The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf, Ed.
The first volume in this series on Serbo-Croatian-English contrastive analysis contains six articles. They are: "Contrastive Analysis and Error Analysis in Pedagogical Materials," by Rudolf Filipovic; "Errors in the Morphology and Syntax of the Parts of Speech in the English of Learners from the Serbo-Croatian-Speaking Area," by Vera Andrassy;…
40 CFR 1045.730 - What ABT reports must I send to EPA?
Code of Federal Regulations, 2010 CFR
2010-07-01
... volumes for the model year with a point of retail sale in the United States, as described in § 1045.701(j...) Show that your net balance of emission credits from all your participating families in each averaging... errors mistakenly decreased your balance of emission credits, you may correct the errors and recalculate...
Weighted linear regression using D²H and D² as the independent variables
Hans T. Schreuder; Michael S. Williams
1998-01-01
Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared (D²H = diameter squared times height or D...
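A small sketch of the weighted regression described above, assuming the common combined-variable volume model V = b0 + b1·D²H with error variance proportional to (D²H)², i.e. weights 1/(D²H)²; the data are illustrative, not the loblolly pine data sets:

```python
# Weighted least squares with weights derived from the assumed error structure.
import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(15, 60, 200)                 # diameter at breast height (hypothetical units)
H = rng.uniform(10, 35, 200)                 # total height (hypothetical units)
x = D**2 * H
volume = 0.05 + 0.00004 * x * (1 + rng.normal(0, 0.08, x.size))  # noise sd grows with D^2*H

w = 1.0 / x**2                               # variance proportional to (D^2*H)^2
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ volume)   # weighted least squares estimate
print(beta)                                  # approximately [0.05, 0.00004]
```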
Quality in the Basic Grant Delivery System: Volume 2, Corrective Actions.
ERIC Educational Resources Information Center
Advanced Technology, Inc., McLean, VA.
Alternative management procedures are recommended that may lower the rate and magnitude of errors in the award of the Basic Educational Opportunity Grants (BEOGs), or Pell Grants. The recommendations are part of the BEOG quality control project and are based on a review of current (1980-1981) levels, distribution, and significance of error in the…
Frequency synchronization of a frequency-hopped MFSK communication system
NASA Technical Reports Server (NTRS)
Huth, G. K.; Polydoros, A.; Simon, M. K.
1981-01-01
This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.
Stitching of near-nulled subaperture measurements
NASA Technical Reports Server (NTRS)
Devries, Gary (Inventor); Brophy, Christopher (Inventor); Forbes, Greg (Inventor); Murphy, Paul (Inventor)
2012-01-01
A metrology system for measuring aspheric test objects by subaperture stitching. A wavefront-measuring gauge having a limited capture range of wavefront shapes collects partially overlapping subaperture measurements over the test object. A variable optical aberrator reshapes the measurement wavefront between a limited number of the measurements to maintain the measurement wavefront within the capture range of the wavefront-measuring gauge. Various error compensators are incorporated into a stitching operation to manage residual errors associated with the use of the variable optical aberrator.
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Exception handling for sensor fusion
NASA Astrophysics Data System (ADS)
Chavez, G. T.; Murphy, Robin R.
1993-08-01
This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
Response Monitoring and Adjustment: Differential Relations with Psychopathic Traits
Bresin, Konrad; Finy, M. Sima; Sprague, Jenessa; Verona, Edelyn
2014-01-01
Studies on the relation between psychopathy and cognitive functioning often show mixed results, partially because different factors of psychopathy have not been considered fully. Based on previous research, we predicted divergent results based on a two-factor model of psychopathy (interpersonal-affective traits and impulsive-antisocial traits). Specifically, we predicted that the unique variance of interpersonal-affective traits would be related to increased monitoring (i.e., error-related negativity) and adjusting to errors (i.e., post-error slowing), whereas impulsive-antisocial traits would be related to reductions in these processes. Three studies using a diverse selection of assessment tools, samples, and methods are presented to identify response monitoring correlates of the two main factors of psychopathy. In Studies 1 (undergraduates), 2 (adolescents), and 3 (offenders), interpersonal-affective traits were related to increased adjustment following errors and, in Study 3, to enhanced monitoring of errors. Impulsive-antisocial traits were not consistently related to error adjustment across the studies, although these traits were related to a deficient monitoring of errors in Study 3. The results may help explain previous mixed findings and advance implications for etiological models of psychopathy. PMID:24933282
Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng
2017-08-01
To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with total generalized variation penalty in the quantitative pharmacokinetic analysis for clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4. They are (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant K^trans in the extended Tofts model and the blood flow F_B and blood volume V_B from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated using the undersampled data, the error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original full-sampling data. Within the region of interest, most derived error-in-volume-mean values were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. None of the investigated pharmacokinetic parameters showed significant differences between the results from the original data and the reduced-sampling data. With sparsely sampled k-space data simulating an acquisition acceleration factor of 4, the investigated dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters can be accurately estimated using the total generalized variation-based iterative image reconstruction method, supporting reliable clinical application.
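For reference, the extended Tofts model from which K^trans is estimated is commonly written as (a standard form, not restated in the abstract)

\[
C_t(t) = v_p\,C_p(t) + K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e},
\]

where C_t and C_p are the tissue and plasma contrast-agent concentrations and v_p, v_e the plasma and extravascular extracellular volume fractions; F_B and V_B are the corresponding blood flow and blood volume parameters of the 2-compartment exchange model.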
Single-molecule Protein Unfolding in Solid State Nanopores
Talaga, David S.; Li, Jiali
2009-01-01
We use single silicon nitride nanopores to study folded, partially folded and unfolded single proteins by measuring their excluded volumes. The DNA-calibrated translocation signals of β-lactoglobulin and histidine-containing phosphocarrier protein match quantitatively with that predicted by a simple sum of the partial volumes of the amino acids in the polypeptide segment inside the pore when translocation stalls due to the primary charge sequence. Our analysis suggests that the majority of the protein molecules were linear or looped during translocation and that the electrical forces present under physiologically relevant potentials can unfold proteins. Our results show that the nanopore translocation signals are sensitive enough to distinguish the folding state of a protein and distinguish between proteins based on the excluded volume of a local segment of the polypeptide chain that transiently stalls in the nanopore due to the primary sequence of charges. PMID:19530678
NASA Astrophysics Data System (ADS)
Dikkar, A. B.; Pethe, G. B.; Aswar, A. S.
2015-12-01
Density (ρ), speed of sound (u), and viscosity (η) measurements have been carried out on 2-hydroxy-5-chloro-3-nitroacetophenone isonicotinoylhydrazone (HCNAIH) in N,N-dimethylformamide at 298.15, 303.15, 308.15, and 313.15 K. Adiabatic compressibility (β_s), intermolecular free length (L_f), acoustic impedance (Z), internal pressure (P_int), apparent molar volume (V_φ), limiting apparent molar volume (V_φ⁰), limiting apparent molar expansibility (φ_E⁰), apparent molar adiabatic compressibility (K_φ), limiting apparent molar adiabatic compressibility (K_φ⁰), and the viscosity B coefficient of the Jones-Dole equation have been calculated. The activation free energy (Δμ₂⁰*) for viscous flow in solution has been calculated from the B coefficient and partial molar volume data. The calculated parameters are used to interpret the solute-solvent interactions and the structure-forming/breaking ability of the solute in DMF.
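For reference, the apparent molar volume and the Jones-Dole analysis mentioned above are commonly evaluated with the following working equations (assumed here as the standard forms rather than quoted from the paper):

\[
V_\phi = \frac{M}{\rho} - \frac{1000\,(\rho-\rho_0)}{m\,\rho\,\rho_0},
\qquad
\frac{\eta}{\eta_0} = 1 + A\sqrt{c} + B\,c,
\]

where M is the solute molar mass, m the molality, c the molarity, and ρ, ρ₀ (η, η₀) the densities (viscosities) of solution and pure solvent; the limiting values follow from extrapolation to infinite dilution.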
Prolonged partial cardiopulmonary bypass in rats.
Alexander, B; Al Ani, H R
1983-07-01
Membrane oxygenators have been shown to be atraumatic during cardiopulmonary bypass. A novel design for a membrane tubing oxygenator originated in this laboratory was used for prolonged partial supportive cardiopulmonary bypass in lambs and displayed excellent biocompatibility characteristics. This was miniaturized, resulting in a priming volume of 12 ml, in order to investigate the feasibility of prolonged partial supportive cardiopulmonary bypass in rats. The performance of this miniaturized circuit over perfusion periods of up to 6 hr is described, with particular reference to hematological changes.
Knollmann, Friedrich D; Kumthekar, Rohan; Fetzer, David; Socinski, Mark A
2014-03-01
We set out to investigate whether volumetric tumor measurements allow for a prediction of treatment response, as measured by patient survival, in patients with advanced non-small-cell lung cancer (NSCLC). Patients with nonresectable NSCLC (stage III or IV, n = 100) who were repeatedly evaluated for treatment response by computed tomography (CT) were included in a Health Insurance Portability and Accountability Act (HIPAA)-compliant retrospective study. Tumor response was measured by comparing tumor volumes over time. Patient survival was compared with Response Evaluation Criteria in Solid Tumors (RECIST) using Kaplan-Meier survival statistics and Cox regression analysis. The median overall patient survival was 553 days (standard error, 146 days); for patients with stage III NSCLC, it was 822 days, and for patients with stage IV disease, 479 days. The survival differences were not statistically significant (P = .09). According to RECIST, 5 patients demonstrated complete response, 39 partial response, 44 stable disease, and 12 progressive disease. Patient survival was not significantly associated with RECIST class, the change of the sum of tumor diameters (P = .98), nor the change of the sum of volumetric tumor dimensions (P = .17). In a group of 100 patients with advanced-stage NSCLC, neither volumetric CT measurements of changes in tumor size nor RECIST class significantly predicted patient survival. This observation suggests that size response may not be a sufficiently precise surrogate marker of success to steer treatment decisions in individual patients. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun
2014-08-15
Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
Davies, Mark W; Dunster, Kimble R
2002-05-01
To compare measured tidal volumes with and without perfluorocarbon (perfluorooctyl bromide) vapor, by using tidal volumes in the range suitable for neonates ventilated with partial liquid ventilation. We also aimed to determine the correction factor needed to calculate tidal volumes measured in the presence of perfluorooctyl bromide vapor. Prospective, experimental study. Neonatal research laboratory. Reproducible tidal volumes from 5 to 30 mL were produced with a rodent ventilator and drawn from humidifier chambers immersed in a water bath at 37 degrees C. Control tidal volumes were drawn from a chamber containing oxygen and water vapor, and the perfluorocarbon tidal volumes were drawn from a chamber containing oxygen, water vapor, and perfluorooctyl bromide vapor. Tidal volumes were measured by a VenTrak respiratory mechanics monitor with a neonatal flow sensor and a Dräger pneumotachometer attached to a Dräger neonatal ventilator. All tidal volumes measured with perfluorooctyl bromide vapor were increased compared with control. The VenTrak-measured tidal volumes increased by 1.8% to 3.5% (an overall increase of 2.2%). The increase was greater with the Dräger hot-wire anemometer: from 2.4% to 6.1% (an overall increase of 5.9%). Regression equations for mean control tidal volumes (response, Y) vs. mean perfluorooctyl bromide tidal volumes (predictor, X) are as follows: for the VenTrak, Y = -0.026 + (0.978 x X), r =.9999, p <.0001; and for the Dräger, Y = 0.251 + (0.944 x X), r =.9996, p <.0001. The presence of perfluorooctyl bromide vapor in the gas flowing through pneumotachometers gives falsely high tidal volume measurements. An estimate of the true tidal volume allowing for the presence of perfluorooctyl bromide vapor can be made from regression equations. Any calculation of lung mechanics must take into account the effect of perfluorooctyl bromide vapor on the measurement of tidal volume.
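Applying the regression corrections reported above, an estimate of the true tidal volume (Y, mL) can be recovered from a tidal volume X (mL) measured in the presence of perfluorooctyl bromide vapor:

```python
# Corrections taken directly from the abstract's regression equations.

def true_tidal_volume_ventrak(x_ml: float) -> float:
    return -0.026 + 0.978 * x_ml        # VenTrak respiratory mechanics monitor

def true_tidal_volume_drager(x_ml: float) -> float:
    return 0.251 + 0.944 * x_ml         # Drager hot-wire anemometer

measured = 10.0                          # mL, hypothetical reading during partial liquid ventilation
print(round(true_tidal_volume_ventrak(measured), 2), round(true_tidal_volume_drager(measured), 2))
```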
Measuring Compositions in Organic Depth Profiling: Results from a VAMAS Interlaboratory Study.
Shard, Alexander G; Havelund, Rasmus; Spencer, Steve J; Gilmore, Ian S; Alexander, Morgan R; Angerer, Tina B; Aoyagi, Satoka; Barnes, Jean-Paul; Benayad, Anass; Bernasik, Andrzej; Ceccone, Giacomo; Counsell, Jonathan D P; Deeks, Christopher; Fletcher, John S; Graham, Daniel J; Heuser, Christian; Lee, Tae Geol; Marie, Camille; Marzec, Mateusz M; Mishra, Gautam; Rading, Derk; Renault, Olivier; Scurr, David J; Shon, Hyun Kyong; Spampinato, Valentina; Tian, Hua; Wang, Fuyi; Winograd, Nicholas; Wu, Kui; Wucher, Andreas; Zhou, Yufan; Zhu, Zihua; Cristaudo, Vanina; Poleunis, Claude
2015-08-20
We report the results of a VAMAS (Versailles Project on Advanced Materials and Standards) interlaboratory study on the measurement of composition in organic depth profiling. Layered samples with known binary compositions of Irganox 1010 and either Irganox 1098 or Fmoc-pentafluoro-l-phenylalanine in each layer were manufactured in a single batch and distributed to more than 20 participating laboratories. The samples were analyzed using argon cluster ion sputtering and either X-ray photoelectron spectroscopy (XPS) or time-of-flight secondary ion mass spectrometry (ToF-SIMS) to generate depth profiles. Participants were asked to estimate the volume fractions in two of the layers and were provided with the compositions of all other layers. Participants using XPS provided volume fractions within 0.03 of the nominal values. Participants using ToF-SIMS either made no attempt, or used various methods that gave results ranging in error from 0.02 to over 0.10 in volume fraction, the latter representing a 50% relative error for a nominal volume fraction of 0.2. Error was predominantly caused by inadequacy in the ability to compensate for primary ion intensity variations and the matrix effect in SIMS. Matrix effects in these materials appear to be more pronounced as the number of atoms in both the primary analytical ion and the secondary ion increase. Using the participants' data we show that organic SIMS matrix effects can be measured and are remarkably consistent between instruments. We provide recommendations for identifying and compensating for matrix effects. Finally, we demonstrate, using a simple normalization method, that virtually all ToF-SIMS participants could have obtained estimates of volume fraction that were at least as accurate and consistent as XPS.
46 CFR 39.40-5 - Operational requirements for vapor balancing-TB/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
... tanks have partial bulkheads, the oxygen content of each area of that tank formed by each partial... vapor collection system must be tested prior to cargo transfer to ensure that the oxygen content in the vapor space does not exceed 8 percent by volume. The oxygen content of each tank must be measured at a...
Floyd A. Johnson
1961-01-01
This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...
NASA Astrophysics Data System (ADS)
Juddoo, Mrinal; Masri, Assaad R.; Pope, Stephen B.
2011-12-01
This paper reports measured stability limits and PDF calculations of piloted, turbulent flames of compressed natural gas (CNG) partially-premixed with either pure oxygen or with varying levels of O2/N2. Stability limits are presented for flames of CNG fuel premixed with up to 20% oxygen as well as CNG-O2-N2 fuel where the O2 content is varied from 8 to 22% by volume. Calculations are presented for (i) Sydney flame B [Masri et al. 1988], which uses pure CNG, as well as flames B15 to B25, where the CNG is partially-premixed with 15-25% oxygen by volume, respectively, and (ii) Sandia methane-air (1:3 by volume) flame E [Barlow et al. 2005] as well as new flames E15 and E25 that are partially-premixed with 'reconstituted air' where the O2 content in nitrogen is 15 and 25% by volume, respectively. The calculations solve a transported PDF of composition using a particle-based Monte Carlo method and employ the EMST mixing model as well as detailed chemical kinetics. The addition of oxygen to the fuel increases stability, shortens the flames, broadens the reaction zone, and shifts the stoichiometric mixture fraction towards the inner side of the jet. It is found that for pure CNG flames, where the reaction zone is narrow (∼0.1 in mixture fraction space), the PDF calculations fail to reproduce the correct level of local extinction on approach to blow-off. A broadening of the reaction zone up to about 0.25 in mixture fraction space is needed for the PDF/EMST approach to capture these finite-rate chemistry effects. It is also found that, for the same level of partial premixing, increasing the O2/N2 ratio increases the maximum levels of CO and NO but shifts the peak to richer mixture fractions. Over the range of oxygenation investigated here, stability limits were shown to improve almost linearly with increasing oxygen levels in the fuel and with an increasing release-rate contribution from the pilot.
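The reported shift of the stoichiometric mixture fraction with oxygen-enriched fuel can be illustrated with a standard coupling-function estimate. The Python sketch below is a hedged illustration (CNG approximated as pure methane, oxidizer taken as air) and is not part of the paper's PDF calculations:

M_CH4, M_O2 = 16.04, 32.00
S = 2 * M_O2 / M_CH4                 # stoichiometric O2-to-CH4 mass ratio (about 3.99)

def z_st(x_o2_fuel, y_o2_ox=0.233):
    """Stoichiometric mixture fraction for a CH4/O2 fuel stream burning in air."""
    x_ch4 = 1.0 - x_o2_fuel                        # fuel-stream mole fractions
    m_fuel = x_ch4 * M_CH4 + x_o2_fuel * M_O2      # mean molar mass of the fuel stream
    y_f1 = x_ch4 * M_CH4 / m_fuel                  # CH4 mass fraction in the fuel stream
    y_o1 = x_o2_fuel * M_O2 / m_fuel               # O2 mass fraction in the fuel stream
    return y_o2_ox / (S * y_f1 - y_o1 + y_o2_ox)

print(z_st(0.0))    # pure CNG: about 0.055
print(z_st(0.20))   # 20% O2 by volume premixed into the fuel: about 0.091

The higher stoichiometric mixture fraction places the reaction zone at richer mixtures, i.e. closer to the jet axis, consistent with the shift towards the inner side of the jet noted above.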
Ge, Zhi-pu; Ma, Ruo-han; Li, Gang; Zhang, Ji-zong; Ma, Xu-chen
2015-08-01
To establish a method for human age estimation based on the pulp chamber volume of first molars and to determine whether the method is accurate enough for age estimation in real human cases. To establish the mathematical model, CBCT images of 373 maxillary first molars and 372 mandibular first molars were collected from 190 female and 213 male patients aged between 12 and 69 years. The inclusion criteria for the first molars were: no caries, no excessive tooth wear, no dental restorations, no artifacts due to metal restorative materials in adjacent teeth, and no pulpal calcification. All CBCT images were acquired with a NewTom VG CBCT unit (Quantitative Radiology, Verona, Italy) and reconstructed with a voxel size of 0.15 mm. The images were subsequently exported as DICOM data sets and imported into ITK-SNAP 2.4, an open-source 3D semi-automatic segmentation and voxel-counting software, to calculate pulp chamber volumes. A logarithmic regression analysis was conducted with age as the dependent variable and pulp chamber volume as the independent variable to establish a mathematical model for human age estimation. To assess the precision and accuracy of the model, a further 104 maxillary first molars and 103 mandibular first molars were collected from 55 female and 57 male patients aged between 12 and 67 years. The mean absolute error and root mean square error between actual and estimated age were used to determine the precision and accuracy of the mathematical model. The study was approved by the Institutional Review Board of Peking University School and Hospital of Stomatology. The suggested mathematical model was: AGE = 117.691 - 26.442 × ln(pulp chamber volume). The regression was statistically significant (p = 0.000 < 0.01), with a coefficient of determination (R²) of 0.564. Across all tested teeth, the mean absolute error between actual and estimated age was 8.122 and the root mean square error was 5.603. The pulp chamber volume of the first molar is a useful index for human age estimation with reasonable precision and accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
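The reported model can be applied directly; the following Python sketch (function names illustrative, coefficients as reported above, volume expressed in the units produced by the segmentation step, which the abstract does not state) also shows the two error metrics used in the study:

import math

def estimate_age(pulp_chamber_volume):
    """Estimated age from first-molar pulp chamber volume (logarithmic model)."""
    return 117.691 - 26.442 * math.log(pulp_chamber_volume)

def mean_absolute_error(actual, estimated):
    return sum(abs(a - e) for a, e in zip(actual, estimated)) / len(actual)

def root_mean_square_error(actual, estimated):
    return math.sqrt(sum((a - e) ** 2 for a, e in zip(actual, estimated)) / len(actual))

# Example: a pulp chamber volume of 30 (in the study's volume units) gives an
# estimated age of about 28 years.
print(round(estimate_age(30.0), 1))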
Assessment of the Derivative-Moment Transformation method for unsteady-load estimation
NASA Astrophysics Data System (ADS)
Mohebbian, Ali; Rival, David
2011-11-01
It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces instead. However, measuring the acceleration term within the volume of interest using PIV can be limited by optical access, reflections, and shadows. Therefore, in this study an alternative approach, termed the Derivative-Moment Transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency was found to be the determination of pressure in the wake. The effect of control-volume size was investigated, suggesting that smaller domains work best by minimizing the error associated with the pressure field. As the control-volume size increases, the number of calculations necessary for the pressure-gradient integration increases, in turn substantially increasing the error propagation.