Palmstrom, Christin R.
2015-01-01
There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computed tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that, while females had higher correlations than males, estimates of endocranial volume from external skull measurements or beads did not correlate tightly with CT volumes. External skull measurements showed no accuracy in predicting CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858
A simple method for accurate liver volume estimation by use of curve-fitting: a pilot study.
Aoyama, Masahito; Nakayama, Yoshiharu; Awai, Kazuo; Inomata, Yukihiro; Yamashita, Yasuyuki
2013-01-01
In this paper, we describe the effectiveness of our curve-fitting method by comparing liver volumes estimated by our new technique to volumes obtained with the standard manual contour-tracing method. Hepatic parenchymal-phase images of 13 patients were obtained with multi-detector CT scanners after intravenous bolus administration of 120-150 mL of contrast material (300 mgI/mL). The liver contours of all sections were traced manually by an abdominal radiologist, and the liver volume was computed by summing the volumes inside the contours. The slice range between the first and last section was then divided into 100 equal parts, and each volume was re-sampled using linear interpolation. We generated 13 model profile curves by averaging 12 cases and leaving out one case, and we estimated the profile curve for each patient by fitting the volume values at 4 points using a scale-and-translation transform. Finally, we determined the liver volume by integrating the sampling points of the profile curve. We used Bland-Altman analysis to evaluate the agreement between the volumes estimated with our curve-fitting method and those measured by the manual contour-tracing method. The correlation between the volume measured by manual tracing and that estimated with our curve-fitting method was relatively high (r = 0.98; slope 0.97; p < 0.001). The mean difference between manual tracing and our method was -22.9 cm(3) (SD of the difference, 46.2 cm(3)). Our volume-estimating technique, which requires the tracing of only 4 images, exhibited a relatively high linear correlation with the manual tracing technique.
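The core of the method, fitting a scale and a translation of a model profile curve to a handful of traced slices and then integrating, can be sketched as follows (a toy profile and made-up numbers, not the authors' data or implementation):

```python
import numpy as np

# Hypothetical model profile: mean per-slice volume (arbitrary units)
# over a normalized slice axis t in [0, 1] (100 parts in the paper).
t = np.linspace(0.0, 1.0, 101)
model = np.sin(np.pi * t) ** 2          # stand-in for the averaged curve

# "Patient" values observed at only 4 traced slices.
t_obs = np.array([0.2, 0.4, 0.6, 0.8])
v_obs = 1.3 * np.interp(t_obs, t, model) + 0.05   # scale 1.3, offset 0.05

# Fit scale a and translation b by linear least squares:
# v_obs ~= a * model(t_obs) + b
A = np.column_stack([np.interp(t_obs, t, model), np.ones_like(t_obs)])
(a, b), *_ = np.linalg.lstsq(A, v_obs, rcond=None)

# Integrate the fitted profile (trapezoidal rule) for total volume.
f = a * model + b
h = t[1] - t[0]
volume = h * (f.sum() - 0.5 * (f[0] + f[-1]))
```

Because the toy observations were generated exactly by a scale-and-translation transform, the least-squares fit recovers both parameters; with real traced slices the fit would absorb measurement noise instead.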
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Astrophysics Data System (ADS)
Wheeler, K.; Knuth, K.; Castle, P.
2005-12-01
and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.
Organ volume estimation using SPECT
Zaidi, H.
1996-06-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual window method was used for scatter subtraction. The author used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of (1) fixed thresholding, (2) automatic thresholding, (3) attenuation, (4) scatter, and (5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed. The relative error is within 7% for the GLH method combined with attenuation and scatter corrections.
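The fixed-threshold voxel-counting step described above reduces, in essence, to the following sketch (synthetic image; the threshold fraction and voxel size are illustrative, not values from the study):

```python
import numpy as np

# Synthetic reconstructed volume: a bright "thyroid" blob in low-level noise.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.1, size=(32, 32, 32))
img[10:20, 8:18, 12:22] = 1.0           # 10 x 10 x 10 voxel object

voxel_volume_ml = 0.064                  # hypothetical 4 mm isotropic voxels

def threshold_volume(image, fraction, voxel_volume):
    """Count voxels above fraction * max(image), convert to volume."""
    mask = image >= fraction * image.max()
    return mask.sum() * voxel_volume

vol = threshold_volume(img, 0.5, voxel_volume_ml)   # 50% fixed threshold
```

In practice the threshold choice (and the attenuation and scatter corrections discussed above) dominates the accuracy, which is the point of the phantom validation.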
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate colon residue detection algorithm with partial volume segmentation
NASA Astrophysics Data System (ADS)
Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.
2004-05-01
Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make it very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, which is also a kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in a specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types were precisely detected. Meanwhile, the residue was electronically removed and a very smooth and clean interface along the colon wall was obtained.
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.
Age estimation from canine volumes.
De Angelis, Danilo; Gaudio, Daniel; Guercini, Nicola; Cipriani, Filippo; Gibelli, Daniele; Caputi, Sergio; Cattaneo, Cristina
2015-08-01
Techniques for estimating biological age are constantly evolving and find daily application in forensic radiology, whether in estimating the chronological age of a corpse in order to reconstruct the biological profile, or of a living subject, for example in immigration cases involving people without identity papers from a civil registry. The deposition of secondary dentine in teeth and the consequent decrease in pulp chamber size are well-known aging phenomena, and they have been applied in the forensic context through age estimation procedures such as the Kvaal-Solheim and Cameriere methods. The present study considers canine pulp chamber volume relative to the entire tooth volume, with the aim of proposing new regression formulae for age estimation using 91 cone beam computed tomography scans and freeware open-source software, in order to permit affordable, reproducible volume calculation.
Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.
ERIC Educational Resources Information Center
Gerstel, Sanford M.
1986-01-01
An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Estimation of feline renal volume using computed tomography and ultrasound.
Tyson, Reid; Logsdon, Stacy A; Werre, Stephen R; Daniel, Gregory B
2013-01-01
Renal volume is an important parameter for clinical evaluation of the kidneys and for research applications. A time-efficient, repeatable, and accurate method for volume estimation is required. The purpose of this study was to describe the accuracy of ultrasound and computed tomography (CT) for estimating feline renal volume. Standardized ultrasound and CT scans were acquired for kidneys of 12 cadaver cats, in situ. Ultrasound and CT multiplanar reconstructions were used to record renal length measurements that were then used to calculate volume using the prolate ellipsoid formula for volume estimation. In addition, CT studies were reconstructed at 1 mm, 5 mm, and 1 cm, and transferred to a workstation where the renal volume was calculated using the voxel count method (hand-drawn regions of interest). The reference standard kidney volume was then determined ex vivo by water displacement (Archimedes' principle). Ultrasound measurement of renal length accounted for approximately 87% of the variability in renal volume for the study population. The prolate ellipsoid formula exhibited proportional bias and underestimated renal volume by a median of 18.9%. Computed tomography volume estimates using the voxel count method with hand-traced regions of interest provided the most accurate results, with increasing accuracy for smaller voxel sizes in grossly normal kidneys (-10.1 to 0.6%). Findings from this study supported the use of CT and the voxel count method for estimating feline renal volume in future clinical and research studies.
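The prolate ellipsoid estimate used in the study is a one-line formula; a minimal sketch with hypothetical kidney dimensions (not measurements from the study):

```python
import math

def prolate_ellipsoid_volume(length, width, height):
    """Volume estimate from three orthogonal axis lengths: V = pi/6 * L * W * H."""
    return math.pi / 6.0 * length * width * height

# Hypothetical feline kidney axes in cm.
v = prolate_ellipsoid_volume(4.2, 2.8, 2.6)   # cm^3
```

As the abstract notes, this formula carried a proportional bias in the study (median 18.9% underestimation), which is why the voxel count method was preferred.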
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.
Accurate genome relative abundance estimation based on shotgun metagenomic reads.
Xia, Li C; Cram, Jacob A; Chen, Ting; Fuhrman, Jed A; Sun, Fengzhu
2011-01-01
Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or variants thereof, and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute Genome Relative Abundance of microbial communities using Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance for Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based) even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
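The core mixture-model idea behind GRAMMy can be illustrated with a toy EM iteration (illustrative read likelihoods and genome sizes; the real framework also models read distributions along genomes and many other details):

```python
import numpy as np

# Toy read-to-genome likelihoods p[i, j] (e.g. derived from alignment
# scores): 4 reads, 2 candidate genomes. Values are illustrative.
p = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.5, 0.5],   # ambiguous read
    [0.1, 0.9],
])
genome_len = np.array([2.0e6, 4.0e6])

# EM for the mixture weights a[j] (probability a read comes from genome j).
a = np.full(2, 0.5)
for _ in range(100):
    w = a * p                                  # E-step: joint weights
    z = w / w.sum(axis=1, keepdims=True)       # posterior read assignments
    a = z.mean(axis=0)                         # M-step: updated weights

# Genome relative abundance corrects read-level weights for genome size:
# longer genomes generate more reads per cell.
abundance = (a / genome_len) / (a / genome_len).sum()
```

The ambiguous read is split probabilistically between genomes rather than being assigned to one of them, which is the source of the stability the abstract claims over direct alignment summaries.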
Accurate absolute GPS positioning through satellite clock error estimation
NASA Astrophysics Data System (ADS)
Han, S.-C.; Kwon, J. H.; Jekeli, C.
2001-05-01
An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them.
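The interpolation step described at the end, recovering higher-rate clock errors from 30-s estimates once SA is off, can be sketched as follows (synthetic drift values, nanoseconds assumed):

```python
import numpy as np

# Hypothetical satellite clock error estimates at 30-s intervals (ns).
t30 = np.arange(0, 301, 30)
rng = np.random.default_rng(1)
clk30 = 5.0 + 0.01 * t30 + rng.normal(0.0, 0.1, t30.size)  # drift + noise

# With SA off, the clock error varies smoothly, so 1-s values can be
# recovered by simple linear interpolation between the 30-s estimates.
t1 = np.arange(0, 301)
clk1 = np.interp(t1, t30, clk30)
```

With SA on, the intentional clock dithering between the 30-s knots is not smooth, which is why the abstract attributes the poorer 1-s kinematic results to this interpolation.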
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
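Link correlation itself can be quantified in several ways; below is a minimal sketch of one common measure (not the LACE estimator itself) computed from toy packet-reception traces:

```python
import numpy as np

# Toy packet-reception traces for two receivers overhearing the same
# broadcasts: 1 = received, 0 = lost.
a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
b = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1])

pa, pb = a.mean(), b.mean()
p_joint = (a & b).mean()          # fraction received by both

# Covariance normalized by the independent-reception baseline:
# 0 means the links behave independently, > 0 means correlated reception.
corr = (p_joint - pa * pb) / (pa * pb)
```

LACE's contribution, per the abstract, is combining long-term and short-term link behaviors so that such a statistic is estimated accurately and cheaply on motes.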
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parameterization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems.
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Askalany, Ahmed A; Saha, Bidyut B
2017-03-15
Accurate estimation of the isosteric heat of adsorption is mandatory for good modeling of adsorption processes. In this paper, a thermodynamic formalism based on the adsorbed-phase volume, which is a function of adsorption pressure and temperature, is proposed for precise estimation of the isosteric heat of adsorption. The isosteric heat of adsorption estimated using the new correlation has been compared with measured values for several prudently selected adsorbent-refrigerant pairs from the open literature. Results showed that the proposed correlation fits the experimentally measured values better than the Clausius-Clapeyron equation.
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Towards accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2016-12-13
Reliable estimates of animal density are fundamental to our understanding of ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation biology since wildlife authorities rely on these figures to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging species such as carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores. African lions (Panthera leo) provide an excellent example as although abundance indices have been shown to produce poor inferences, they continue to be used to estimate lion density and inform management and policy. In this study we adapt a Bayesian spatially explicit capture-recapture model to estimate lion density in the Maasai Mara National Reserve (MMNR) and surrounding conservancies in Kenya. We utilize sightings data from a three-month survey period to produce statistically rigorous spatial density estimates. Overall posterior mean lion density was estimated to be 16.85 (posterior standard deviation = 1.30) lions over one year of age per 100 km(2) with a sex ratio of 2.2♀:1♂. We argue that such methods should be developed, improved and favored over less reliable methods such as track and call-up surveys. We caution against trend analyses based on surveys of differing reliability and call for a unified framework to assess lion numbers across their range in order for better informed management and policy decisions to be made.
Accurate estimators of correlation functions in Fourier space
NASA Astrophysics Data System (ADS)
Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.
2016-08-01
Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
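A 1-D toy version of the interlacing idea, Cloud-In-Cell assignment on a standard and a half-cell-shifted grid, combined in Fourier space with a phase factor so that odd aliasing images cancel, might look like the following (the phase sign convention here matches NumPy's forward FFT; this is a sketch, not the paper's implementation):

```python
import numpy as np

def cic_density(x, n, shift=0.0):
    """Cloud-in-cell assignment of unit-mass particles at positions
    x in [0, 1) onto n periodic grid points, optionally shifted by
    `shift` cells (0.5 gives the interlaced grid)."""
    d = np.zeros(n)
    s = x * n - shift
    i = np.floor(s).astype(int)
    f = s - i                              # fractional cell offset
    np.add.at(d, i % n, 1.0 - f)
    np.add.at(d, (i + 1) % n, f)
    return d

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 10_000)

n = 64                                     # grid size, box length 1
d0 = cic_density(x, n)                     # standard grid
d1 = cic_density(x, n, shift=0.5)          # interlaced (half-cell) grid

k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
# Average the two grids in Fourier space with a half-cell phase factor
# exp(-i k H/2), H = 1/n: odd images enter the two terms with opposite
# signs and cancel.
dk = 0.5 * (np.fft.fft(d0) + np.exp(-1j * k * 0.5 / n) * np.fft.fft(d1))
```

The k = 0 mode of either grid is just the total particle count, which the interlaced combination preserves; the benefit shows up near the Nyquist frequency, where the odd images dominate the aliasing error.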
How utilities can achieve more accurate decommissioning cost estimates
Knight, R.
1999-07-01
The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost
Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua
2012-06-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
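The polynomial-fitting approach can be sketched on synthetic data: fit each coordinate of a noisy circular-arc "tract" to a 2nd-order polynomial and evaluate the parametric curvature formula at the tract midpoint (illustrative parameters, not the study's simulation settings):

```python
import numpy as np

# Synthetic 2-D "fiber tract": points on a circular arc of radius
# 0.125 m (true curvature kappa = 8 m^-1) with small position noise.
kappa_true = 8.0
r = 1.0 / kappa_true
theta = np.linspace(-0.4, 0.4, 50)
rng = np.random.default_rng(3)
x = r * np.sin(theta) + rng.normal(0.0, 1e-5, theta.size)
y = r * (1.0 - np.cos(theta)) + rng.normal(0.0, 1e-5, theta.size)

# Fit each coordinate to a 2nd-order polynomial in a tract parameter t,
# then evaluate the parametric curvature at the midpoint:
# kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
t = np.linspace(0.0, 1.0, theta.size)
px, py = np.polyfit(t, x, 2), np.polyfit(t, y, 2)
xp = np.polyval(np.polyder(px), 0.5)
yp = np.polyval(np.polyder(py), 0.5)
xpp, ypp = 2.0 * px[0], 2.0 * py[0]
kappa = abs(xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5
```

Differentiating the smooth fitted polynomial, rather than finite-differencing the noisy points directly, is what suppresses the noise-driven curvature bias the abstract describes.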
Accurate Determination of the Volume of an Irregular Helium Balloon
NASA Astrophysics Data System (ADS)
Blumenthal, Jack; Bradvica, Rafaela; Karl, Katherine
2013-02-01
In a recent paper, Zable described an experiment with a near-spherical balloon filled with impure helium. Measuring the temperature and the pressure inside and outside the balloon, the lift of the balloon, and the mass of the balloon materials, he described how to use the ideal gas laws and Archimedes' principle to compute the average molecular mass and density of the impure helium. This experiment required that the volume of the near-spherical balloon be determined by some approach, such as measuring the girth. The accuracy of the experiment was largely determined by the balloon volume, which had a reported uncertainty of about 4%.
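A sketch of the calculation the abstract refers to, combining the ideal gas law with Archimedes' principle to recover the average molar mass of the fill gas; the volume, masses, and ambient conditions below are illustrative, not values from the paper:

```python
# Ideal gas law + Archimedes' principle: recover the average molar mass of
# the gas in a balloon from its measured lift. All numbers are illustrative.
R_GAS = 8.314        # J/(mol K)
M_AIR = 0.028964     # kg/mol
P, T = 101325.0, 293.0         # ambient pressure (Pa) and temperature (K)
V = 0.010                      # balloon volume, m^3 (the hard-to-measure input)
m_mat = 0.003                  # mass of balloon materials, kg
G = 9.81                       # m/s^2

rho_air = P * M_AIR / (R_GAS * T)

def lift_newtons(M_gas):
    """Net upward force on a balloon filled with gas of molar mass M_gas."""
    rho_gas = P * M_gas / (R_GAS * T)
    return ((rho_air - rho_gas) * V - m_mat) * G

def molar_mass_from_lift(F):
    """Invert the lift equation for the average molar mass of the fill gas."""
    rho_gas = rho_air - (m_mat + F / G) / V
    return rho_gas * R_GAS * T / P

M_est = molar_mass_from_lift(lift_newtons(0.004003))  # round trip: pure He
```

Since the lift scales with the volume, the ~4% volume uncertainty the abstract mentions propagates almost directly into the recovered gas density and molar mass.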
Accurate measurement of gas volumes by liquid displacement
NASA Technical Reports Server (NTRS)
Christian, J. D.
1972-01-01
Mariotte bottle as liquid displacement device was used to measure gas volumes at flow rates that are far below threshold of wet test gas meters. Study of factors affecting amount of liquid displaced by gas flow was completed, and equations were derived which relate different variables.
Volume estimation in a sequence of freehand ultrasound images
NASA Astrophysics Data System (ADS)
Zhang, Hongmei; Wan, Mingxi; Shen, Bo; Wang, Xiaodong; Lu, Mingzhu
2006-11-01
Volume estimation is particularly important in clinical medicine. Accurate volume estimation can provide quantitative information from which the follow-up therapy can be derived. In this paper, an efficient approach to volume estimation in a sequence of freehand ultrasound images is proposed. By integrating vector areas along the path of centroids of serial cross-sections, 3D volume estimation can be reduced to 2D area calculation, where a fast mapping algorithm generating the 2D representation is presented so that the position of interpolation points can be calculated with high efficiency. Meanwhile, to improve the accuracy, the cubic spline with second-order continuity is proposed for interpolation of the 2D representation. Volume estimation on simulated phantoms for parallel cutting, fan cutting and random cutting is provided. The experimental results show that the 2D representation generated by the fast mapping algorithm is highly efficient, requiring less than 0.001 ms for 100 cross-sections. Quantitative comparisons show that the proposed interpolation method can approximate the original volume more precisely than the Catmull-Rom (CR) spline, especially for a small number of cross-sections. In all cases, our approach can obtain accurate results at an error of less than 2% for ten cross-sections. Additionally, volume estimation of a high intensity focused ultrasound (HIFU) lesion based on linear B-scan and rotational B-scan sequential images is also performed. The experiments show that the proposed approach is promising and may have potential in clinical applications.
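The core idea above, estimating a volume by integrating interpolated cross-sectional areas, can be illustrated with a minimal parallel-slice stand-in. The paper interpolates with cubic splines along the centroid path; composite Simpson integration is used here purely to keep the sketch self-contained:

```python
import numpy as np

# Slice-based volume estimation: integrate the cross-sectional area profile
# A(z) of a unit sphere over parallel cuts (a stand-in for the paper's
# cubic-spline interpolation along the centroid path).
def simpson(y, h):
    """Composite Simpson's rule; len(y) must be odd."""
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum())

z = np.linspace(-1.0, 1.0, 11)          # 11 parallel cross-sections
areas = np.pi * (1.0 - z**2)            # A(z) for a unit sphere
volume = simpson(areas, z[1] - z[0])    # exact here: A(z) is quadratic
```

For this quadratic area profile the rule recovers the sphere volume 4π/3 exactly; for real, irregular organs the interpolation quality drives the error, which is why the paper's spline choice matters at small slice counts.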
Photogrammetry and Laser Imagery Tests for Tank Waste Volume Estimates: Summary Report
Field, Jim G.
2013-03-27
Feasibility tests were conducted using photogrammetry and laser technologies to estimate the volume of waste in a tank. These technologies were compared with video Camera/CAD Modeling System (CCMS) estimates; the current method used for post-retrieval waste volume estimates. This report summarizes test results and presents recommendations for further development and deployment of technologies to provide more accurate and faster waste volume estimates in support of tank retrieval and closure.
Estimation of bone permeability using accurate microstructural measurements.
Beno, Thoma; Yoon, Young-June; Cowin, Stephen C; Fritton, Susannah P
2006-01-01
While interstitial fluid flow is necessary for the viability of osteocytes, it is also believed to play a role in bone's mechanosensory system by shearing bone cell membranes or causing cytoskeleton deformation and thus activating biochemical responses that lead to the process of bone adaptation. However, the fluid flow properties that regulate bone's adaptive response are poorly understood. In this paper, we present an analytical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity in bone. First, we estimate the total number of canaliculi emanating from each osteocyte lacuna based on published measurements from parallel-fibered shaft bones of several species (chick, rabbit, bovine, horse, dog, and human). Next, we determine the local three-dimensional permeability of the lacunar-canalicular porosity for these species using recent microstructural measurements and adapting a previously developed model. Results demonstrated that the number of canaliculi per osteocyte lacuna ranged from 41 for human to 115 for horse. Permeability coefficients were found to be different in three local principal directions, indicating local orthotropic symmetry of bone permeability in parallel-fibered cortical bone for all species examined. For the range of parameters investigated, the local lacunar-canalicular permeability varied more than three orders of magnitude, with the osteocyte lacunar shape and size along with the 3-D canalicular distribution determining the degree of anisotropy of the local permeability. This two-step theoretical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity will be useful for accurate quantification of interstitial fluid movement in bone.
CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY
Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.
2003-02-27
The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume
Be the Volume: A Classroom Activity to Visualize Volume Estimation
ERIC Educational Resources Information Center
Mikhaylov, Jessica
2011-01-01
A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
NASA Astrophysics Data System (ADS)
van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.
2016-03-01
Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
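The triangulation step can be sketched with the standard linear (DLT) two-view method; the camera matrices and test point below are illustrative, not the paper's C-arm geometry:

```python
import numpy as np

# Two-view triangulation of a 3D point (e.g., the needle tip) from its pixel
# positions in two projection images. Camera matrices are illustrative.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # view at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])  # shifted view

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation: stack x cross (P X) = 0 rows, solve by SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

tip = np.array([0.05, -0.02, 1.2])
est = triangulate(P1, project(P1, tip), P2, project(P2, tip))
```

With noiseless correspondences the recovery is exact; the abstract's error figures reflect how a small angular separation between views degrades this triangulation.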
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
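A toy sketch of the two decision functions contrasted in the abstract, applied to a discrete posterior over counts; the bimodal prior, likelihood width, and observation are invented for illustration:

```python
import numpy as np

# Discrete Bayesian estimation: compare "maximum of the posterior" with
# "sample from the posterior" on an invented bimodal prior over counts.
rng = np.random.default_rng(1)
values = np.arange(1, 11)                            # possible counts 1..10
prior = np.where((values == 3) | (values == 8), 4.0, 1.0)  # bimodal prior
prior = prior / prior.sum()

observed = 7                                         # noisy count observation
likelihood = np.exp(-0.5 * ((values - observed) / 1.5) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()

map_estimate = values[np.argmax(posterior)]          # posterior-maximum rule
sampled_estimate = rng.choice(values, p=posterior)   # posterior-sampling rule
```

Here the prior mode at 8 pulls the maximum-posterior response away from the raw observation of 7, while the sampling rule produces variable responses whose distribution mirrors the posterior, which is the behavioral signature the study looks for.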
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70+/-2.30 (B-spline), 1.25+/-1.78 (demons), 0.93+/-1.14 (optical flow), and 4.39+/-3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
How accurate are physical property estimation programs for organosilicon compounds?
Boethling, Robert; Meylan, William
2013-11-01
Organosilicon compounds are important in chemistry and commerce, and nearly 10% of new chemical substances for which premanufacture notifications are processed by the US Environmental Protection Agency (USEPA) contain silicon (Si). Yet, remarkably few measured values are submitted for key physical properties, and the accuracy of estimation programs such as the Estimation Programs Interface (EPI) Suite and the SPARC Performs Automated Reasoning in Chemistry (SPARC) system is largely unknown. To address this issue, the authors developed an extensive database of measured property values for organic compounds containing Si and evaluated the performance of no-cost estimation programs for several properties of importance in environmental assessment. These included melting point (mp), boiling point (bp), vapor pressure (vp), water solubility, n-octanol/water partition coefficient (log KOW ), and Henry's law constant. For bp and the larger of 2 vp datasets, SPARC, MPBPWIN, and the USEPA's Toxicity Estimation Software Tool (TEST) had similar accuracy. For log KOW and water solubility, the authors tested 11 and 6 no-cost estimators, respectively. The best performers were Molinspiration and WSKOWWIN, respectively. The TEST's consensus mp method outperformed that of MPBPWIN by a considerable margin. Generally, the best programs estimated the listed properties of diverse organosilicon compounds with accuracy sufficient for chemical screening. The results also highlight areas where improvement is most needed.
Pomposelli, James J; Tongyoo, Assanee; Wald, Christoph; Pomfret, Elizabeth A
2012-09-01
The estimation of the standard liver volume (SLV) is an important component of the evaluation of potential living liver donors and the surgical planning for resection for tumors. At least 16 different formulas for estimating SLV have been published in the worldwide literature. More recently, several proprietary software-assisted image postprocessing (SAIP) programs have been developed to provide accurate volume measurements based on the actual anatomy of a specific patient. Using SAIP, we measured SLV in 375 healthy potential liver donors and compared the results to SLV values that were estimated with the previously published formulas and each donor's demographic and anthropomorphic data. The percentage errors of the 16 SLV formulas versus SAIP varied by more than 59% (from -21.6% to +37.7%). One formula was not statistically different from SAIP with respect to the percentage error (-1.2%), and another formula was not statistically different with respect to the absolute liver volume (18 mL). More than 75% of the estimated SLV values produced by these 2 formulas had percentage errors within ±15%, and the formulas provided good predictions within acceptable agreement (±15%) on scatter plots. Because of the wide variability, care must be taken when a formula is being chosen for estimating SLV, but the 2 aforementioned formulas provided the most accurate results with our patient demographics.
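As an example of the family of formulas being compared, the widely cited Urata equation estimates SLV from body surface area; the abstract does not identify which two of the 16 formulas performed best, so this one is illustrative only:

```python
# One widely cited anthropometric SLV formula (Urata et al.), built on the
# Du Bois body surface area estimate. Shown as an example of the formula
# class the abstract evaluates, not as one of its two best performers.
def dubois_bsa(weight_kg, height_cm):
    """Du Bois & Du Bois body surface area estimate, m^2."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

def urata_slv_ml(weight_kg, height_cm):
    """Urata et al. standard liver volume estimate, mL."""
    return 706.2 * dubois_bsa(weight_kg, height_cm) + 2.4

slv = urata_slv_ml(70, 170)   # roughly 1.3 L for an average adult
```

The ±15% agreement band the authors use corresponds to roughly ±190 mL at this liver size, which gives a sense of how large the reported -21.6% to +37.7% formula errors are in absolute terms.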
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.
Accurate tempo estimation based on harmonic + noise decomposition
NASA Astrophysics Data System (ADS)
Alonso, Miguel; Richard, Gael; David, Bertrand
2006-12-01
We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test-base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
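A greatly simplified stand-in for the periodicity-estimation stage: find the tempo of a synthetic onset envelope by autocorrelation. The paper's system adds harmonic + noise decomposition, accent measurement, and multipath dynamic programming, none of which is reproduced here:

```python
import numpy as np

# Tempo from a synthetic onset envelope via autocorrelation (a stand-in for
# the paper's periodicity-estimation block).
FRAME_RATE = 100.0                         # envelope frames per second
env = np.zeros(1000)
env[::50] = 1.0                            # onsets every 0.5 s -> 120 BPM

ac = np.correlate(env, env, mode="full")[env.size - 1:]   # lags >= 0
lags = np.arange(20, 200)                  # roughly 30-300 BPM search range
best_lag = lags[np.argmax(ac[20:200])]
tempo_bpm = 60.0 * FRAME_RATE / best_lag
```

Real accent signals are far noisier than this click train, which is why the full system scores many candidate periodicities and tracks the most consistent one through time rather than taking a single global argmax.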
Fast and Accurate Estimates of Divergence Times from Big Data.
Mello, Beatriz; Tao, Qiqing; Tamura, Koichiro; Kumar, Sudhir
2017-01-01
Ongoing advances in sequencing technology have led to an explosive expansion in the molecular data available for building increasingly larger and more comprehensive timetrees. However, Bayesian relaxed-clock approaches frequently used to infer these timetrees impose a large computational burden and discourage critical assessment of the robustness of inferred times to model assumptions, influence of calibrations, and selection of optimal data subsets. We analyzed eight large, recently published, empirical datasets to compare time estimates produced by RelTime (a non-Bayesian method) with those reported by using Bayesian approaches. We find that RelTime estimates are very similar to Bayesian approaches, yet RelTime requires orders of magnitude less computational time. This means that the use of RelTime will enable greater rigor in molecular dating, because faster computational speeds encourage more extensive testing of the robustness of inferred timetrees to prior assumptions (models and calibrations) and data subsets. Thus, RelTime provides a reliable and computationally thrifty approach for dating the tree of life using large-scale molecular datasets.
Can student health professionals accurately estimate alcohol content in commonly occurring drinks?
Sinclair, Julia; Searle, Emma
2016-01-01
Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
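The validation step described above, regressing in vivo bioavailability on in vitro bioaccessibility and judging tests by slope and coefficient of determination, can be sketched as follows; the paired values are invented for illustration, not the study's data:

```python
import numpy as np

# Regress relative bioavailability (feeding study) on in vitro
# bioaccessibility. The paired percentages below are invented.
bioaccessibility = np.array([20.0, 35.0, 45.0, 60.0, 75.0])   # % (in vitro)
bioavailability = np.array([18.0, 30.0, 42.0, 55.0, 68.0])    # % (in vivo)

slope, intercept = np.polyfit(bioaccessibility, bioavailability, 1)
predicted = slope * bioaccessibility + intercept
ss_res = ((bioavailability - predicted) ** 2).sum()
ss_tot = ((bioavailability - bioavailability.mean()) ** 2).sum()
r_squared = 1.0 - ss_res / ss_tot
```

A positive slope with a high r² is what qualified the RBALP pH 2.5 and OSU IVG tests as good predictors in the study.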
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
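One way to read the sphere-in-cylinder relationship the abstract invokes: for a solid of revolution, volume ≈ (2/3) × silhouette area × width, which is exact for spheres and spheroids. The paper's 'unellipticity' coefficient, not reproduced here, corrects for departures from that idealized shape:

```python
import math

# Biovolume from a 2D silhouette via the sphere-in-cylinder relationship:
# V = (2/3) * (silhouette area) * (width), exact for spheres and spheroids.
def biovolume(silhouette_area, width):
    """Biovolume of an approximately spheroidal cell from its 2D image."""
    return (2.0 / 3.0) * silhouette_area * width

# Sanity check against a sphere of radius r: silhouette pi*r^2, width 2*r.
r = 3.0e-6                                   # 3 um cell radius, in metres
v = biovolume(math.pi * r**2, 2 * r)         # equals (4/3) pi r^3
```

Because the inputs are just an area and a linear distance, the rule drops directly onto any image-analysis system that reports those two measurements, which is the practical advantage the abstract emphasizes.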
Bioaccessibility tests accurately estimate bioavailability of lead to quail.
Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S
2016-09-01
Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of
Hatt, Mathieu; Cheze le Rest, Catherine; Descourt, Patrice; Dekker, Andre; De Ruysscher, Dirk; Oellers, Michel; Lambin, Philippe; Pradier, Olivier; Visvikis, Dimitris
2010-05-01
Purpose: Accurate contouring of positron emission tomography (PET) functional volumes is now considered crucial in image-guided radiotherapy and other oncology applications because the use of functional imaging allows for biological target definition. In addition, the definition of variable uptake regions within the tumor itself may facilitate dose painting for dosimetry optimization. Methods and Materials: Current state-of-the-art algorithms for functional volume segmentation use adaptive thresholding. We developed an approach called fuzzy locally adaptive Bayesian (FLAB), validated on homogeneous objects, and then improved it by allowing the use of up to three tumor classes for the delineation of inhomogeneous tumors (3-FLAB). Simulated and real tumors with histology data containing homogeneous and heterogeneous activity distributions were used to assess the algorithm's accuracy. Results: The new 3-FLAB algorithm is able to extract the overall tumor from the background tissues and delineate variable uptake regions within the tumors, with higher accuracy and robustness compared with adaptive threshold (T_bckg) and fuzzy C-means (FCM). 3-FLAB performed with a mean classification error of less than 9% ± 8% on the simulated tumors, whereas binary-only implementation led to errors of 15% ± 11%. T_bckg and FCM led to mean errors of 20% ± 12% and 17% ± 14%, respectively. 3-FLAB also led to more robust estimation of the maximum diameters of tumors with histology measurements, with <6% standard deviation, whereas binary FLAB, T_bckg and FCM led to 10%, 12%, and 13%, respectively. Conclusion: These encouraging results warrant further investigation in future studies that will investigate the impact of 3-FLAB in radiotherapy treatment planning, diagnosis, and therapy response evaluation.
[Definition of accurate planning target volume margins for esophageal cancer radiotherapy].
Lesueur, P; Servagi-Vernat, S
2016-10-01
More than 4000 cases of esophageal cancer are diagnosed every year in France. Radiotherapy, which can be delivered preoperatively or exclusively with concomitant chemotherapy, plays a central role in the treatment of esophageal cancer. Although the efficacy of radiotherapy is well established, the prognosis of esophageal cancer unfortunately remains poor, with a high recurrence rate. The toxicity of esophageal radiotherapy is correlated with the irradiated volume, and it limits dose escalation and local control. The esophagus is a deep thoracic organ that undergoes cardiac and respiratory motion, making radiotherapy delivery more difficult and increasing the planning target volume margins. Defining accurate planning target volume margins that take into account the esophagus's intrafraction motion and setup margins is essential to ensure coverage of the clinical target volume while limiting acute and late radiotoxicity. In this article, based on a review of the literature, we propose planning target volume margins adapted to esophageal radiotherapy.
Ferguson, R.B.; Baldwin, V.C.
1995-09-01
Estimating tree and stand volume in mature plantations is time consuming, involving much manpower and equipment; however, several sampling and volume-prediction techniques are available. This study showed that a well-constructed volume-equation method yields estimates comparable to those of the often more time-consuming height-accumulation method, even though the latter should be more accurate for any individual tree. Plot volumes were estimated by both methods in a remeasurement of trees in a 40-plot, planted slash pine thinning study. The mean percentage difference in total volume, inside bark, between the two methods ranged from 1 to 2.5 percent across all the plots; differences outside bark ranged from 7 to 10 percent. The results were similar when the effects of site, plot mean values, or tree-by-tree comparisons were incorporated.
A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Hsu, Andrew T.
1989-01-01
A time-accurate, three-dimensional, finite volume, high-resolution scheme for solving the compressible full Navier-Stokes equations is presented. The derivation is based on upwind split formulas, specifically the application of Roe's (1981) flux-difference splitting. A high-order accurate (up to third-order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high-speed flows show that the present scheme gives accurate solutions.
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method accounts for the excluded volume effect. It is general and can be applied to calculating the entropy of loops of greater length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and with experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop size suggests that the entropy of internal loops is largely determined by the total loop length and is only marginally affected by the asymmetric sizes of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.
Acoustic source inversion to estimate volume flux from volcanic explosions
NASA Astrophysics Data System (ADS)
Kim, Keehoon; Fee, David; Yokoo, Akihiko; Lees, Jonathan M.
2015-07-01
We present an acoustic waveform inversion technique for infrasound data to estimate volume fluxes from volcanic eruptions. Previous inversion techniques have been limited by the use of a 1-D Green's function in a free space or half space, which depends only on the source-receiver distance and neglects volcanic topography. Our method exploits full 3-D Green's functions computed by a numerical method that takes into account realistic topographic scattering. We apply this method to vulcanian eruptions at Sakurajima Volcano, Japan. Our inversion results produce excellent waveform fits to field observations and demonstrate that full 3-D Green's functions are necessary for accurate volume flux inversion. Conventional inversions without consideration of topographic propagation effects may lead to large errors in the source parameter estimate. The presented inversion technique will substantially improve the accuracy of eruption source parameter estimation (cf. mass eruption rate) during volcanic eruptions and provide critical constraints for volcanic eruption dynamics and ash dispersal forecasting for aviation safety. Application of this approach to chemical and nuclear explosions will also provide valuable source information (e.g., the amount of energy released) previously unavailable.
Using GIS to Estimate Lake Volume from Limited Data
Estimates of lake volume are necessary for estimating residence time or modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of ...
Browning, Sharon R.; Browning, Brian L.
2015-09-03
Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365
LSimpute: accurate estimation of missing values in microarray data with least squares methods.
Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge
2004-02-20
Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that consistently are more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those our novel methods produce. The results indicate that on average, the estimates from our best performing LSimpute method are at least as
Mathew, S; Horne, A W; Murray, L S; Tydeman, G; McKinley, C A
2007-08-01
Real-time ultrasound and portable bladder scanners are commonly used instead of catheterisation to determine bladder volumes in postnatal women, but it is not known whether they are accurate. Changes in bladder volume measured by ultrasound and by portable scanners were compared with the actual voided volume (VV) in 100 postnatal women. The VV was on average 41 ml (CI 29 - 54 ml) higher than the volume measured by ultrasound, and 33 ml (CI 17 - 48 ml) higher than that measured by portable scanners. Portable scanner volumes were 9 ml (CI -8 - 26 ml) higher than those measured by ultrasound. Neither method is an accurate tool for detecting bladder volume in postnatal women.
NASA Astrophysics Data System (ADS)
Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu
2017-04-01
The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all the other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring sediment concentration, based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of the sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected for by measuring water density to determine the temperature of the water before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m-3. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and the averaged relative error less than 0.2%. All of this seems to indicate that the new method is capable of measuring the full range of sediment concentrations above 0.5 kg m-3 and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
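The mass-balance arithmetic behind the CVC method can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the water density of 1000 kg/m^3 (temperature correction omitted), the quartz-like particle density of 2650 kg/m^3, and all function and variable names are assumptions for the example.

```python
# Illustrative sketch of the constant-volume-container (CVC) mass balance.
# Assumed values: water density 1000 kg/m^3 (no temperature correction here),
# quartz-like sediment particle density 2650 kg/m^3.

RHO_WATER = 1000.0  # kg/m^3

def sediment_mass(m_total, v_cvc, rho_sediment, rho_water=RHO_WATER):
    """Sediment mass in the filled container of known volume v_cvc.

    m_total is the tared mass of the sample plus top-up water. Each kilogram
    of sediment displaces rho_water / rho_sediment kilograms of water, so
    m_total = rho_water * v_cvc + m_s * (1 - rho_water / rho_sediment).
    """
    return (m_total - rho_water * v_cvc) / (1.0 - rho_water / rho_sediment)

def concentration(m_sample, m_total, v_cvc, rho_sediment, rho_water=RHO_WATER):
    """Sediment concentration (kg/m^3) of the original sample."""
    m_s = sediment_mass(m_total, v_cvc, rho_sediment, rho_water)
    v_sample = (m_sample - m_s) / rho_water + m_s / rho_sediment
    return m_s / v_sample

# Example: 1-L container, 0.6-kg sample containing about 0.1 kg of sediment
c = concentration(m_sample=0.6, m_total=1.0622642, v_cvc=1e-3, rho_sediment=2650.0)
```

Solving the mass balance for the sediment mass is what lets a single extra weighing replace oven-drying.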
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for achieving the coarse frequency estimate (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
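The paper's modified zero-crossing technique is not reproduced here; the sketch below only illustrates the plain zero-crossing idea behind a coarse estimate, using a synthetic 50 Hz tone as an assumed test signal.

```python
import numpy as np

def zero_crossing_frequency(x, fs):
    """Coarse frequency estimate from sign changes (one dominant tone assumed)."""
    signs = np.signbit(x)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    duration = (len(x) - 1) / fs   # seconds spanned by the samples
    return crossings / (2.0 * duration)  # two zero crossings per cycle

# Synthetic 50 Hz tone sampled at 1 kHz for 1 s (phase offset avoids exact zeros)
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50.0 * t + 0.3)
f_est = zero_crossing_frequency(x, fs)
```

A coarse estimate like this narrows the search range before a fine (e.g. interpolated spectral) step refines it.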
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
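In its simplest raster form, the GIS calculation these records describe reduces to summing depth times cell area over a bathymetry grid. The sketch below is a minimal illustration under that assumption, not the tool the abstract refers to; the function name and NaN-masking convention are invented for the example.

```python
import numpy as np

def lake_volume(depth_grid, cell_size_m):
    """Lake volume from a bathymetry raster: sum of depth * cell area.

    depth_grid: 2-D array of water depths in metres, NaN outside the lake.
    """
    depths = np.nan_to_num(depth_grid, nan=0.0)  # treat non-lake cells as zero depth
    return float(depths.sum()) * cell_size_m ** 2

# Toy 2x2 raster with 10 m cells; one cell lies outside the lake
grid = np.array([[1.0, 2.0], [2.0, np.nan]])
v = lake_volume(grid, cell_size_m=10.0)  # (1 + 2 + 2) m * 100 m^2 = 500 m^3
```

Unlike the frustum-of-a-cone shortcut, this makes no assumption about basin shape beyond the raster resolution.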
Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.
ERIC Educational Resources Information Center
Algina, James; Moulder, Bradley C.; Moser, Barry K.
2002-01-01
Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…
Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?
Hey, Spencer Phillips; Kimmelman, Jonathan
2016-10-01
Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measure of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach.
Using Photogrammetry to Estimate Tank Waste Volumes from Video
Field, Jim G.
2013-03-27
Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos and the results of using photogrammetry to estimate the volume of waste piles in the CCMS test video.
Accurately measuring volume of soil samples using low cost Kinect 3D scanner
NASA Astrophysics Data System (ADS)
van der Sterre, B.; Hut, R.; Van De Giesen, N.
2012-12-01
The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture from samples is determining the volume of the sample. Currently this is mostly done by the very time-consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
Accurately measuring volume of soil samples using low cost Kinect 3D scanner
NASA Astrophysics Data System (ADS)
van der Sterre, Boy-Santhos; Hut, Rolf; van de Giesen, Nick
2013-04-01
The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture from samples is determining the volume of the sample. Currently this is mostly done by the very time-consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method for obtaining gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles, and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies and across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions, and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
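Once an image has been classified into canopy and sky pixels, the gap fraction itself is simply the sky-pixel share. The sketch below assumes that classification has already been done; it is a generic illustration, not part of the authors' pipeline.

```python
import numpy as np

def gap_fraction(sky_mask):
    """Gap fraction of a classified cover photo: the share of sky pixels.

    sky_mask: 2-D boolean array, True where a pixel is sky (a canopy gap).
    """
    return float(np.mean(sky_mask))

# Toy 2x2 classification: one sky pixel out of four
mask = np.array([[True, False], [False, False]])
gf = gap_fraction(mask)  # 0.25
```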
Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.
Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro
2016-01-12
The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
[Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER Statement].
Stevens, Gretchen A; Alkema, Leontine; Black, Robert E; Boerma, J Ties; Collins, Gary S; Ezzati, Majid; Grove, John T; Hogan, Daniel R; Hogan, Margaret C; Horton, Richard; Lawn, Joy E; Marušic, Ana; Mathers, Colin D; Murray, Christopher J L; Rudan, Igor; Salomon, Joshua A; Simpson, Paul J; Vos, Theo; Welch, Vivian
2017-01-01
Measurements of health indicators are rarely available for every population and period of interest, and available data may not be comparable. The Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER) define best reporting practices for studies that calculate health estimates for multiple populations (in time or space) using multiple information sources. Health estimates that fall within the scope of GATHER include all quantitative population-level estimates (including global, regional, national, or subnational estimates) of health indicators, including indicators of health status, incidence and prevalence of diseases, injuries, and disability and functioning; and indicators of health determinants, including health behaviours and health exposures. GATHER comprises a checklist of 18 items that are essential for best reporting practice. A more detailed explanation and elaboration document, describing the interpretation and rationale of each reporting item along with examples of good reporting, is available on the GATHER website (http://gather-statement.org).
Paterson, Nicholas R.; Lavallée, Luke T.; Nguyen, Laura N.; Witiuk, Kelsey; Ross, James; Mallick, Ranjeeta; Shabana, Wael; MacDonald, Blair; Scheida, Nicola; Fergusson, Dean; Momoli, Franco; Cnossen, Sonya; Morash, Christopher; Cagiannos, Ilias; Breau, Rodney H.
2016-01-01
Introduction: We sought to evaluate the accuracy of prostate volume estimates in patients who received both a preoperative transrectal ultrasound (TRUS) and magnetic resonance imaging (MRI) in relation to the referent pathological specimen post-radical prostatectomy. Methods: Patients receiving both TRUS and MRI prior to radical prostatectomy at one academic institution were retrospectively analyzed. TRUS and MRI volumes were estimated using the prolate ellipsoid formula. TRUS volumes were collected from sonography reports. MRI volumes were estimated by two blinded raters and the mean of the two was used for analyses. Pathological volume was calculated using a standard fluid displacement method. Results: Three hundred and eighteen (318) patients were included in the analysis. MRI was slightly more accurate than TRUS based on intraclass correlation (0.83 vs. 0.74) and absolute risk bias (higher proportion of estimates within 5, 10, and 20 cc of pathological volume). For TRUS, 87 of 298 (29.2%) prostates without median lobes differed by >10 cc of specimen volume and 22 of 298 (7.4%) differed by >20 cc. For MRI, 68 of 298 (22.8%) prostates without median lobes differed by >10 cc of specimen volume, while only 4 of 298 (1.3%) differed by >20 cc. Conclusions: MRI and TRUS prostate volume estimates are consistent with pathological volumes along the prostate size spectrum. MRI demonstrated better correlation with prostatectomy specimen volume in most patients and may be better suited in cases where TRUS and MRI estimates are disparate. Validation of these findings with prospective, standardized ultrasound techniques would be helpful. PMID:27878049
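The prolate ellipsoid formula used for both the TRUS and MRI estimates is standard (V = π/6 × length × width × height); the sketch below applies it, with the example dimensions chosen arbitrarily rather than taken from the study.

```python
import math

def prolate_ellipsoid_volume(length_cm, width_cm, height_cm):
    """Prostate volume by the prolate ellipsoid formula: V = pi/6 * L * W * H.

    Dimensions in cm give a volume in cm^3 (cc).
    """
    return math.pi / 6.0 * length_cm * width_cm * height_cm

# Hypothetical example dimensions (not from the study)
v = prolate_ellipsoid_volume(5.0, 4.0, 3.5)  # ~36.7 cc
```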
NASA Astrophysics Data System (ADS)
Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate, and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 μm and 108 μm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Robust and accurate fundamental frequency estimation based on dominant harmonic components.
Nakatani, Tomohiro; Irino, Toshio
2004-12-01
This paper presents a new method for robust and accurate fundamental frequency (F0) estimation in the presence of background noise and spectral distortion. Degree of dominance and dominance spectrum are defined based on instantaneous frequencies. The degree of dominance allows one to evaluate the magnitude of individual harmonic components of the speech signals relative to background noise while reducing the influence of spectral distortion. The fundamental frequency is more accurately estimated from reliable harmonic components which are easy to select given the dominance spectra. Experiments are performed using white and babble background noise with and without spectral distortion as produced by a SRAEN filter. The results show that the present method is better than previously reported methods in terms of both gross and fine F0 errors.
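The dominance-spectrum computation itself is not reproduced here; the sketch below shows only a generic final combination step, a weighted average of per-harmonic F0 estimates, with the weights standing in for the paper's degree of dominance. The function and example values are illustrative assumptions.

```python
def f0_from_harmonics(harmonic_freqs, weights):
    """Combine per-harmonic estimates f_k ~ k * F0 into a single F0.

    harmonic_freqs[k-1] is the measured frequency of harmonic k; weights
    stand in for the reliability of each harmonic (e.g. its dominance).
    """
    weighted = sum(w * f / (k + 1)
                   for k, (f, w) in enumerate(zip(harmonic_freqs, weights)))
    return weighted / sum(weights)

# Three harmonics of a ~100 Hz voice, with decreasing reliability weights
f0 = f0_from_harmonics([100.2, 199.6, 300.9], [1.0, 0.5, 0.25])
```

Down-weighting noisy harmonics is what makes such a combination robust to background noise and spectral distortion.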
NASA Astrophysics Data System (ADS)
Gutenko, Ievgeniia; Peng, Hao; Gu, Xianfeng; Barish, Mathew; Kaufman, Arie
2016-03-01
Accurate estimation of splenic volume is crucial for the determination of disease progression and response to treatment for diseases that result in enlargement of the spleen. However, there is no consensus with respect to the use of single or multiple one-dimensional, or volumetric measurement. Existing methods for human reviewers focus on measurement of cross diameters on a representative axial slice and craniocaudal length of the organ. We propose two heuristics for the selection of the optimal axial plane for splenic volume estimation: the maximal area axial measurement heuristic and the novel conformal welding shape-based heuristic. We evaluate these heuristics on time-variant data derived from both healthy and sick subjects and contrast them to established heuristics. Under certain conditions our heuristics are superior to standard practice volumetric estimation methods. We conclude by providing guidance on selecting the optimal heuristic for splenic volume estimation.
Estimating Residual Solids Volume In Underground Storage Tanks
Clark, Jason L.; Worthy, S. Jason; Martin, Bruce A.; Tihey, John R.
2014-01-08
The Savannah River Site liquid waste system consists of multiple facilities to safely receive and store legacy radioactive waste, treat, and permanently dispose waste. The large underground storage tanks and associated equipment, known as the 'tank farms', include a complex interconnected transfer system which includes underground transfer pipelines and ancillary equipment to direct the flow of waste. The waste in the tanks is present in three forms: supernatant, sludge, and salt. The supernatant is a multi-component aqueous mixture, while sludge is a gel-like substance which consists of insoluble solids and entrapped supernatant. The waste from these tanks is retrieved and treated as sludge or salt. The high level (radioactive) fraction of the waste is vitrified into a glass waste form, while the low-level waste is immobilized in a cementitious grout waste form called saltstone. Once the waste is retrieved and processed, the tanks are closed via removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations and severing/sealing external penetrations. The comprehensive liquid waste disposition system, currently managed by Savannah River Remediation, consists of 1) safe storage and retrieval of the waste as it is prepared for permanent disposition; (2) definition of the waste processing techniques utilized to separate the high-level waste fraction/low-level waste fraction; (3) disposition of LLW in saltstone; (4) disposition of the HLW in glass; and (5) closure state of the facilities, including tanks. This paper focuses on determining the effectiveness of waste removal campaigns through monitoring the volume of residual solids in the waste tanks. Volume estimates of the residual solids are performed by creating a map of the residual solids on the waste tank bottom using video and still digital images. The map is then used to calculate the volume of solids remaining in the waste tank. The ability to
Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude
2009-12-01
Thesis by Jack A. Tappe, December 2009. Thesis co-advisors: Jae Jun Kim and Brij N. Agrawal. Chairman, Department of Mechanical and Astronautical Engineering: Knox T. Millsaps.
Tuck, L.K.; Pearson, Daniel K.; Cannon, M.R.; Dutton, DeAnn M.
2013-01-01
The Tongue River Member of the Tertiary Fort Union Formation is the primary source of groundwater in the Northern Cheyenne Indian Reservation in southeastern Montana. Coal beds within this formation generally contain the most laterally extensive aquifers in much of the reservation. The U.S. Geological Survey, in cooperation with the Northern Cheyenne Tribe, conducted a study to estimate the volume of water in five coal aquifers. This report presents estimates of the volume of water in five coal aquifers in the eastern and southern parts of the Northern Cheyenne Indian Reservation: the Canyon, Wall, Pawnee, Knobloch, and Flowers-Goodale coal beds in the Tongue River Member of the Tertiary Fort Union Formation. Only conservative estimates of the volume of water in these coal aquifers are presented. The volume of water in the Canyon coal was estimated to range from about 10,400 acre-feet (75 percent saturated) to 3,450 acre-feet (25 percent saturated). The volume of water in the Wall coal was estimated to range from about 14,200 acre-feet (100 percent saturated) to 3,560 acre-feet (25 percent saturated). The volume of water in the Pawnee coal was estimated to range from about 9,440 acre-feet (100 percent saturated) to 2,360 acre-feet (25 percent saturated). The volume of water in the Knobloch coal was estimated to range from about 38,700 acre-feet (100 percent saturated) to 9,680 acre-feet (25 percent saturated). The volume of water in the Flowers-Goodale coal was estimated to be about 35,800 acre-feet (100 percent saturated). Sufficient data are needed to accurately characterize coal-bed horizontal and vertical variability, which is highly complex both locally and regionally. Where data points are widely spaced, the reliability of estimates of the volume of coal beds is decreased. Additionally, reliable estimates of the volume of water in coal aquifers depend heavily on data about water levels and data about coal-aquifer characteristics. Because the data needed to
Estimating exercise stroke volume from asymptotic oxygen pulse in humans.
Whipp, B J; Higgenbotham, M B; Cobb, F C
1996-12-01
Noninvasive techniques have been devised to estimate cardiac output (Q) during exercise and thereby obviate vascular cannulation. However, although these techniques are noninvasive, they are commonly not nonintrusive to subjects' spontaneous ventilation and gas-exchange responses. We hypothesized that the exercise stroke volume (SV) and, hence, Q might be accurately estimated simply from the response pattern of two routinely determined variables: O2 uptake (VO2) and heart rate (HR). Central to the theory is the demonstration that the product of Q and mixed venous O2 content is virtually constant (k) during steady-state exercise. Thus, from the Fick equation, VO2 = Q.CaO2 - k, where CaO2 is the arterial O2 content, and the O2 pulse (O2-P) equals SV.CaO2 - (k/HR). Because CaO2 is usually relatively constant in normal subjects during exercise, O2-P should change hyperbolically with HR, asymptoting at SV.CaO2. In addition, because the asymptotic O2-P equals the slope (S) of the linear VO2-HR relationship, exercise SV may be predicted as S/CaO2. We tested this prediction in 23 normal subjects who underwent a 3-min incremental cycle-ergometer test with direct determination of CaO2 and mixed venous O2 content from indwelling catheters. The predicted SV closely reflected the measured value (r = 0.80). We therefore conclude that, in normal subjects, exercise SV may be estimated simply as five times the slope S of the linear VO2-HR relationship (where 5 is approximately 1/CaO2).
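The "five times the slope" rule above can be sketched numerically. The snippet below is an illustrative reconstruction, not the paper's code; the function name, units (VO2 in L/min, HR in beats/min), and the default CaO2 of 0.2 L O2 per L blood are assumptions:

```python
import numpy as np

def stroke_volume_from_slope(vo2_l_min, hr_bpm, cao2_l_per_l=0.2):
    """Estimate exercise stroke volume (L/beat) as S / CaO2, where S is the
    slope of the linear VO2-HR relationship. With CaO2 ~ 0.2 L O2 per L of
    blood, this reduces to the 'five times the slope' rule of thumb."""
    slope, _intercept = np.polyfit(hr_bpm, vo2_l_min, 1)  # slope in L O2 per beat
    return slope / cao2_l_per_l                            # L blood per beat
```

For example, if VO2 rises by 0.02 L/min per beat/min of HR, the estimated SV is 0.02 / 0.2 = 0.1 L (100 mL) per beat.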
NASA Astrophysics Data System (ADS)
Moskalensky, Alexander E.; Yurkin, Maxim A.; Konokhova, Anastasiya I.; Strokotov, Dmitry I.; Nekrasov, Vyacheslav M.; Chernyshev, Andrei V.; Tsvetovskaya, Galina A.; Chikova, Elena D.; Maltsev, Valeri P.
2013-01-01
We introduce a novel approach for determining the volume and shape of individual blood platelets, modeled as oblate spheroids, from angle-resolved light scattering with a flow-cytometric technique. The light-scattering profiles (LSPs) of individual platelets were measured with the scanning flow cytometer, and the platelet characteristics were determined from the solution of the inverse light-scattering problem using a precomputed database of theoretical LSPs. We revealed a phenomenon of parameter compensation, which is partly explained in the framework of the anomalous diffraction approximation. To overcome this problem, additional a priori information on the platelet refractive index was used. It allowed us to determine the size of each platelet with subdiffraction precision, independent of the particular value of the platelet aspect ratio. The shape (spheroidal aspect ratio) distributions of platelets showed substantial differences between native samples and samples activated by 10 μM adenosine diphosphate. We expect that the new approach may find use in hematological analyzers for accurate measurement of platelet volume distribution and for determination of platelet activation efficiency.
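The oblate-spheroid model above implies a simple closed-form volume once the semi-axes are recovered. The helper below is a hedged sketch of that geometry only (parameter names are illustrative; the inverse light-scattering step that produces the axes is the hard part and is not reproduced here):

```python
import math

def oblate_spheroid_volume(equatorial_radius, aspect_ratio):
    """Volume of an oblate spheroid with equatorial semi-axis a and polar
    semi-axis c = aspect_ratio * a (aspect_ratio <= 1): V = (4/3)*pi*a^2*c.
    With aspect_ratio = 1 this reduces to the volume of a sphere."""
    a = equatorial_radius
    c = aspect_ratio * a
    return 4.0 / 3.0 * math.pi * a * a * c
```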
Hwang, Beomsoo; Jeon, Doyoung
2015-04-09
In exoskeletal robots, quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.
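The extraction step described above amounts to subtracting the limb's own dynamics from the sensed joint torque. Below is a minimal single-joint sketch under a hypothetical point-mass limb model; the mass, length, and friction values are illustrative placeholders, not EXOwheel's identified parameters:

```python
import math

def muscular_torque(tau_measured, q, qd, qdd, m=8.0, l=0.4, b=0.05):
    """Estimate active muscular torque at one joint by removing the limb
    dynamics from the sensed torque:
        tau_muscle = tau_measured - (I*qdd + b*qd + m*g*l*sin(q))
    Point-mass shank model: m (kg), l (m, joint-to-mass-centre distance),
    b (viscous friction) stand in for user-specific parameters that would
    be identified per subject."""
    g = 9.81
    inertia = m * l * l                    # point-mass moment of inertia about the joint
    gravity = m * g * l * math.sin(q)      # gravitational torque at joint angle q
    return tau_measured - (inertia * qdd + b * qd + gravity)
```

In a static hold (no velocity or acceleration), the sensed torque equals the gravitational term, so the estimated muscular torque is zero.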
Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong
2016-01-01
This paper describes an adaptive disturbance-acceleration estimation and correction approach for an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The proposed approach relies on a Kalman filter in which the attitude error, the gyroscope zero-offset error, and the disturbance-acceleration error are estimated. By switching the filter decay coefficient of the disturbance-acceleration model between different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested in three disturbance-acceleration modes (non-acceleration, vibration-acceleration, and sustained-acceleration) by digital simulation, and the proposed approach was also tested in a kinematic vehicle experiment. The simulations and kinematic vehicle experiments show that the disturbance acceleration in each mode can be accurately estimated and corrected. Moreover, compared with a complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions. PMID:27754469
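The mode-switched decay coefficient can be sketched as a first-order Gauss-Markov model whose forgetting factor depends on the detected acceleration mode. The thresholds and coefficient values below are illustrative assumptions, not the paper's tuned values:

```python
def decay_coefficient(accel_norm, g=9.81, vib_thresh=0.4, sustained_thresh=1.5):
    """Pick the filter decay coefficient for the disturbance-acceleration
    state from the detected acceleration mode, based on how far the measured
    specific-force magnitude deviates from gravity (m/s^2)."""
    dev = abs(accel_norm - g)
    if dev < vib_thresh:          # non-acceleration mode: forget disturbance quickly
        return 0.1
    elif dev < sustained_thresh:  # vibration-acceleration mode
        return 0.5
    else:                         # sustained-acceleration mode: track the disturbance
        return 0.95

def propagate_disturbance(a_prev, c):
    """First-order Gauss-Markov propagation of the disturbance state:
    a_k = c * a_{k-1} (process noise omitted in this sketch)."""
    return c * a_prev
```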
Trabant, Dennis C.
1999-01-01
The volume of four of the largest glaciers on Iliamna Volcano was estimated using the volume model developed for evaluating glacier volumes on Redoubt Volcano. The volume model is controlled by simulated valley cross sections that are constructed by fitting third-order polynomials to the shape of the valley walls exposed above the glacier surface. Critical cross sections were field checked by sounding with ice-penetrating radar during July 1998. The estimated volumes of perennial snow and glacier ice for Tuxedni, Lateral, Red, and Umbrella Glaciers are 8.6, 0.85, 4.7, and 0.60 cubic kilometers, respectively. The estimated volume of snow and ice on the upper 1,000 meters of the volcano is about 1 cubic kilometer. The volume estimates are thought to have errors of no more than ±25 percent. The volumes estimated for the four largest glaciers are more than three times the total volume of snow and ice on Mount Rainier and about 82 times the total volume of snow and ice that was on Mount St. Helens before its May 18, 1980 eruption. Volcanoes mantled by substantial snow and ice covers have produced the largest and most catastrophic lahars and floods. Therefore, it is prudent to expect that, during an eruptive episode, flooding and lahars threaten all of the drainages heading on Iliamna Volcano. On the other hand, debris avalanches can happen at any time. Fortunately, their influence is generally limited to the area within a few kilometers of the summit.
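The cross-section construction described above (fit a third-order polynomial to the exposed valley walls, then take the ice area between the glacier surface and the extrapolated bed) can be sketched as follows. This is an illustrative reconstruction with assumed inputs and units (metres), not the Redoubt/Iliamna model itself:

```python
import numpy as np

def ice_cross_section_area(wall_x, wall_z, surface_z):
    """Estimate a glacier's cross-sectional ice area by fitting a cubic
    polynomial to (x, z) points on the valley walls exposed above the
    glacier surface, then integrating (surface_z - bed) wherever the
    fitted bed lies below the surface."""
    coeffs = np.polyfit(wall_x, wall_z, 3)          # simulated valley bed z = p(x)
    x = np.linspace(min(wall_x), max(wall_x), 2001)
    depth = surface_z - np.polyval(coeffs, x)
    depth[depth < 0] = 0.0                          # only count where bed < surface
    dx = x[1] - x[0]
    return float(np.sum(depth[:-1] + depth[1:]) * dx / 2.0)  # trapezoid rule
```

For a parabolic valley bed sampled only above the surface, the fitted cubic recovers the bed and the integrated area matches the analytic value closely.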
Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan
2015-08-11
Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
Budget estimates fiscal year 1995: Volume 10
Not Available
1994-02-01
This report contains the Nuclear Regulatory Commission (NRC) fiscal year budget justification to Congress. The budget provides estimates for salaries and expenses and for the Office of the Inspector General for fiscal year 1995. The NRC 1995 budget request is $546,497,000. This is an increase of $11,497,000 above the proposed level for FY 1994. The NRC FY 1995 budget request is 3,218 FTEs. This is a decrease of 75 FTEs below the 1994 proposed level.
Accurate estimation of object location in an image sequence using helicopter flight data
NASA Technical Reports Server (NTRS)
Tang, Yuan-Liang; Kasturi, Rangachar
1994-01-01
In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images
Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785
Contaminated Soil Volume Estimation at the Maywood Site - 12292
Johnson, Robert; Quinn, John; Durham, Lisa; Moore, James; Hays, David
2012-07-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program properties, Argonne National Laboratory assisted the U.S. Army Corps of Engineers (USACE) New York District in revising contaminated soil volume estimates for the remaining areas of the Stepan/Sears properties that require soil remediation. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma (DHG) survey information for hundreds of soil core locations; both sampling results and DHG data were coded to identify whether the results indicated the presence of contamination above site cleanup requirements. Significant effort was invested in developing complete electronic data sets for the site by incorporating data contained in various scanned documents, maps, etc. The updating process yielded both a best-guess estimate of contamination volumes and upper and lower bounds on the volume estimate that reflected the estimate's uncertainty. The site-wide contaminated volume estimate (with associated uncertainty) was adjusted to reflect areas where remediation was complete; the result was a revised estimate of the remaining soil volumes requiring remediation that the USACE could use for planning. Other environmental projects may benefit from this process for estimating the volume of contaminated soil. A comparison of sample and DHG results for various stations with the site ICSM provides
Accurate estimates of age at maturity from the growth trajectories of fishes and other ectotherms.
Honsey, Andrew E; Staples, David F; Venturelli, Paul A
2017-01-01
Age at maturity (AAM) is a key life history trait that provides insight into ecology, evolution, and population dynamics. However, maturity data can be costly to collect or may not be available. Life history theory suggests that growth is biphasic for many organisms, with a change-point in growth occurring at maturity. If so, then it should be possible to use a biphasic growth model to estimate AAM from growth data. To test this prediction, we used the Lester biphasic growth model in a likelihood profiling framework to estimate AAM from length at age data. We fit our model to simulated growth trajectories to determine minimum data requirements (in terms of sample size, precision in length at age, and the cost to somatic growth of maturity) for accurate AAM estimates. We then applied our method to a large walleye Sander vitreus data set and show that our AAM estimates are in close agreement with conventional estimates when our model fits well. Finally, we highlight the potential of our method by applying it to length at age data for a variety of ectotherms. Our method shows promise as a tool for estimating AAM and other life history traits from contemporary and historical samples.
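The change-point idea above can be illustrated with a toy profiling procedure: for each candidate age at maturity, fit separate straight lines to the immature and mature phases and keep the candidate with the smallest residual sum of squares. This is a deliberately simplified stand-in for likelihood profiling of the Lester biphasic model, with assumed names and synthetic data:

```python
import numpy as np

def estimate_aam(age, length):
    """Profile a change-point in biphasic linear growth: for each candidate
    age-at-maturity, fit a line to each phase and minimize the total
    residual sum of squares across both segments."""
    ages = np.unique(age)
    best_aam, best_sse = None, np.inf
    for cand in ages[2:-2]:                 # keep a few points in each segment
        sse = 0.0
        for mask in (age <= cand, age > cand):
            coef = np.polyfit(age[mask], length[mask], 1)
            resid = length[mask] - np.polyval(coef, age[mask])
            sse += float(resid @ resid)
        if sse < best_sse:
            best_sse, best_aam = sse, cand
    return best_aam
```

Note the one-point ambiguity at a continuous kink: the point at the true AAM lies on both lines, so noiseless data admit two equally good change-points one sample apart.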
Comparison of volume estimation methods for pancreatic islet cells
NASA Astrophysics Data System (ADS)
Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan
2016-03-01
In this contribution we study different methods of automatic volume estimation for pancreatic islets which can be used in the quality control step prior to the islet transplantation. The total islet volume is an important criterion in the quality control. Also, the individual islet volume distribution is interesting -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The input of the volume estimation methods are segmented images of individual islets. The segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
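The simplest of the shape-assumption methods above, the spherical model, maps a segmented 2D islet area to a volume in closed form. The helper below is an illustrative sketch of that assumption only (names and units are mine, and the segmentation step is taken as given):

```python
import math

def islet_volume_sphere(area_um2):
    """Estimate islet volume (um^3) from its segmented 2D cross-sectional
    area assuming a spherical shape: equivalent-circle diameter
    d = 2*sqrt(A/pi), then V = (pi/6)*d^3."""
    d = 2.0 * math.sqrt(area_um2 / math.pi)
    return math.pi * d ** 3 / 6.0
```

For an islet whose profile is a circle of radius 50 um, this returns the volume of a 50-um-radius sphere, about 5.2e5 um^3.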
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgery such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error, which can lead to poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative (pre-LASIK) data to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exactly measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for another post-LASIK patient agreed well with their visual outcomes after cataract surgery.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2008-11-30
Prediction of the microbial growth rate as a response to changing temperatures is an important aspect of controlling food safety and food spoilage. Accurate model predictions of microbial evolution call for correct model structures and reliable parameter values with good statistical quality. Given the widely accepted validity of the Cardinal Temperature Model with Inflection (CTMI) [Rosso, L., Lobry, J. R., Bajard, S. and Flandrois, J. P., 1995. Convenient model to describe the combined effects of temperature and pH on microbial growth, Applied and Environmental Microbiology, 61: 610-616], this paper focuses on the accurate estimation of its four parameters (T(min), T(opt), T(max) and μ(opt)) by applying the technique of optimal experiment design for parameter estimation (OED/PE). This secondary model describes the influence of temperature on the microbial specific growth rate from the minimum to the maximum temperature for growth. Dynamic temperature profiles are optimized within two temperature regions ([15 °C, 43 °C] and [15 °C, 45 °C]), focusing on the minimization of the parameter estimation (co)variance (D-optimal design). The optimal temperature profiles are implemented in a computer-controlled bioreactor, and the CTMI parameters are identified from the resulting experimental data. Approximately equal CTMI parameter values were derived irrespective of the temperature region, except for T(max), which could only be estimated accurately from the optimal experiments within [15 °C, 45 °C]. This observation underlines the importance of selecting the upper temperature constraint for OED/PE as close as possible to the true T(max). Cardinal temperature estimates resulting from designs within [15 °C, 45 °C] correspond with values found in the literature, are characterized by a small uncertainty, and yield a good result during validation. As compared to estimates from non-optimized dynamic
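For reference, one common form of the CTMI secondary model (Rosso et al., 1995) cited above can be written directly in code; the parameter values in the usage note are illustrative, not from this paper's experiments:

```python
def ctmi_growth_rate(T, T_min, T_opt, T_max, mu_opt):
    """Cardinal Temperature Model with Inflection: specific growth rate as
    a function of temperature, zero outside (T_min, T_max) and peaking at
    mu_opt when T = T_opt."""
    if T <= T_min or T >= T_max:
        return 0.0
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
    return mu_opt * num / den
```

For example, with T_min = 5, T_opt = 37, T_max = 45 (°C) and mu_opt = 1.2 h⁻¹, the curve returns 1.2 at 37 °C and falls to zero at both cardinal extremes.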
Rashid, Mamoon; Pain, Arnab
2013-01-01
Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation.
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering these factors, an optimized B-spline-based recursive filter, combining B-spline transforms with a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, because each volumetric image contains different wave-number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave-number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
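The dependence of interpolation bias on fractional position and wave number can be demonstrated empirically in 1-D. The sketch below measures the error that plain linear interpolation introduces when shifting a sampled cosine by a sub-sample amount; it is an illustration of the phenomenon the optimized B-spline filter is designed to suppress, not the paper's method:

```python
import numpy as np

def linear_interp_bias(wavenumber, frac_shift, n=256):
    """RMS difference between a linearly interpolated sub-sample shift of a
    sampled cosine and the analytically shifted signal. The error grows
    with wave number and peaks near frac_shift = 0.5."""
    x = np.arange(n)
    signal = np.cos(wavenumber * x)
    interp = np.interp(x[:-1] + frac_shift, x, signal)   # linear interpolation
    exact = np.cos(wavenumber * (x[:-1] + frac_shift))   # true shifted values
    return float(np.sqrt(np.mean((interp - exact) ** 2)))
```

Evaluating this over a grid of fractional shifts and wave numbers reproduces the qualitative "positional error as a function of fractional position and wave number" behaviour described above.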
Role of the Molar Volume on Estimated Diffusion Coefficients
NASA Astrophysics Data System (ADS)
Santra, Sangeeta; Paul, Aloke
2015-09-01
The role of the molar volume on the estimated diffusion parameters has been speculated for decades. The Matano-Boltzmann method was the first to be developed for the estimation of the variation of the interdiffusion coefficients with composition. However, this could be used only when the molar volume varies ideally or remains constant. Although there are no such systems, this method is still being used to consider the ideal variation. More efficient methods were developed by Sauer-Freise, Den Broeder, and Wagner to tackle this problem. However, there is a lack of research indicating the most efficient method. We have shown that Wagner's method is the most suitable one when the molar volume deviates from the ideal value. Similarly, there are two methods for the estimation of the ratio of intrinsic diffusion coefficients at the Kirkendall marker plane proposed by Heumann and van Loo. The Heumann method, like the Matano-Boltzmann method, is suitable to use only when the molar volume varies more or less ideally or remains constant. In most of the real systems, where molar volume deviates from the ideality, it is safe to use the van Loo method. We have shown that the Heumann method introduces large errors even for a very small deviation of the molar volume from the ideal value. On the other hand, the van Loo method is relatively less sensitive to it. Overall, the estimation of the intrinsic diffusion coefficient is more sensitive than the interdiffusion coefficient.
A time-accurate finite volume method valid at all flow velocities
NASA Astrophysics Data System (ADS)
Kim, S.-W.
1993-07-01
A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic, and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. Comparisons of three generally accepted time-advancing schemes, i.e., Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA), are made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method, incorporating the ITA, is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil.
The estimation of tumor cell percentage for molecular testing by pathologists is not accurate.
Smits, Alexander J J; Kummer, J Alain; de Bruin, Peter C; Bol, Mijke; van den Tweel, Jan G; Seldenrijk, Kees A; Willems, Stefan M; Offerhaus, G Johan A; de Weger, Roel A; van Diest, Paul J; Vink, Aryan
2014-02-01
Molecular pathology is becoming more and more important in present-day pathology. A major challenge for any molecular test is its ability to reliably detect mutations in samples consisting of mixtures of tumor cells and normal cells, especially when the tumor content is low. The minimum percentage of tumor cells required to detect genetic abnormalities is a major variable. Information on the tumor cell percentage is essential for a correct interpretation of the result. In daily practice, the percentage of tumor cells is estimated by pathologists on hematoxylin and eosin (H&E)-stained slides, the reliability of which has been questioned. This study aimed to determine the reliability of tumor cell percentages in tissue samples as estimated by pathologists. On 47 H&E-stained slides of lung tumors, a tumor area was marked. The percentage of tumor cells within this area was estimated independently by nine pathologists, using categories of 0-5%, 6-10%, 11-20%, 21-30%, and so on, up to 91-100%. As the gold standard, the percentage of tumor cells was counted manually. On average, the range between the lowest and the highest estimate per sample was 6.3 categories. In 33% of estimates, the deviation from the gold standard was at least three categories. The mean absolute deviation was 2.0 categories (range between observers: 1.5-3.1 categories). There was a significant difference between the observers (P<0.001). If 20% of tumor cells were considered the lower limit to detect a mutation, samples with an insufficient tumor cell percentage (<20%) would have been estimated to contain enough tumor cells in 27/72 (38%) of observations, possibly causing false negative results. In conclusion, estimates of tumor cell percentages on H&E-stained slides are not accurate, which could result in misinterpretation of test results. Reliability could possibly be improved by using a training set with feedback.
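The category scheme used for the estimates can be sketched as follows; the helper names are hypothetical, and only the bin edges come from the abstract:

```python
def percent_category(p):
    """Map a tumor-cell percentage to the abstract's estimation bins:
    0-5%, 6-10%, 11-20%, 21-30%, ..., 91-100% (11 categories, index 0-10)."""
    if p <= 5:
        return 0
    if p <= 10:
        return 1
    return min(2 + (int(p) - 11) // 10, 10)

def category_deviation(estimate_pct, counted_pct):
    """Deviation, in categories, between a pathologist's estimate and the
    manually counted gold standard, the unit used for the reported errors."""
    return abs(percent_category(estimate_pct) - percent_category(counted_pct))
```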
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-07
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems.
Lake, Douglas E; Moorman, J Randall
2011-01-01
Entropy estimation is useful but difficult in short time series. For example, automated detection of atrial fibrillation (AF) in very short heart beat interval time series would be useful in patients with cardiac implantable electronic devices that record only from the ventricle. Such devices require efficient algorithms, and the clinical situation demands accuracy. Toward these ends, we optimized the sample entropy measure, which reports the probability that short templates will match with others within the series. We developed general methods for the rational selection of the template length m and the matching tolerance r. The major innovation was to allow r to vary so that sufficient matches are found for confident entropy estimation, with conversion of the final probability to a density by dividing by the matching region volume, 2r(m). The optimized sample entropy estimate and the mean heart beat interval each contributed to accurate detection of AF in as few as 12 heartbeats. The final algorithm, called the coefficient of sample entropy (COSEn), was developed using the canonical MIT-BIH database and validated in a new and much larger set of consecutive Holter monitor recordings from the University of Virginia. In patients over the age of 40 yr old, COSEn has high degrees of accuracy in distinguishing AF from normal sinus rhythm in 12-beat calculations performed hourly. The most common errors are atrial or ventricular ectopy, which increase entropy despite sinus rhythm, and atrial flutter, which can have low or high entropy states depending on dynamics of atrioventricular conduction.
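A minimal sketch of the quantities involved, assuming the standard sample-entropy definition; the paper's adaptive rule for choosing r is not reproduced here, so the fixed-r variant below is only an illustration:

```python
import math

def sample_entropy(x, m=1, r=30.0):
    """Minimal sample-entropy sketch: SampEn = -ln(A/B), where B counts
    template matches of length m and A matches of length m+1, within
    tolerance r (Chebyshev distance). Self-matches are excluded."""
    n = len(x)
    def matches(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm + 1):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    return -math.log(matches(m + 1) / matches(m))

def cosen(rr, m=1, r=30.0):
    """COSEn-style quantity: convert the match probability to a density by
    subtracting ln(2r), then subtract ln(mean RR interval). The variable-r
    selection described in the abstract is omitted."""
    return sample_entropy(rr, m, r) - math.log(2 * r) - math.log(sum(rr) / len(rr))
```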
Rogers, Kevin J; Finn, Anthony
2017-02-01
Acoustic atmospheric tomography calculates temperature and wind velocity fields in a slice or volume of atmosphere based on travel time estimates between strategically located sources and receivers. The technique discussed in this paper uses the natural acoustic signature of an unmanned aerial vehicle as it overflies an array of microphones on the ground. The sound emitted by the aircraft is recorded on-board and by the ground microphones. The group velocities of the intersecting sound rays are then derived by comparing these measurements. Tomographic inversion is used to estimate the temperature and wind fields from the group velocity measurements. This paper describes a technique for deriving travel time (and hence group velocity) with an accuracy of 0.1% using these assets. This is shown to be sufficient to obtain highly plausible tomographic inversion results that correlate well with independent SODAR measurements.
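The travel-time step can be illustrated with a brute-force cross-correlation sketch; the function name is hypothetical, and the sub-sample interpolation and source-motion compensation the paper's 0.1% accuracy would require are omitted:

```python
def travel_time_lag(onboard, ground, fs):
    """Slide the on-board signature along the ground recording and return
    the lag (in seconds, given sample rate fs) with the largest
    correlation. A sketch of the delay-estimation idea only."""
    best_lag, best = 0, float("-inf")
    for lag in range(len(ground) - len(onboard) + 1):
        c = sum(a * ground[lag + i] for i, a in enumerate(onboard))
        if c > best:
            best, best_lag = c, lag
    return best_lag / fs
```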
Lamb mode selection for accurate wall loss estimation via guided wave tomography
Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.
2014-02-18
Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, comparing their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface.
Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates
NASA Astrophysics Data System (ADS)
Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo
2017-03-01
The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for the systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ either for geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.
NASA Astrophysics Data System (ADS)
Bengulescu, Marc; Blanc, Philippe; Boilley, Alexandre; Wald, Lucien
2017-02-01
This study investigates the characteristic time-scales of variability found in long-term time-series of daily means of estimates of surface solar irradiance (SSI). The study is performed at various levels to better understand the causes of variability in the SSI. First, the variability of the solar irradiance at the top of the atmosphere is scrutinized. Then, estimates of the SSI in cloud-free conditions as provided by the McClear model are dealt with, in order to reveal the influence of the clear atmosphere (aerosols, water vapour, etc.). Lastly, the role of clouds on variability is inferred by the analysis of in-situ measurements. A description of how the atmosphere affects SSI variability is thus obtained on a time-scale basis. The analysis is also performed with estimates of the SSI provided by the satellite-derived HelioClim-3 database and by two numerical weather re-analyses: ERA-Interim and MERRA2. It is found that HelioClim-3 estimates render an accurate picture of the variability found in ground measurements, not only globally, but also with respect to individual characteristic time-scales. On the contrary, the variability found in re-analyses correlates poorly with all scales of ground measurements variability.
Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet
2016-05-01
Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
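The correction amounts to fitting an individual HR-to-VO2 calibration line from the step test and applying it to HR with the thermal component subtracted. A minimal sketch with hypothetical helper names:

```python
def hr_vo2_line(hr_points, vo2_points):
    """Least-squares HR -> VO2 calibration line, as would be fitted from
    a submaximal step test (slope, intercept)."""
    n = len(hr_points)
    mx, my = sum(hr_points) / n, sum(vo2_points) / n
    sxx = sum((x - mx) ** 2 for x in hr_points)
    sxy = sum((x - mx) * (y - my) for x, y in zip(hr_points, vo2_points))
    slope = sxy / sxx
    return slope, my - slope * mx

def vo2_from_hr(hr_work, delta_hr_thermal, slope, intercept):
    """Remove the thermal component from working HR before applying the
    calibration, per the Vogt-style correction described above."""
    return slope * (hr_work - delta_hr_thermal) + intercept
```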
Granata, Daniele; Carnevale, Vincenzo
2016-01-01
The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variable needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-07
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
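The train-then-marginalize pipeline can be sketched with plain histograms standing in for the density estimates; the function names and bin counts are illustrative, not the paper's implementation:

```python
from collections import defaultdict

def fit_conditional(train_pairs, n_bins=10, f_max=1.0, d_max=1.0):
    """Estimate p(dose | feature) by binning training (feature, dose)
    pairs into a 2-D histogram and normalising each feature row."""
    hist = defaultdict(lambda: [0] * n_bins)
    def b(v, vmax):
        return min(int(n_bins * v / vmax), n_bins - 1)
    for f, d in train_pairs:
        hist[b(f, f_max)][b(d, d_max)] += 1
    return {fb: [c / sum(row) for c in row] for fb, row in hist.items()}

def predict_dvh(cond, new_features, n_bins=10, f_max=1.0):
    """Marginalise the conditional over the new patient's feature
    distribution, then accumulate from the top dose bin down: entry k is
    the volume fraction receiving at least dose level k (the DVH)."""
    def b(v, vmax):
        return min(int(n_bins * v / vmax), n_bins - 1)
    p_dose = [0.0] * n_bins
    for f in new_features:
        row = cond.get(b(f, f_max))
        if row:
            for k, p in enumerate(row):
                p_dose[k] += p / len(new_features)
    return [sum(p_dose[k:]) for k in range(n_bins)]
```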
MIDAS robust trend estimator for accurate GPS station velocities without step detection.
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj - xi)/(tj - ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
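The core of the estimator, a median of slopes over data pairs separated by about one year, can be sketched in a few lines. This is a simplified reading of the description above: the outlier re-pass, the gap handling, and the uncertainty estimate are all omitted:

```python
from statistics import median

def midas_trend(times, values, pair_gap=1.0, tol=1e-3):
    """Median of slopes over data pairs separated by roughly pair_gap
    (one year, in the units of `times`). Exact one-year pairs cancel
    seasonality, and the median resists steps and outliers."""
    slopes = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if times[j] - times[i] >= pair_gap - tol:
                slopes.append((values[j] - values[i]) / (times[j] - times[i]))
                break
    return median(slopes)
```

On a synthetic weekly series with a 5 mm/yr trend, an annual cycle, and a mid-series step, the one-year pairing leaves the median slope at the true trend.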
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitude of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
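The two-step scheme (rate the index velocity against ADCP-measured channel velocity, then low-pass filter to remove the tides) can be sketched as follows. The moving average is a crude stand-in for the operational tidal filters (e.g. Godin), and hourly sampling is an assumption:

```python
def rate_index_velocity(index_v, channel_v):
    """Least-squares line relating index velocity to ADCP-measured mean
    channel velocity: the calibration step the abstract identifies as
    the dominant error source. Returns (slope, intercept)."""
    n = len(index_v)
    mx, my = sum(index_v) / n, sum(channel_v) / n
    sxx = sum((x - mx) ** 2 for x in index_v)
    sxy = sum((x - mx) * (y - my) for x, y in zip(index_v, channel_v))
    slope = sxy / sxx
    return slope, my - slope * mx

def lowpass(series, window=25):
    """Centred moving average over about one tidal day of hourly data,
    a rough sketch of the de-tiding (low-pass filtering) step."""
    half = window // 2
    out = []
    for i in range(half, len(series) - half):
        seg = series[i - half:i + half + 1]
        out.append(sum(seg) / len(seg))
    return out
```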
Estimating carbon stocks based on forest volume-age relationship
NASA Astrophysics Data System (ADS)
Hangnan, Y.; Lee, W.; Son, Y.; Kwak, D.; Nam, K.; Moonil, K.; Taesung, K.
2012-12-01
This research attempted to estimate the potential change of forest carbon stocks between 2010 and 2110 in South Korea, using the forest cover map and National Forest Inventory (NFI) data. Allometric functions (logistic regression models) of volume-age relationships were developed to estimate carbon stock change over the coming 100 years for Pinus densiflora, Pinus koraiensis, Pinus rigida, Larix kaempferi, and Quercus spp. The current forest volume was estimated with the developed regression model and the 4th forest cover map. The future volume was predicted by the developed volume-age models with n years added to the current age. As a result, we found that the total forest volume would increase from 126.89 m^3/ha to 246.61 m^3/ha and the carbon stocks would increase from 90.55 Mg C ha^(-1) to 174.62 Mg C ha^(-1) over 100 years if the current forest remains unchanged. The carbon stocks would increase by approximately 0.84 Mg C ha^(-1) yr^(-1), which is high compared with the -0.10 to 0.28 Mg C ha^(-1) yr^(-1) reported for other northern countries (Canada, Russia, China) in previous studies. This can be attributed to the fact that mixed forest and bamboo forest were not considered in this study. Moreover, the estimates are also influenced by the fact that the change of carbon stocks was estimated without considering mortality, thinning, or changes in tree species.
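The volume-age projection and the volume-to-carbon conversion can be sketched as follows; all parameter values are illustrative defaults, not the study's fitted coefficients or species-specific factors:

```python
import math

def logistic_volume(age, v_max=250.0, k=0.05, t0=50.0):
    """Logistic volume-age curve of the kind fitted to NFI data:
    volume saturates toward v_max as stands age."""
    return v_max / (1.0 + math.exp(-k * (age - t0)))

def carbon_stock(volume_m3_ha, bcef=0.55, carbon_fraction=0.5):
    """Convert stand volume (m3/ha) to carbon (Mg C/ha) via a biomass
    conversion and expansion factor and a carbon fraction; both factors
    here are generic assumptions."""
    return volume_m3_ha * bcef * carbon_fraction
```

Projecting 100 years ahead is then just evaluating the curve at age + 100, which is the "adding n years to current age" step described above.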
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna', T.; Mykkeltveit, S.
2016-10-01
modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
Soil volume estimation in debris flow areas using lidar data in the 2014 Hiroshima, Japan rainstorm
NASA Astrophysics Data System (ADS)
Miura, H.
2015-10-01
Debris flows triggered by the rainstorm in Hiroshima, Japan on August 20th, 2014 produced extensive damage to the built-up areas in the northern part of Hiroshima city. To support various emergency response activities and early-stage recovery planning, it is important to evaluate the distribution of the soil volumes in the debris flow areas immediately after the disaster. In this study, an automated nonlinear mapping technique is applied to light detection and ranging (LiDAR)-derived digital elevation models (DEMs) observed before and after the disaster to quickly and accurately correct geometric locational errors of the data. The soil volumes generated by the debris flows are estimated by subtracting the pre- and post-event DEMs. The geomorphologic characteristics in the debris flow areas are discussed from the distribution of the estimated soil volumes.
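The DEM-differencing step can be sketched directly; this assumes the pre- and post-event grids are already co-registered (the role of the nonlinear mapping step above), and the function name is hypothetical:

```python
def debris_volumes(dem_pre, dem_post, cell_area):
    """Difference two elevation grids: cells that lost elevation
    contribute erosion volume, cells that gained contribute deposition
    volume (each in elevation units times cell_area)."""
    eroded = deposited = 0.0
    for row_pre, row_post in zip(dem_pre, dem_post):
        for z0, z1 in zip(row_pre, row_post):
            dz = z1 - z0
            if dz < 0:
                eroded += -dz * cell_area
            else:
                deposited += dz * cell_area
    return eroded, deposited
```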
Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A
2016-05-01
The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method to perform correct inference for overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods typically requiring expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available.
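A sketch of the overdispersed observer model: a beta-binomial likelihood parameterized by mean and overdispersion, plus a sigmoid with guess and lapse rates. The mean/overdispersion parameterization and the simple 1/width slope scaling are assumptions; the actual psignifit 4 parameterization differs in detail:

```python
import math

def log_betabinom(k, n, p, nu):
    """Beta-binomial log-pmf with mean p and overdispersion nu in (0, 1):
    alpha = p*(1/nu - 1), beta = (1-p)*(1/nu - 1). As nu -> 0 this
    approaches the binomial; larger nu models overdispersed observers."""
    a = p * (1.0 / nu - 1.0)
    b = (1.0 - p) * (1.0 / nu - 1.0)
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def psychometric(x, threshold, width, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma (e.g. 0.5 for
    2AFC) and lapse rate lam."""
    s = 1.0 / (1.0 + math.exp(-(x - threshold) / width))
    return gamma + (1.0 - gamma - lam) * s
```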
Accurate estimation of the RMS emittance from single current amplifier data
Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.
2002-05-31
This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with Lawrence Berkeley National Laboratory and ISIS H- ion sources.
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination change and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues alone, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify our proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and its targets is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration, so identifying the motion blur direction and length accurately is crucial for constructing the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the image spectrum can be used to obtain these parameters via the Radon transform. However, the heavy noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the bright central region in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a whole-column statistical method is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classic time series, and predicting it is valuable to the business. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the series before any prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; cubic spline interpolation and averaging are then used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the method estimates the main trend of the storage volume series accurately and contributes greatly to predicting future volume values.
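The two-stage pipeline described in this abstract (a Kalman filter to suppress noise and bridge missing samples, then spline interpolation to reconstruct the trend on a regular grid) can be sketched as follows. The scalar random-walk filter, the noise parameters, and the toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_smooth(z, q=0.1, r=1.0):
    """Scalar Kalman filter: removes noise and bridges missing (NaN) samples."""
    x = z[~np.isnan(z)][0]   # initialize state at first valid sample
    p = 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p += q                       # predict step (random-walk model)
        if np.isnan(zi):             # missing sample: keep the prediction
            out[i] = x
            continue
        k = p / (p + r)              # Kalman gain
        x += k * (zi - x)            # update with the measurement
        p *= 1.0 - k
        out[i] = x
    return out

# Irregularly sampled "dirty" series: linear trend + noise + one missing value
t = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 7.0, 8.0, 10.0])
z = 2.0 * t + np.array([0.3, -0.2, 0.4, np.nan, -0.3, 0.2, -0.1, 0.3])

trend = kalman_smooth(z)
spline = CubicSpline(t, trend)               # reconstruct the trend ...
regular = spline(np.arange(0.0, 10.5, 0.5))  # ... on a regular grid
```

The spline step is what handles the irregular sampling: once the filtered values are interpolated, downstream prediction models can assume a uniform time step.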
Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera
NASA Astrophysics Data System (ADS)
Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi
In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. A camera, on the other hand, cannot measure the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by judging the reliability of the camera rotations and devising the state vector of the Extended Kalman Filter, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.
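The gyro/camera fusion idea can be illustrated with a scalar Kalman filter on a single rotation angle: the gyro rate drives the predict step (and drifts), while a camera angle, when available, drives the update step. The bias, noise values, and update schedule below are hypothetical; the paper's actual method is a full Extended Kalman Filter with a reliability test on the camera rotations:

```python
import numpy as np

def fuse(gyro_rates, cam_angles, dt, q=1e-4, r_cam=1e-2):
    """Scalar Kalman fusion: integrate the gyro rate (predict), then
    correct with a camera angle whenever one is available
    (None = no reliable features extracted from the image)."""
    theta, p = 0.0, 1.0
    estimates = []
    for rate, cam in zip(gyro_rates, cam_angles):
        theta += rate * dt           # predict: gyro integration (drifts)
        p += q
        if cam is not None:          # update: camera observation
            k = p / (p + r_cam)
            theta += k * (cam - theta)
            p *= 1.0 - k
        estimates.append(theta)
    return np.array(estimates)

# Constant 1 rad/s rotation, gyro with a +0.2 rad/s bias,
# camera angle available only every 10th frame
n, dt, bias = 200, 0.01, 0.2
true = np.arange(1, n + 1) * dt
gyro = np.full(n, 1.0 + bias)
cam = [true[i] if i % 10 == 0 else None for i in range(n)]

est = fuse(gyro, cam, dt)
drift_only = np.cumsum(gyro) * dt    # gyro integration without correction
```

Even with the camera observing only one frame in ten, the fused estimate stays near the true angle, whereas pure gyro integration accumulates the bias without bound.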
Estimation of myocardial volume at risk from CT angiography
NASA Astrophysics Data System (ADS)
Zhu, Liangjia; Gao, Yi; Mohan, Vandana; Stillman, Arthur; Faber, Tracy; Tannenbaum, Allen
2011-03-01
The determination of myocardial volume at risk distal to coronary stenosis provides important information for prognosis and treatment of coronary artery disease. In this paper, we present a novel computational framework for estimating the myocardial volume at risk in computed tomography angiography (CTA) imagery. Initially, epicardial and endocardial surfaces, and coronary arteries are extracted using an active contour method. Then, the extracted coronary arteries are projected onto the epicardial surface, and each point on this surface is associated with its closest coronary artery using the geodesic distance measurement. The likely myocardial region at risk on the epicardial surface caused by a stenosis is approximated by the region in which all its inner points are associated with the sub-branches distal to the stenosis on the coronary artery tree. Finally, the likely myocardial volume at risk is approximated by the volume in between the region at risk on the epicardial surface and its projection on the endocardial surface, which is expected to yield computational savings over risk volume estimation using the entire image volume. Furthermore, we expect increased accuracy since, as compared to prior work using the Euclidean distance, we employ the geodesic distance in this work. The experimental results demonstrate the effectiveness of the proposed approach on pig heart CTA datasets.
Houairi, Kamel; Cassaing, Frédéric
2009-12-01
Two-wavelength interferometry combines measurements at two wavelengths λ1 and λ2 in order to increase the unambiguous range (UR) for the measurement of an optical path difference. With the usual algorithm, the UR is equal to the synthetic wavelength Λ = λ1λ2/|λ1 − λ2|, and the accuracy is a fraction of Λ. We propose here a new analytical algorithm based on arithmetic properties that estimates the absolute fringe order of interference in a noniterative way. This algorithm has attractive properties compared with the usual algorithm: it is at least as accurate as the most accurate single-wavelength measurement, while the UR is extended to several times the synthetic wavelength. The analysis presented shows how the actual UR depends on the wavelengths and on different sources of error. The simulations presented are confirmed by experimental results, showing that the new algorithm has enabled us to reach a UR of 17.3 µm, much larger than the synthetic wavelength, which is only Λ = 2.2 µm. Applications to metrology and fringe tracking are discussed.
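The synthetic wavelength in the usual two-wavelength algorithm follows directly from Λ = λ1λ2/|λ1 − λ2|; the closer the two wavelengths, the longer the unambiguous range. The wavelength pair below is illustrative, not the pair used in the experiment:

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic wavelength Lambda = lam1*lam2/|lam1 - lam2| (same units)."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Hypothetical near-infrared pair in micrometres (not the paper's values):
lam = synthetic_wavelength(1.5, 1.6)   # 1.5*1.6/0.1 = 24 micrometres
```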
Kasabova, Boryana E; Holliday, Trenton W
2015-04-01
A new model for estimating human body surface area and body volume/mass from standard skeletal metrics is presented. This model is then tested against both 1) "independently estimated" body surface areas and "independently estimated" body volume/mass (both derived from anthropometric data) and 2) the cylindrical model of Ruff. The model is found to be more accurate in estimating both body surface area and body volume/mass than the cylindrical model, but it is more accurate in estimating body surface area than body volume/mass (as reflected by the standard error of the estimate when "independently estimated" surface area or volume/mass is regressed on estimates derived from the present model). Two practical applications of the model are tested. In the first test, the relative contribution of the limbs versus the trunk to the body's volume and surface area is compared between "heat-adapted" and "cold-adapted" populations. As expected, the "cold-adapted" group has significantly more of its body surface area and volume in its trunk than does the "heat-adapted" group. In the second test, we evaluate the effect of variation in bi-iliac breadth, elongated or foreshortened limbs, and differences in crural index on the body's surface area to volume ratio (SA:V). Results indicate that the effects of bi-iliac breadth on SA:V are substantial, while those of limb lengths and (especially) the crural index are minor, which suggests that factors other than surface area relative to volume are driving morphological variation and ecogeographical patterning in limb proportions.
The potential of more accurate InSAR covariance matrix estimation for land cover mapping
NASA Astrophysics Data System (ADS)
Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin
2017-04-01
Synthetic aperture radar (SAR) and Interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have investigated SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem, and to further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using an HH-polarization TerraSAR-X dataset and compared with each other using a random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from enhancing the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.
Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge
NASA Astrophysics Data System (ADS)
Jacobsen, R. E.; Burr, D. M.
2016-09-01
Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.
NASA Astrophysics Data System (ADS)
Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray
2016-06-01
Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.
A technique for fast and accurate measurement of hand volumes using Archimedes' principle.
Hughes, S; Lau, J
2008-03-01
A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves the immersion of a hand in a water container placed on an electronic balance. The volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional technique of immersing an object in a container with an overflow spout and collecting and weighing the volume of overflow water. The hand volume of two subjects was measured. Hand volumes were 494 +/- 6 ml and 312 +/- 7 ml for the immersion method and 476 +/- 14 ml and 302 +/- 8 ml for the overflow method for the two subjects, respectively. Using plastic test objects, the mean difference between the actual and measured volume was -0.3% and 2.0% for the immersion and overflow techniques, respectively. This study shows that hand volumes can be measured more quickly with the immersion method than with the overflow method. The technique could find application in clinics where frequent hand volume measurements are required.
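The balance-based immersion measurement reduces to a one-line computation: the immersed hand displaces its volume in water, so the balance reading increases by the weight of the displaced water. The readings and the water density below are hypothetical:

```python
WATER_DENSITY_G_PER_ML = 0.9982   # assumed ~20 degC laboratory temperature

def immersion_volume_ml(reading_before_g, reading_during_g,
                        density=WATER_DENSITY_G_PER_ML):
    """Hand volume (ml) from the change in the balance reading when the
    hand is suspended in the water container (Archimedes' principle)."""
    return (reading_during_g - reading_before_g) / density

v = immersion_volume_ml(2000.0, 2493.1)   # hypothetical balance readings
```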
ERIC Educational Resources Information Center
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
Estimating flood hydrographs and volumes for Alabama streams
Olin, D.A.; Atkins, J.B.
1988-01-01
The hydraulic design of highway drainage structures involves an evaluation of the effect of the proposed highway structures on lives, property, and stream stability. Flood hydrographs and associated flood volumes are useful tools in evaluating these effects. For design purposes, the Alabama Highway Department needs information on flood hydrographs and volumes associated with flood peaks of specific recurrence intervals (design floods) at proposed or existing bridge crossings. This report provides the engineer with a method to estimate flood hydrographs, volumes, and lagtimes for rural and urban streams in Alabama with drainage areas less than 500 sq mi. Existing computer programs and methods to estimate flood hydrographs and volumes for ungaged streams have been developed in Georgia; these computer programs and methods were applied to streams in Alabama. The report gives detailed instructions on how to estimate flood hydrographs for ungaged rural or urban streams in Alabama with drainage areas less than 500 sq mi, without significant in-channel storage or regulation. (USGS)
Magma generation on Mars: Estimated volumes through time
NASA Technical Reports Server (NTRS)
Greeley, Ronald; Schneid, B.
1991-01-01
Images of volcanoes and lava flows, chemical analysis by the Viking landers, and studies of meteorites show that volcanism has played an important role in the evolution of Mars. Photogeologic mapping suggests that half of Mars' surface is covered with volcanic materials. Here, researchers present results from new mappings, including estimates of volcanic deposit thicknesses based on partly buried and buried impact craters using the technique of DeHon. The researchers infer the volumes of possible associated plutonic rocks and derive the volumes of magmas on Mars generated in its post-crustal formation history. Also considered is the amount of juvenile water that might have exsolved from the magma through time.
Satellite sensor estimates of Northern Hemisphere snow volume
NASA Technical Reports Server (NTRS)
Chang, A. T. C.; Foster, J. L.; Hall, D. K.
1990-01-01
In the Northern Hemisphere the mean monthly snow-covered area ranges from about 7 percent of the land area in summer to over 40 percent in winter, thus making snow one of the most rapidly varying natural surface features. The mean monthly snow volume ranges from about 1.5 × 10^16 g in summer to about 3.0 × 10^18 g in winter. Currently several algorithms utilizing passive microwave brightness temperatures are available to estimate snow cover and depth. The algorithm presented here uses the difference between the 37-GHz channel and the 18-GHz channel of the SMMR on the Nimbus-7 satellite to derive estimates of snow volume. Even though satellite sensor snow records are currently too short to reveal trends, continued monitoring over about the next 10 years should make it possible to establish whether incipient or current trends are significant in the context of global climate change.
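A minimal sketch of a brightness-temperature-difference retrieval of this kind: deeper snow scatters more strongly at 37 GHz than at 18 GHz, so depth scales with the channel difference. The 1.59 cm/K coefficient is a commonly cited value for this family of algorithms (the paper's exact coefficient is not stated here), and the grid cell, snow density, and temperatures are hypothetical:

```python
def snow_depth_cm(t18_k, t37_k, coeff_cm_per_k=1.59):
    """Snow depth from the 18 GHz minus 37 GHz brightness-temperature
    difference; 1.59 cm/K is a commonly cited retrieval coefficient,
    used here purely for illustration."""
    return max(coeff_cm_per_k * (t18_k - t37_k), 0.0)

# Hypothetical grid cell: 15 K channel difference over a 25 km x 25 km
# cell, with an assumed snow density of 0.30 g/cm3
depth = snow_depth_cm(240.0, 225.0)
cell_area_cm2 = (25.0e5) ** 2          # 25 km expressed in cm, squared
snow_mass_g = depth * 0.30 * cell_area_cm2
```

Summing such per-cell masses over the hemisphere yields a snow volume (mass) estimate of the kind quoted in the abstract.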
Accurate optical flow field estimation using mechanical properties of soft tissues
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Karimi, Hirad; Samani, Abbas
2009-02-01
A novel optical flow based technique is presented in this paper to measure the nodal displacements of soft tissue undergoing large deformations. In hyperelasticity imaging, soft tissues may be compressed extensively [1], and the deformation may exceed the number of pixels that ordinary optical flow approaches can detect. Furthermore, in most biomedical applications there is a large amount of image information that represents the geometry of the tissue and the number of tissue types present in the organ of interest. Such information is often ignored in applications such as image registration. In this work we incorporate information pertaining to soft tissue mechanical behavior (a Neo-Hookean hyperelastic model is used here), in addition to the tissue geometry before compression, into a hierarchical Horn-Schunck optical flow method to overcome this weakness in detecting large deformations. Applying the proposed method to a phantom at several compression levels showed that it yields reasonably accurate displacement fields. Estimated displacement results of this phantom study, obtained for displacement fields of 85 pixels/frame and 127 pixels/frame, are reported and discussed in this paper.
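The hierarchical method builds on the classic Horn-Schunck iteration, which alternates between a data term (brightness constancy) and a smoothness term (neighbour averaging of the flow). A minimal, single-scale sketch of that base iteration, without the hierarchy or the tissue-model regularization the paper adds, is:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Classic Horn-Schunck optical flow (single scale). Returns the
    horizontal (u) and vertical (v) displacement fields."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Iy, Ix = np.gradient(im1)        # spatial image derivatives
    It = im2 - im1                   # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def neighbour_avg(f):            # 4-neighbour average (flow smoothing)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = neighbour_avg(u), neighbour_avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# A bright square translated one pixel to the right between frames
im1 = np.zeros((32, 32))
im1[12:20, 12:20] = 1.0
im2 = np.roll(im1, 1, axis=1)
u, v = horn_schunck(im1, im2)
```

Because the linearized data term only "sees" motion within roughly a pixel, large deformations defeat this single-scale form, which is exactly the weakness the paper's hierarchical, mechanics-informed variant addresses.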
Bearing Estimation Uncertainties for the Volume Search Sonar
2009-08-31
Final technical report (covering 2003-Aug. 2008, dated 31/08/2009): Acoustic Signal Processing for Ocean Mapping Applications with the Volume Search Sonar (VSS). The report notes that the VSS data show potential for detecting variations in sediment type in much the same manner that a sidescan sonar detects these variations. Cited references include a paper on a bathymetric sidescan sonar system (IEEE Journal of Oceanic Engineering, 17(3), 239-251, 1992) and R. O. Nielsen on the accuracy of angle estimation.
Uncertainties in peat volume and soil carbon estimated using ground penetrating radar and probing
Parsekian, Andrew D.; Slater, Lee; Ntarlagiannis, Dimitrios; Nolan, James; Sebestyen, Stephen D; Kolka, Randall K; Hanson, Paul J
2012-01-01
We evaluate the uncertainty in calculations of peat basin volume using high-resolution data that resolve the three-dimensional structure of a peat basin, using both direct (push probes) and indirect geophysical (ground penetrating radar) measurements. We compared volumetric estimates from both approaches with values from the literature. We identified subsurface features that can introduce uncertainties into direct peat thickness measurements, including the presence of woody peat and soft clay or gyttja. We demonstrate that a simple geophysical technique that is easily scalable to larger peatlands can be used to rapidly and cost-effectively obtain more accurate and less uncertain estimates of peat basin volumes, which are critical to improving understanding of the total terrestrial carbon pool in peatlands.
Estimation of liquid volume fraction using ultrasound transit time spectroscopy
NASA Astrophysics Data System (ADS)
Al-Qahtani, Saeed M.; Langton, Christian M.
2016-12-01
It has recently been proposed that the propagation of an ultrasound wave through complex structures, consisting of two materials of differing ultrasound velocity, may be considered as an array of parallel 'sonic rays', the transit time of each determined by the relative proportion of the two materials: a minimum (tmin) for a ray entirely in the higher-velocity material, and a maximum (tmax) for a ray entirely in the lower-velocity material. An ultrasound transit time spectrum (UTTS) describes the proportion of sonic rays at each transit time. It has previously been demonstrated that the solid volume fraction of a solid:liquid composite, specifically acrylic step-wedges immersed in water, may be reliably estimated from the UTTS. The aim of this research was to investigate the hypothesis that the volume fraction of a two-component liquid mixture, of unequal ultrasound velocity, may also be estimated by UTTS. A through-transmission technique incorporating two 1 MHz ultrasound transducers within a horizontally aligned cylindrical tube-housing was utilised, the proportion of silicone oil to water being varied from 0% to 100%. The liquid volume fraction was estimated from the UTTS at each composition, the coefficient of determination (R²) being 98.9 ± 0.7%. The analysis incorporated a novel signal amplitude normalisation technique to compensate for absorption within the silicone oil. It is therefore envisaged that the parallel sonic ray concept and the derived UTTS may be further applied to the quantification of liquid mixture composition assessment.
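Under the parallel-sonic-ray assumption, a ray's transit time mixes linearly between tmin and tmax, so the volume fraction of the slower component follows by simple inversion. The path length and sound velocities below are illustrative, not the experimental values:

```python
def liquid_volume_fraction(t, t_min, t_max):
    """Fraction of the slower-velocity liquid along a sonic ray, assuming
    the transit time mixes linearly between t_min (ray entirely in the
    faster liquid) and t_max (ray entirely in the slower liquid)."""
    return (t - t_min) / (t_max - t_min)

# Hypothetical 50 mm path: water (~1480 m/s) vs. silicone oil (~1000 m/s)
path_m = 0.050
t_min = path_m / 1480.0   # transit time through water only
t_max = path_m / 1000.0   # transit time through oil only

# A ray with a transit time halfway between the extremes is half oil:
f = liquid_volume_fraction(0.5 * (t_min + t_max), t_min, t_max)
```

Integrating this per-ray fraction over the whole transit time spectrum gives the bulk volume fraction of the mixture.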
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are among the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published, field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24 log(fH) + 0.0237t - 0.0157 log(fH)·t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained.
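Prediction equation (1) can be applied directly once fH and t are known. The sketch below assumes base-10 logarithms and uses illustrative input values; fH and t carry the units defined in the original study:

```python
import math

def predict_vo2(f_h, t):
    """Prediction equation (1) from the study:
    log(VO2) = -0.279 + 1.24*log(fH) + 0.0237*t - 0.0157*log(fH)*t
    Base-10 logarithms are assumed; fH and t are in the units defined
    in the original study."""
    log_vo2 = (-0.279 + 1.24 * math.log10(f_h)
               + 0.0237 * t - 0.0157 * math.log10(f_h) * t)
    return 10.0 ** log_vo2

vo2 = predict_vo2(100.0, 0.0)   # illustrative inputs only
```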
Duan, Xinhui; Wang, Jia; Qu, Mingliang; Leng, Shuai; Liu, Yu; Krambeck, Amy; McCollough, Cynthia
2014-01-01
Purpose We propose a method to improve the accuracy of volume estimation of kidney stones from computerized tomography images. Materials and Methods The proposed method consisted of 2 steps. A threshold equal to the average of the computerized tomography number of the object and the background was first applied to determine full width at half maximum volume. Correction factors were then applied, which were precalculated based on a model of a sphere and a 3-dimensional Gaussian point spread function. The point spread function was measured in a computerized tomography scanner to represent the response of the scanner to a point-like object. Method accuracy was validated using 6 small cylindrical phantoms with 2 volumes of 21.87 and 99.9 mm3, and 3 attenuations, respectively, and 76 kidney stones with a volume range of 6.3 to 317.4 mm3. Volumes estimated by the proposed method were compared with full width at half maximum volumes. Results The proposed method was significantly more accurate than full width at half maximum volume (p <0.0001). The magnitude of improvement depended on stone volume with smaller stones benefiting more from the method. For kidney stones 10 to 20 mm3 in volume the average improvement in accuracy was the greatest at 19.6%. Conclusions The proposed method achieved significantly improved accuracy compared with threshold methods. This may lead to more accurate stone management. PMID:22819107
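The first step of the proposed method, the full-width-at-half-maximum volume, can be sketched on a synthetic stone as below. The second step, the precalculated correction factors based on the measured point spread function, is omitted, and the image and voxel size are hypothetical:

```python
import numpy as np

def fwhm_volume_mm3(img, voxel_mm3, ct_background):
    """Full-width-at-half-maximum volume: threshold halfway between the
    object's peak CT number and the background, then count the voxels
    above it. (The paper then applies PSF-based correction factors,
    omitted in this sketch.)"""
    threshold = 0.5 * (img.max() + ct_background)
    return np.count_nonzero(img > threshold) * voxel_mm3

# Synthetic "stone": a 5x5x5-voxel bright cube in a zero background
img = np.zeros((20, 20, 20))
img[5:10, 5:10, 5:10] = 800.0     # stone CT number in HU
v = fwhm_volume_mm3(img, voxel_mm3=0.25, ct_background=0.0)
```

On real scans the stone's edges are blurred by the scanner's point spread function, which is why the FWHM count alone is biased for small stones and the correction step matters most there.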
NASA Astrophysics Data System (ADS)
Mittag, Anja; Lenz, Dominik; Smith, Paul J.; Pach, Susanne; Tarnok, Attila
2005-04-01
Aim: In patients, e.g. with congenital heart diseases, a differential blood count is needed for diagnosis. To this end by standard automatic analyzers 500 μl of blood is required from the patients. In case of newborns and infants this is a substantial volume, especially after operations associated with blood loss. Therefore, aim of this study was to develop a method to determine a differential blood picture with a substantially reduced specimen volume. Methods: To generate a differential blood picture 10 μl EDTA blood were mixed with 10 μl of a DRAQ5 solution (500μM, Biostatus) and 10 μl of an antibody mixture (CD45-FITC, CD14-PE, diluted with PBS). 20 μl of this cell suspension was filled into a Neubauer counting chamber. Due to the defined volume of the chamber it is possible to determine the cell count per volume. The trigger for leukocyte counting was set on DRAQ5 signal in order to be able to distinguish nucleated white blood cells from erythrocytes. Different leukocyte subsets could be distinguished due to the used fluorescence labeled antibodies. For erythrocyte counting cell suspension was diluted another 150 times. 20 μl of this dilution was analyzed in a microchamber by LSC with trigger set on forward scatter signal. Results: This method allows a substantial decrease of blood sample volume for generation of a differential blood picture (10 μl instead of 500μl). There was a high correlation between our method and the results of routine laboratory (r2=0.96, p<0.0001 n=40). For all parameters intra-assay variance was less than 7 %. Conclusions: In patients with low blood volume such as neonates and in critically ill infants every effort has to be taken to reduce the blood volume needed for diagnostics. With this method only 2% of standard sample volume is needed to generate a differential blood picture. Costs are below that of routine laboratory. We suggest this method to be established in paediatric cardiology for routine diagnostics and for
Estimation of the standard molal heat capacities, entropies and volumes of 2:1 clay minerals
NASA Astrophysics Data System (ADS)
Ransom, Barbara; Helgeson, Harold C.
1994-11-01
The dearth of accurate values of the thermodynamic properties of 2:1 clay minerals severely hampers interpretation of their phase relations, the design of critical laboratory experiments, and geologically realistic computer calculations of mass transfer in weathering, diagenetic and hydrothermal systems. Algorithms and strategies are described below for estimating to within 2% the standard molal heat capacities, entropies, and volumes of illites, smectites and other 2:1 clay minerals. These techniques can also be used to estimate standard molal thermodynamic properties of fictive endmembers of clay mineral solid solutions. Because 2:1 clay minerals like smectite and vermiculite are always hydrated to some extent in nature, the contribution of interlayer H2O to their thermodynamic properties is considered explicitly in the estimation of the standard molal heat capacities, entropies, and volumes of these minerals. Owing to the lack of accurate calorimetric data from which reliable values of the standard molal heat capacity and entropy of interlayer H2O can be retrieved, these properties were taken in a first approximation to be equal to those of zeolitic H2O in analcite. The resulting thermodynamic contributions per mole of interlayer H2O to the standard molal heat capacity, entropy, and volume of hydrous clay minerals at 1 bar and 25°C are 11.46 cal mol-1 K-1, 13.15 cal mol-1 K-1, and 17.22 cm3 mol-1, respectively. Estimated standard molal heat capacities, entropies and volumes are given for a suite of smectites and illites commonly used in models of clay mineral and shale diagenesis.
Bioimpedance spectroscopy for the estimation of body fluid volumes in mice.
Chapman, M E; Hu, L; Plato, C F; Kohan, D E
2010-07-01
Conventional indicator dilution techniques for measuring body fluid volume are laborious, expensive, and highly invasive. Bioimpedance spectroscopy (BIS) may be a useful alternative due to being rapid, minimally invasive, and allowing repeated measurements. BIS has not been reported in mice; hence we examined how well BIS estimates body fluid volume in mice. Using C57/Bl6 mice, the BIS system demonstrated <5% intermouse variation in total body water (TBW) and extracellular (ECFV) and intracellular fluid volume (ICFV) between animals of similar body weight. TBW, ECFV, and ICFV differed between heavier male and lighter female mice; however, the ratio of TBW, ECFV, and ICFV to body weight did not differ between mice and corresponded closely to values in the literature. Furthermore, repeat measurements over 1 wk demonstrated <5% intramouse variation. Default resistance coefficients used by the BIS system, defined for rats, produced body composition values for TBW that exceeded body weight in mice. Therefore, body composition was measured in mice using a range of resistance coefficients. Resistance values at 10% of those defined for rats provided TBW, ECFV, and ICFV ratios to body weight that were similar to those obtained by conventional isotope dilution. Further evaluation of the sensitivity of the BIS system was determined by its ability to detect volume changes after saline infusion; saline provided the predicted changes in compartmental fluid volumes. In summary, BIS is a noninvasive and accurate method for the estimation of body composition in mice. The ability to perform serial measurements will be a useful tool for future studies.
Araki, Tadashi; Banchhor, Sumit K; Londhe, Narendra D; Ikeda, Nobutaka; Radeva, Petia; Shukla, Devarshi; Saba, Luca; Balestrieri, Antonella; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Suri, Jasjit S
2016-03-01
Quantitative assessment of calcified atherosclerotic volume within the coronary artery wall is vital for cardiac interventional procedures. The goal of this study is to automatically measure the calcium volume, given the borders of the coronary vessel wall, for all the frames of an intravascular ultrasound (IVUS) video. Three soft-computing fuzzy classification techniques were adapted, namely Fuzzy c-Means (FCM), K-means, and Hidden Markov Random Field (HMRF), for automated segmentation of calcium regions and volume computation. These methods were benchmarked against a previously developed threshold-based method. IVUS image data sets (around 30,600 IVUS frames) from 15 patients were collected using a 40 MHz IVUS catheter (Atlantis® SR Pro, Boston Scientific®, pullback speed of 0.5 mm/s). Calcium mean volumes for FCM, K-means, HMRF and the threshold-based method were 37.84 ± 17.38 mm(3), 27.79 ± 10.94 mm(3), 46.44 ± 19.13 mm(3) and 35.92 ± 16.44 mm(3), respectively. Cross-correlation, Jaccard index and Dice similarity were highest between FCM and the threshold-based method: 0.99, 0.92 ± 0.02 and 0.95 ± 0.02, respectively. Student's t-test, z-test and Wilcoxon test were also performed to demonstrate the consistency, reliability and accuracy of the results. Given the vessel wall region, the system reliably and automatically measures the calcium volume in IVUS videos. Further, we validated our system against a trained expert using scoring: K-means showed the best performance with an accuracy of 92.80%. Our procedure and protocol are in line with the method previously published clinically.
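The Jaccard index and Dice similarity used above to compare segmentations are standard set-overlap measures; a sketch on hypothetical binary masks:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index: |A intersect B| / |A union B| for boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice(a, b):
    """Dice similarity: 2|A intersect B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical calcium masks: two 4x4 squares overlapping in a 3x3 region
a = np.zeros((10, 10), dtype=bool)
a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool)
b[3:7, 3:7] = True
j, d = jaccard(a, b), dice(a, b)
```

Both measures equal 1 for identical masks and 0 for disjoint ones; Dice weights the overlap more generously than Jaccard, which is why the reported Dice values sit slightly above the Jaccard values.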
Are tidal volume measurements in neonatal pressure-controlled ventilation accurate?
Chow, Lily C; Vanderhal, Andre; Raber, Jorge; Sola, Augusto
2002-09-01
Bedside pulmonary mechanics monitors (PMM) have become useful in ventilatory management in neonates. These monitors are used more frequently due to recent improvements in data-processing capabilities. PMM devices are often part of the ventilator or are separate units. The accuracy and reliability of these systems have not been carefully evaluated. We compared a single ventilatory parameter, tidal volume (V(t)), as measured by several systems. We looked at two freestanding PMMs: the Ventrak Respiratory Monitoring System (Novametrix, Wallingford, CT) and the Bicore CP-100 Neonatal Pulmonary Monitor (Allied Health Care Products, Riverside, CA), and three ventilators with built-in PMM: the VIP Bird Ventilator (Bird Products Corp., Palm Springs, CA), Siemens Servo 300A (Siemens-Elema AB, Solna, Sweden), and Drager Babylog 8000 (Drager, Inc., Chantilly, VA). A calibrated syringe (Hans Rudolph, Inc., Kansas City, MO) was used to deliver tidal volumes of 4, 10, and 20 mL to each ventilator system coupled with a freestanding PMM. After achieving steady state, six consecutive V(t) readings were taken simultaneously from the freestanding PMM and each ventilator. In a second portion of the bench study, we used pressure-control ventilation and measured exhaled tidal volume (V(te)) while ventilating a Bear Test Lung with the same three ventilators. We adjusted peak inspiratory pressure (PIP) under controlled conditions to achieve the three different targeted tidal volumes on the paired freestanding PMM. Again, six V(te) measurements were recorded for each tidal volume. Means and standard deviations were calculated. The percentage difference in measurement of V(t) delivered by calibrated syringe varied greatly, with the greatest discrepancy seen in the smallest tidal volumes, by up to 28%. In pressure control mode, V(te) as measured by the Siemens was significantly overestimated by 20-95%, with the biggest discrepancy at the smallest V(te), particularly when paired with the Bicore
Unsupervised partial volume estimation using 3D and statistical priors
NASA Astrophysics Data System (ADS)
Tardif, Pierre M.
2001-07-01
Our main objective is to compute the volume of interest in images from magnetic resonance imaging (MRI). We suggest a method based on maximum a posteriori (MAP) estimation. Using texture models, we propose a new partial volume determination. We model tissues using generalized Gaussian distributions fitted from a mixture of their gray levels and texture information. Texture information relies on estimation errors from multiresolution and multispectral autoregressive models. A uniform distribution handles large estimation errors when dealing with unknown tissues. An initial segmentation, needed by the multiresolution segmentation deterministic relaxation algorithm, is found using an anatomical atlas. To model the a priori information, we use a full 3-D extension of Markov random fields. Our 3-D extension is straightforward, easily implemented, and includes single-label probability. Using the initial segmentation map and initial tissue models, iterative updates are made on the segmentation map and tissue models. Updating the tissue models removes field inhomogeneities. Partial volumes are computed from the final segmentation map and tissue models. Preliminary results are encouraging.
Simple and accurate empirical absolute volume calibration of a multi-sensor fringe projection system
NASA Astrophysics Data System (ADS)
Gdeisat, Munther; Qudeisat, Mohammad; AlSa`d, Mohammed; Burton, David; Lilley, Francis; Ammous, Marwan M. M.
2016-05-01
This paper suggests a novel absolute empirical calibration method for a multi-sensor fringe projection system. The optical setup of the projector-camera sensor can be arbitrary. The term absolute calibration here means that the centre of the three-dimensional coordinates in the resultant calibrated volume coincides with a preset centre of the three-dimensional real-world coordinate system. The use of a zero-phase fringe marking spot is proposed to increase depth calibration accuracy, where the spot centre is determined with sub-pixel accuracy. Also, a new method is proposed for transversal calibration. The depth and transversal calibration methods have been tested using both single-sensor and three-sensor fringe projection systems. The standard deviation of the error produced by this system is 0.25 mm. The calibrated volume produced by this method is 400 mm×400 mm×140 mm.
Can virtual simulation of breast tangential portals accurately predict lung and heart volumes?
Cooke, Stacey; Rattray, Greg
2003-03-01
A treatment portal or simulator image has traditionally been used to demonstrate the lung and heart coverage of the breast tangential portal. In many cases, these images were acquired as a planning session on the linear accelerator. The patients were also CT scanned to assess the lung/heart volume and to determine the surgical site depth for the electron-boost energy. A study using 50 consecutive patients was performed comparing the digitally reconstructed radiograph (DRR) from the virtual simulation with treatment portal images. Modification to the patient's arm position is required when performing the planning CT scans due to the aperture size of the CT scanner. Virtual simulation was used to assess the potential variation of lung and heart measurements. The average difference in lung volume between the DRR and portal image was less than 2 mm, with a range of 0-5 mm. Arm position did not have a significant impact on field deviation; however, great care was taken to minimize any changes in arm position. The modification of the arm position for CT scanning did not lead to significant variations between the DRRs and portal images. The Advantage Sim software has proven capable of producing good quality DRR images, providing a realistic representation of the lung and heart volume included in the treatment portal.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
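The errors-in-variables estimator highlighted above can be illustrated with Deming regression, which requires an assumed ratio of the two error variances; the function name and data below are illustrative sketches, not taken from the study.

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Errors-in-variables (Deming) regression slope.

    delta is the assumed ratio of error variances var(err_y) / var(err_x);
    as the abstract cautions, the estimate is sensitive to misspecifying it.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - delta * sxx
    return (d + np.sqrt(d ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
```

For noise-free data on the line y = 2x, the estimator recovers a slope of 2 for any positive delta; with noisy data, a wrong delta biases the slope, which is the caution the abstract raises.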
Khademi, April; Venetsanopoulos, Anastasios; Moody, Alan R.
2014-01-01
Abstract. An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real, pathology-free T1 MRI (Gaussian noise), as well as pathological fluid attenuation inversion recovery MRI (non-Gaussian noise), demonstrate that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlight the benefits of the current approach. PMID:26158022
Gas Flaring Volume Estimates with Multiple Satellite Observations
NASA Astrophysics Data System (ADS)
Ziskin, D. C.; Elvidge, C.; Baugh, K.; Ghosh, T.; Hsu, F. C.
2010-12-01
Flammable gases (primarily methane) are a common by-product associated with oil wells. Where there is no infrastructure to use the gas or bring it to market, the gases are typically flared off. This practice is more common at remote sites, such as an offshore drilling platform. The Defense Meteorological Satellite Program (DMSP) is a series of satellites with a low-light imager called the Operational Linescan System (OLS). The OLS, which detects the flares at night, has been a valuable tool in the estimation of flared gas volume [Elvidge et al., 2009]. The Moderate Resolution Imaging Spectroradiometer (MODIS) fire product has been processed to create products suitable for an independent estimate of gas flaring on land. We are presenting the MODIS flare product, the results of our MODIS gas flare volume analysis, and an independent validation of the published DMSP estimates. Elvidge, C. D., Ziskin, D., Baugh, K. E., Tuttle, B. T., Ghosh, T., Pack, D. W., Erwin, E. H., Zhizhin, M., 2009, "A Fifteen Year Record of Global Natural Gas Flaring Derived from Satellite Data", Energies, 2 (3), 595-622
Volume estimation of multi-density nodules with thoracic CT
NASA Astrophysics Data System (ADS)
Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas
2014-03-01
The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multi-density nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10 mm, 10/5 mm) and densities (100 HU/-630 HU, 10 HU/-630 HU, -630 HU/100 HU, -630 HU/-10 HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector-row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200 mAs, slice collimations of 16x0.75 mm and 16x1.5 mm, and a pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro-CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.
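The two summary metrics used in this abstract, percent bias (PB) and standard deviation of percent error (SPE), can be sketched as follows; the repeat-scan numbers at the end are hypothetical, not values from the study.

```python
import numpy as np

def percent_error(estimated, true_volume):
    """Per-scan percent error relative to the reference (micro-CT) volume."""
    return 100.0 * (np.asarray(estimated, dtype=float) - true_volume) / true_volume

def percent_bias(estimated, true_volume):
    """PB: mean percent error across repeat scans."""
    return float(percent_error(estimated, true_volume).mean())

def spe(estimated, true_volume):
    """SPE: sample standard deviation of percent error across repeat scans."""
    return float(percent_error(estimated, true_volume).std(ddof=1))

# Hypothetical estimates (mm^3) from ten repeat scans of a core whose
# true (micro-CT) volume is 523.6 mm^3.
repeats = [510.2, 498.7, 531.4, 505.0, 522.9, 515.3, 500.8, 528.1, 512.6, 507.4]
pb, s = percent_bias(repeats, 523.6), spe(repeats, 523.6)
```

PB captures systematic over- or under-estimation across the ten repeat scans, while SPE captures measurement repeatability, matching how the abstract reports its ranges.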
NASA Astrophysics Data System (ADS)
Horwath, Martin; van den Broeke, Michiel R.; Lenaerts, Jan T. M.; Ligtenberg, Stefan R. M.; Legrésy, Benoît; Blarel, Fabien
2013-04-01
Knowing the interannual variations in the Antarctic ice sheet net snow accumulation, or surface mass balance (SMB), is essential for analyzing and interpreting present-day observations. For example, accumulation events like the one in East Antarctica in 2009 (Shepherd et al. 2012, Science, doi: 10.1126/science.1228102) challenge our ability to interpret observed decadal-scale trends in terms of long-term changes versus natural fluctuations. SMB variations cause changes in the firn density structure, which need to be accounted for when converting volume trends from satellite altimetry into mass trends. Recent assessments of SMB and firn volume variations mainly rely on atmospheric modeling and firn densification modeling (FDM). The modeling results need observational validation, which has so far been limited. Geodetic observations by satellite altimetry and satellite gravimetry reflect interannual firn volume and mass changes, among other signals like changes in ice flow dynamics. Therefore, these observations provide a means of validating modeling results over the observational period. We present comprehensive comparisons between interannual volume variations from ENVISAT radar altimetry (RA) and firn densification modeling (FDM), and between interannual mass variations from SMB modeling by the regional atmospheric climate model RACMO2 and GRACE satellite gravimetry. The comparisons are performed based on time series with approximately monthly sampling and with the overlapping period from 2002 to 2010. The RA-FDM comparison spans the spatial scales from 27 km to the continental scale. The mass comparison refers to the regional (drainage basin) and continental scale. Overall, we find good agreement between the interannual variations described by the models and by the geodetic observations. This agreement proves our ability to track and understand SMB-related ice sheet variations from year to year. The assessment of differences between modeling and observations
NASA Astrophysics Data System (ADS)
Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.
2013-12-01
layer results based on 'manual' sampling. The closest flux approximation was obtained using the river width-dependent model. The higher fluxes obtained by the chambers could partially be explained by an enhanced turbulence created in the chambers themselves, especially because the ratio between the water surface area and chamber volume was rather small. The high resolution combined sampling approach helped constrain K and determine which river model best fits Aare River emissions. This experimental setup ultimately allows us to (1) define the dependence of K, (2) measure CH4 and CO2 fluxes from the main river and different tributaries more accurately, (3) estimate more spatially-resolved fluxes via either models or water sampling and the newly found K, and (4) determine one of the fates of carbon in the Aare River.
NASA Astrophysics Data System (ADS)
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere
with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular food template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
Children Can Accurately Monitor and Control Their Number-Line Estimation Performance
ERIC Educational Resources Information Center
Wall, Jenna L.; Thompson, Clarissa A.; Dunlosky, John; Merriman, William E.
2016-01-01
Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability…
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
Technology Transfer Automated Retrieval System (TEKTRAN)
Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently needs to be performed either using two fluorescent dyes, thus is more costly and time-consuming, or using only one fluorescent dye, and thus less accurate. Methods and Results: A red primary ...
Tumor Volume Estimation and Quasi-Continuous Administration for Most Effective Bevacizumab Therapy
Sápi, Johanna; Kovács, Levente; Drexler, Dániel András; Kocsis, Pál; Gajári, Dávid; Sápi, Zoltán
2015-01-01
Background: Bevacizumab is an exogenous inhibitor which inhibits the biological activity of human VEGF. Several studies have investigated the effectiveness of bevacizumab therapy according to different cancer types, but these days there is an intense debate on its utility. We have investigated different methods to find the best tumor volume estimation, since it creates the possibility for precise and effective drug administration with a much lower dose than in the protocol. Materials and Methods: We have examined C38 mouse colon adenocarcinoma and HT-29 human colorectal adenocarcinoma. In both cases, three groups were compared in the experiments. The first group did not receive therapy, the second group received one 200 μg bevacizumab dose for a treatment period (protocol-based therapy), and the third group received 1.1 μg bevacizumab every day (quasi-continuous therapy). Tumor volume measurement was performed by digital caliper and small animal MRI. The mathematical relationship between MRI-measured tumor volume and mass was investigated to estimate accurate tumor volume using caliper-measured data. A two-dimensional mathematical model was applied for tumor volume evaluation, and tumor- and therapy-specific constants were calculated for the three different groups. The effectiveness of bevacizumab administration was examined by statistical analysis. Results: In the case of C38 adenocarcinoma, protocol-based treatment did not result in significantly smaller tumor volume compared to the no treatment group; however, there was a significant difference between untreated mice and mice who received quasi-continuous therapy (p = 0.002). In the case of HT-29 adenocarcinoma, the daily treatment with one-twelfth total dose resulted in significantly smaller tumors than the protocol-based treatment (p = 0.038). When the tumor has a symmetrical, solid closed shape (typically without treatment), volume can be evaluated accurately from caliper-measured data with the applied two
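The caliper-based volume evaluation described above can be illustrated with a common two-axis ellipsoid model; the study fits its own tumor- and therapy-specific constants, which are not reproduced here, so the formula below is a generic stand-in rather than the paper's exact model.

```python
import math

def tumor_volume_caliper(length_mm, width_mm):
    """Approximate tumor volume from two caliper axes: V = (pi / 6) * L * W**2.

    A widely used two-measurement model that assumes the tumor is an
    ellipsoid whose depth equals its width; NOT necessarily the exact
    two-dimensional model fitted in the study.
    """
    return (math.pi / 6.0) * length_mm * width_mm ** 2
```

As the abstract notes, such formulas work best when the tumor keeps a symmetrical, solid closed shape, which is why the untreated group is easiest to evaluate from caliper data alone.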
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2016-06-01
The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize the hydraulic actuators, the performances of the linear/linearization based techniques for the state estimation are strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted and comparisons with the largely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE based technique for applications characterized by not negligible nonlinearities such as dead zone and frictions.
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Vonderhaar, T. H.; Johnson, L. R.; Laybe, P.; Reinke, D.
1987-01-01
The analysis of 18 convective clusters demonstrates that the extension of the Area-Time-Integral (ATI) technique to the use of satellite data is possible. The differences of the internal structures of the radar reflectivity features, and of the satellite features, give rise to differences in estimating rain volumes by delineating area; however, by focusing upon the area integrated over the lifetime of the storm, it is suggested that some of the errors produced by the differences in the cloud geometries as viewed by radar or satellite are minimized. The results are good and future developments should consider data from different climatic regions and should allow for implementation of the technique in a general circulation model.
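The ATI technique described above reduces rain-volume estimation to a single empirical coefficient times the integral of echo (or cloud) area over the storm lifetime. A minimal sketch, with an illustrative coefficient rather than one taken from the paper:

```python
import numpy as np

def rain_volume_ati(areas_km2, dt_hours, k_mm=2.0):
    """Area-Time-Integral rain volume: V ~= k * sum_i(A_i * dt).

    areas_km2: echo or cloud area exceeding a threshold at each scan time.
    dt_hours: time interval between successive scans.
    k_mm: empirical mean rain depth per unit ATI (illustrative value only).
    Returns volume in mm * km^2 (1 mm * km^2 = 1000 m^3).
    """
    ati = float(np.sum(np.asarray(areas_km2, dtype=float) * dt_hours))
    return k_mm * ati
```

Because the estimate depends only on the lifetime-integrated area, differences in instantaneous cloud geometry between radar and satellite views tend to average out, which is the point the abstract makes.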
Volume estimation of brain abnormalities in MRI data
NASA Astrophysics Data System (ADS)
Suprijadi, Pratama, S. H.; Haryanto, F.
2014-02-01
The abnormality of brain tissue always becomes a crucial issue in the medical field. This medical condition can be recognized through segmentation of certain regions from medical images obtained from an MRI dataset. Image processing is one of the computational methods that is very helpful for analyzing MRI data. In this study, a combination of segmentation and image rendering was used to isolate tumor and stroke. Two methods of thresholding were employed to segment the abnormality occurrence, followed by filtering to reduce non-abnormality areas. Each MRI image is labeled and then used for volume estimation of the tumor and stroke-attacked area. The algorithms are shown to be successful in isolating tumor and stroke in MRI images, based on the thresholding parameters, with stated detection accuracy.
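The threshold-and-count approach described above can be sketched as follows; the threshold values, minimum-region size, and voxel size are placeholders, and the paper's filtering step is reduced to a crude total-count check rather than true connected-component analysis.

```python
import numpy as np

def estimate_abnormality_volume(mri, lower, upper, voxel_mm3, min_voxels=50):
    """Segment voxels with intensity in [lower, upper] (double thresholding)
    and estimate the lesion volume as voxel count * voxel volume.

    min_voxels crudely suppresses small non-abnormality detections; a real
    pipeline would filter connected components instead.
    """
    mask = (mri >= lower) & (mri <= upper)
    n = int(mask.sum())
    return 0.0 if n < min_voxels else n * voxel_mm3
```

With the segmented mask in hand, rendering the labeled region slice by slice gives the isolated tumor or stroke visualization the abstract describes.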
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
The feasibility of rain volume estimation over fixed and floating areas was investigated using rapid scan satellite data following a technique recently developed with radar data, called the Area Time Integral (ATI) technique. The radar and rapid scan GOES satellite data were collected during the Cooperative Convective Precipitation Experiment (CCOPE) and the North Dakota Cloud Modification Project (NDCMP). Six multicell clusters and cells have been analyzed to date. A two-cycle oscillation emphasizing the multicell character of the clusters is demonstrated. Three clusters were selected on each day, 12 June and 2 July. The 12 June clusters occurred during the daytime, while the 2 July clusters occurred during the nighttime. A total of 86 time steps of radar and 79 time steps of satellite images were analyzed. There were approximately 12-min time intervals between radar scans on average.
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
An investigation of the feasibility of rain volume estimation using satellite data, following a technique recently developed with radar data called the Area Time Integral, was undertaken. Case studies were selected on the basis of existing radar and satellite data sets which match in space and time. Four multicell clusters were analyzed. Routines for navigation, remapping and smoothing of satellite images were performed. Visible counts were normalized for solar zenith angle. A radar sector of interest was defined to delineate specific radar echo clusters for each radar time throughout the radar echo cluster lifetime. A satellite sector of interest was defined by applying small adjustments to the radar sector using a manual processing technique. The radar echo area, the IR maximum counts and the IR counts matching radar echo areas were found to evolve similarly, except for the decaying phase of the cluster, where the cirrus debris keeps the IR counts high.
Accurate liability estimation improves power in ascertained case-control studies.
Weissbrod, Omer; Lippert, Christoph; Geiger, Dan; Heckerman, David
2015-04-01
Linear mixed models (LMMs) have emerged as the method of choice for confounded genome-wide association studies. However, the performance of LMMs in nonrandomly ascertained case-control studies deteriorates with increasing sample size. We propose a framework called LEAP (liability estimator as a phenotype; https://github.com/omerwe/LEAP) that tests for association with estimated latent values corresponding to severity of phenotype, and we demonstrate that this can lead to a substantial power increase.
Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points
Zhang, Zimiao; Zhang, Shihai; Li, Qiu
2016-01-01
Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve the object pose. The analytical solutions generally take less computation time. However, the analytical solutions are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates. However, the non-linear optimization needs a good initial estimate of the true solution; otherwise, it is more time consuming than the analytical solutions. Moreover, the image processing error grows rapidly as the measurement range increases, which leads to pose estimation errors. All the reasons mentioned above cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. Firstly, the coordinates of the feature points are determined according to the linear constraints formed by the four points. The initial coordinates of the feature points acquired through the linear method are then optimized through an iterative method. Finally, the coordinate system of object motion is established and a method is introduced to solve the object pose. Since the growing image processing error causes pose estimation errors as the measurement range increases, the established coordinate system can be used to decrease these errors. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is required to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media; therefore, the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed time all demonstrate the validity of the proposed DDM.
Rapid surface-water volume estimations in beaver ponds
NASA Astrophysics Data System (ADS)
Karran, Daniel J.; Westbrook, Cherie J.; Wheaton, Joseph M.; Johnston, Carol A.; Bedard-Haughn, Angela
2017-02-01
Beaver ponds are surface-water features that are transient through space and time. Such qualities complicate the inclusion of beaver ponds in local and regional water balances, and in hydrological models, as reliable estimates of surface-water storage are difficult to acquire without time- and labour-intensive topographic surveys. A simpler approach to overcome this challenge is needed, given the abundance of beaver ponds in North America, Eurasia, and southern South America. We investigated whether simple morphometric characteristics derived from readily available aerial imagery or quickly measured field attributes of beaver ponds can be used to approximate surface-water storage among the range of environmental settings in which beaver ponds are found. A total of 40 beaver ponds from four different sites in North and South America were studied. The simplified volume-area-depth (V-A-h) approach, originally developed for prairie potholes, was tested. With only two measurements of pond depth and corresponding surface area, this method estimated surface-water storage in beaver ponds within 5 % on average. Beaver pond morphometry was characterized by a median basin coefficient of 0.91, and dam length and pond surface area were strongly correlated with beaver pond storage capacity, regardless of geographic setting. These attributes provide a means for coarsely estimating surface-water storage capacity in beaver ponds. Overall, this research demonstrates that reliable estimates of surface-water storage in beaver ponds require only simple measurements derived from aerial imagery and/or brief visits to the field. Future research efforts should be directed at incorporating these simple methods into both broader beaver-related tools and catchment-scale hydrological models.
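The two-measurement calculation can be sketched as follows. This assumes the power-law basin profile of the V-A-h method for prairie potholes, A(h) = s·h^(2/p) with a unit reference depth; function names and the example numbers are illustrative, not taken from the study.

```python
import math

def vah_volume(a1, h1, a2, h2, h_full):
    """Pond volume from two (surface area, depth) pairs, assuming the
    power-law basin profile A(h) = s * h**(2/p) of the V-A-h method
    (areas in m^2, depths in m -> volume in m^3)."""
    two_over_p = math.log(a1 / a2) / math.log(h1 / h2)   # basin exponent 2/p
    s = a1 / h1 ** two_over_p                             # scale coefficient
    # V(h_full) = integral of A(h) dh from 0 to h_full
    return s * h_full ** (two_over_p + 1.0) / (two_over_p + 1.0)
```

With two surveyed (area, depth) pairs the exponent and scale are fixed, and the stored volume follows by integrating A(h) up to the full pond depth; for a basin whose area grows linearly with depth (2/p = 1) this reduces to the familiar A·h/2.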
Estimating stroke volume from oxygen pulse during exercise.
Crisafulli, Antonio; Piras, Francesco; Chiappori, Paolo; Vitelli, Stefano; Caria, Marcello A; Lobina, Andrea; Milia, Raffaele; Tocco, Filippo; Concu, Alberto; Melis, Franco
2007-10-01
This investigation aimed at verifying whether it was possible to reliably assess stroke volume (SV) during exercise from oxygen pulse (OP) and from a model of arterio-venous oxygen difference (a-vO2D) estimation. The model was tested in 15 amateur male cyclists performing an exercise test on a cycle-ergometer consisting of a linear increase of workload up to exhaustion. Starting from the analysis of previously published data, we constructed a model of a-vO2D estimation (a-vO2Dest) which predicted that the a-vO2D at rest was 30% of the total arterial O2 content (CaO2) and that it increased linearly during exercise, reaching a value of 80% of CaO2 at the peak workload (Wmax) of cycle exercise. Then, the SV was calculated by applying the following equation: SV = OP/a-vO2Dest, where the OP was assessed as the oxygen uptake/heart rate. Data calculated by our model were compared with those obtained by impedance cardiography. The main result was that the limits of agreement between the SV assessed by impedance cardiography and the SV estimated were between 22.4 and -27.9 ml (+18.8 and -24% in terms of per cent difference between the two SV measures). It was concluded that our model for estimating SV during effort may be reasonably applicable, at least in a healthy population.
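The estimation chain in the abstract (OP = oxygen uptake/heart rate; a-vO2D rising linearly from 30 % of CaO2 at rest to 80 % at peak workload; SV = OP/a-vO2Dest) can be sketched as below. The default CaO2 of 200 ml O2 per litre of blood is an assumed typical value, not a figure from the paper.

```python
def stroke_volume(vo2_ml_min, hr_bpm, w, w_max, cao2_ml_per_l=200.0):
    """SV (ml/beat) = oxygen pulse / estimated a-vO2 difference.
    The a-vO2 difference is modelled as rising linearly from 30% of CaO2
    at rest (w = 0) to 80% of CaO2 at peak workload (w = w_max)."""
    op = vo2_ml_min / hr_bpm                            # ml O2 per beat
    avo2d = (0.30 + 0.50 * w / w_max) * cao2_ml_per_l   # ml O2 per L blood
    return op / avo2d * 1000.0                          # L -> ml of blood
```

For example, a cyclist at peak workload with VO2 = 3000 ml/min and HR = 180 bpm has OP ≈ 16.7 ml O2/beat and an estimated a-vO2D of 160 ml O2/L, giving an SV of roughly 104 ml.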
Moore, James; Hays, David; Quinn, John; Johnson, Robert; Durham, Lisa
2013-07-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best-guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best-guess results. Where contamination was uncharacterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)
Kim, Dohyun; Lee, Jongshill; Park, Hoon Ki; Jang, Dong Pyo; Song, Soohwa; Cho, Baek Hwan; Jung, Yoo-Suk; Park, Rae-Woong; Joo, Nam-Seok; Kim, In Young
2016-08-24
The purpose of the study is to analyse how the standard of resting metabolic rate (RMR) affects estimation of the metabolic equivalent of task (MET) using an accelerometer. To investigate the effect on estimation according to the intensity of activity, comparisons were conducted between 3.5 ml O2 · kg(-1) · min(-1) and individually measured resting VO2 as the standard for 1 MET. MET was estimated by linear regression equations derived through five-fold cross-validation using the two types of MET values and accelerations; the accuracy of estimation was analysed through cross-validation, Bland-Altman plots, and a one-way ANOVA test. There were no significant differences in the RMS error after cross-validation. However, in modified Bland-Altman plots, the mean difference for the individual RMR-based estimations differed by as much as 0.5 METs from that based on the 3.5 ml O2 · kg(-1) · min(-1) standard. Finally, the results of the ANOVA test indicated that the individual RMR-based estimations had fewer significant differences between the reference and estimated values at each intensity of activity. In conclusion, the RMR standard is a factor that affects the accurate estimation of METs from acceleration; therefore, the RMR requires individual specification when it is used for estimation of METs using an accelerometer.
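The MET computation itself is a one-liner; what the study varies is the denominator. A minimal sketch, where the default of 3.5 ml O2/kg/min is the conventional 1-MET standard and a caller may substitute an individually measured resting VO2:

```python
def mets(vo2_ml_kg_min, rmr_ml_kg_min=3.5):
    """MET value of an activity: measured VO2 divided by the 1-MET standard.
    The default 3.5 ml O2/kg/min is the conventional standard; pass an
    individually measured resting VO2 to get individual-RMR-based METs."""
    return vo2_ml_kg_min / rmr_ml_kg_min
```

For a subject whose measured RMR is 3.0 rather than 3.5 ml O2/kg/min, an activity at VO2 = 10.5 scores 3.0 METs under the conventional standard but 3.5 METs under the individual one, i.e. the kind of ~0.5 MET discrepancy the study reports.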
Estimating Intracranial Volume in Brain Research: An Evaluation of Methods.
Sargolzaei, Saman; Sargolzaei, Arman; Cabrerizo, Mercedes; Chen, Gang; Goryawala, Mohammed; Pinzon-Ardila, Alberto; Gonzalez-Arias, Sergio M; Adjouadi, Malek
2015-10-01
Intracranial volume (ICV) is a standard measure often used in morphometric analyses to correct for head size in brain studies. Inaccurate ICV estimation could introduce bias into the outcome. The current study provides a decision aid for defining ICV-estimation protocols across different subject groups, in terms of the sampling frequencies that can optimally be used on the volumetric MRI data and the type of software most suitable for estimating the ICV measure. Four groups of 53 subjects are considered: adult controls (AC), adults with Alzheimer's disease (AD), pediatric controls (PC), and a group of pediatric epilepsy subjects (PE). Reference measurements were calculated for each subject by manually tracing the intracranial cavity without sub-sampling. The reliability of the reference measurements was assured through intra- and inter-variation analyses. Three well-known, publicly available software packages (FreeSurfer Ver. 5.3.0, FSL Ver. 5.0, and SPM in its SPM8 and SPM12 releases) were examined for their ability to automatically estimate ICV across the groups. Sub-sampling studies at 95 % confidence showed that, to keep the accuracy of the inter-leaved slice sampling protocol above 99 %, the sampling period cannot exceed 20 mm for AC, 25 mm for PC, 15 mm for AD, and 17 mm for the PE group. The study incorporates a priori knowledge about the population under study into the automated ICV estimation. Tuning the parameters in FSL and using a proper atlas in SPM significantly reduced the systematic bias and the error in ICV estimation via these automated tools. SPM12 with a pediatric template is found to be the more suitable candidate for the PE group. SPM12 and FSL subjected to tuning are the more appropriate tools for the PC group. For the AD group, random error is minimized with FreeSurfer, while SPM8 showed less systematic bias. Across the AC group, both SPM12 and FreeSurfer performed well, but SPM12 reported less systematic bias.
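The inter-leaved sub-sampling protocol amounts to tracing every k-th slice and scaling by the sampling period; with slice thickness in mm, a period of k slices corresponds to a sampling period of k × thickness mm. A minimal sketch (the areas, thickness, and period below are illustrative; the real protocol operates on manually traced MRI contours):

```python
def icv_from_slices(slice_areas_mm2, slice_thickness_mm, period):
    """Estimate intracranial volume (mm^3) from every `period`-th traced
    slice: sum the sampled cross-sectional areas, then scale by the slice
    thickness times the sampling period."""
    return sum(slice_areas_mm2[::period]) * slice_thickness_mm * period
```

The estimate is exact when the area profile is constant across the skipped slices and degrades as the profile varies faster than the sampling period, which is why the tolerable period differs between adult and pediatric head shapes.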
Goudar, Chetan T
2011-10-01
We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented, and the errors in both the substrate concentration, S, and the kinetic parameters Vm, Km, and R resulting from the incorrect solution were characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, which in turn produced 7-50% errors in the kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed with both the incorrect and correct integral equations. While both equations gave identical fits to the substrate depletion data, the final estimates of Vm, Km, and R differed, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
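Any candidate integral form can be checked numerically against the rate equation itself. The sketch below integrates the standard modified rate form dS/dt = -Vm·S/(Km + S) + R by forward Euler (parameter values are illustrative, not from the paper): at R = 0 the result satisfies the classical integrated Michaelis-Menten relation Km·ln(S0/S) + (S0 - S) = Vm·t, and for 0 < R < Vm it relaxes to the steady state S = R·Km/(Vm - R).

```python
def simulate_substrate(s0, vm, km, r, t_end, dt=1e-3):
    """Forward-Euler integration of dS/dt = -Vm*S/(Km + S) + R, the
    Michaelis-Menten rate with endogenous substrate production R."""
    s, t = s0, 0.0
    while t < t_end:
        s += dt * (-vm * s / (km + s) + r)
        t += dt
    return s
```

Fitting parameters against such a numerical solution (or the correct integral form) avoids the silent bias of the incorrect closed form, which fits the data equally well but returns distorted Km and R.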
Correcting Partial Volume Effect in Bi-exponential T2 Estimation of Small Lesions
Huang, Chuan; Galons, Jean-Philippe; Graff, Christian G.; Clarkson, Eric W.; Bilgin, Ali; Kalb, Bobby; Martin, Diego R.; Altbach, Maria I.
2014-01-01
Purpose T2 mapping provides a quantitative approach for focal liver lesion characterization. For small lesions a bi-exponential model should be used to account for partial volume effects (PVE). However, conventional bi-exponential fitting suffers from large uncertainty in the fitted parameters when noise is present. The purpose of this work is to develop a more robust method to correct for PVE affecting small lesions. Methods We developed an ROI-based joint bi-exponential fitting (JBF) algorithm to estimate the T2 of lesions affected by PVE. JBF takes advantage of the lesion fraction variation among voxels within an ROI. JBF is compared to conventional approaches using Cramér-Rao lower bound analysis, numerical simulations, and phantom and in-vivo data. Results JBF provides more accurate and precise T2 estimates in the presence of PVE. Furthermore, JBF is less sensitive to ROI drawing. Phantom and in-vivo results show that JBF can be combined with a reconstruction method for highly undersampled data, enabling the characterization of small abdominal lesions from data acquired in a single breath-hold. Conclusion The JBF algorithm provides more accurate and stable T2 estimates for small structures than conventional techniques when PVE is present. It should be particularly useful for the characterization of small abdominal lesions. PMID:24753061
Alpha's standard error (ASE): an accurate and precise confidence interval estimate.
Duhachek, Adam; Iacobucci, Dawn
2004-10-01
This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
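For readers without SAS/SPSS, coefficient alpha itself is straightforward to compute, and a resampling interval can stand in for the analytic ASE-based interval (the paper's closed-form standard error is not reproduced here). A sketch using a percentile bootstrap, purely as an illustrative alternative:

```python
import random
import statistics

def cronbach_alpha(items):
    """Coefficient alpha for k items: alpha = k/(k-1) * (1 - sum of item
    variances / variance of total scores). `items` is a list of k lists,
    one per item, with respondents in the same order."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.variance(totals))

def alpha_bootstrap_ci(items, b=1000, level=0.95, seed=1):
    """Percentile-bootstrap confidence interval for alpha -- an
    illustrative stand-in for the analytic ASE-based interval."""
    rng = random.Random(seed)
    n = len(items[0])
    reps = []
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(cronbach_alpha([[item[i] for i in idx] for item in items]))
    reps.sort()
    return reps[int(b * (1 - level) / 2)], reps[int(b * (1 + level) / 2) - 1]
```

Reporting the interval alongside the point estimate, as the authors recommend, makes the precision of alpha explicit rather than leaving it implied.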
Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle
NASA Technical Reports Server (NTRS)
VanEepoel, John; Thienel, Julie; Sanner, Robert M.
2006-01-01
In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.
Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle
NASA Technical Reports Server (NTRS)
Thienel, Julie K.; Sanner, Robert M.
2006-01-01
Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
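The BASH-table idea, a histogram whose bins live in a hash table keyed by the tuple of bin indices, is straightforward to sketch. Names and parameters here are illustrative; the paper's implementation is in C++.

```python
from collections import defaultdict

def build_bash_table(points, bin_width):
    """Sparse multi-dimensional histogram ("BASH table"): counts are kept
    in a hash table keyed by the tuple of bin indices, so memory grows
    with the number of occupied bins, not with bins**dimensions."""
    table = defaultdict(int)
    for p in points:
        table[tuple(int(x // bin_width) for x in p)] += 1
    return dict(table)

def density(table, point, bin_width, n_total, dim):
    """Density estimate at `point`: count in its bin / (N * bin volume);
    empty (unstored) bins contribute zero."""
    key = tuple(int(x // bin_width) for x in point)
    return table.get(key, 0) / (n_total * bin_width ** dim)
```

Only occupied bins consume memory, which is exactly the sparsity argument in the abstract: in high dimensions the number of occupied bins is bounded by the number of data points, not by the exponential number of possible bins.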
Spectral estimation from laser scanner data for accurate color rendering of objects
NASA Astrophysics Data System (ADS)
Baribeau, Rejean
2002-06-01
Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged by the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1985-01-01
Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first is an investigation of the use of measurement error models as alternatives to least-squares regression estimators of crop production or timber biomass. The second is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.
Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.
2013-01-01
To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183
Accurate and unbiased estimation of power-law exponents from single-emitter blinking data.
Hoogenboom, Jacob P; den Otter, Wouter K; Offerhaus, Herman L
2006-11-28
Single emitter blinking with a power-law distribution for the on and off times has been observed on a variety of systems including semiconductor nanocrystals, conjugated polymers, fluorescent proteins, and organic fluorophores. The origin of this behavior is still under debate. Reliable estimation of power exponents from experimental data is crucial in validating the various models under consideration. We derive a maximum likelihood estimator for power-law distributed data and analyze its accuracy as a function of data set size and power exponent both analytically and numerically. Results are compared to least-squares fitting of the double logarithmically transformed probability density. We demonstrate that least-squares fitting introduces a severe bias in the estimation result and that the maximum likelihood procedure is superior in retrieving the correct exponent and reducing the statistical error. For a data set as small as 50 data points, the error margins of the maximum likelihood estimator are already below 7%, giving the possibility to quantify blinking behavior when data set size is limited, e.g., due to photobleaching.
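The continuous-data maximum likelihood estimator for a power law p(t) ∝ t^(-α), t ≥ t_min, has the closed form α̂ = 1 + n / Σ ln(t_i/t_min). A sketch of that estimator together with inverse-CDF sampling for generating synthetic blinking times; the closed form is the standard continuous power-law MLE, assumed here to match the paper's derivation:

```python
import math
import random

def powerlaw_mle(times, t_min):
    """Maximum likelihood estimate of alpha for the continuous power law
    p(t) ~ t**(-alpha), t >= t_min: alpha = 1 + n / sum(ln(t_i / t_min)).
    Contrast with least-squares fits to the log-log histogram, which the
    paper shows to be severely biased."""
    data = [t for t in times if t >= t_min]
    return 1.0 + len(data) / sum(math.log(t / t_min) for t in data)

def sample_powerlaw(alpha, t_min, n, seed=0):
    """Inverse-CDF sampling of n power-law distributed on/off times."""
    rng = random.Random(seed)
    return [t_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]
```

Running the estimator on its own synthetic samples is a quick way to reproduce the paper's accuracy-versus-data-set-size analysis for any exponent of interest.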
How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?
Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.
2010-01-01
We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774
Accurate estimation of influenza epidemics using Google search data via ARGO
Yang, Shihao; Santillana, Mauricio; Kou, S. C.
2015-01-01
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980
Do hand-held calorimeters provide reliable and accurate estimates of resting metabolic rate?
Van Loan, Marta D
2007-12-01
This paper provides an overview of a new technique for indirect calorimetry and the assessment of resting metabolic rate. Information from the research literature includes findings on the reliability and validity of a new hand-held indirect calorimeter, as well as its use in clinical and field settings. Research findings to date are mixed. The MedGem instrument has provided more consistent results when compared to the Douglas bag method of measuring metabolic rate. The BodyGem instrument has been shown to be less accurate when compared to standard metabolic carts. Furthermore, when the BodyGem has been used with clinical patients or with undernourished individuals, the results have not been acceptable. Overall, there is not a large enough body of evidence to definitively support the use of these hand-held devices for assessment of metabolic rate in a wide variety of clinical or research environments.
Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle
NASA Astrophysics Data System (ADS)
Timinis, Constantinos; Pitris, Costas
2016-03-01
The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has predicted the sample age far more accurately than any report in the literature.
Multiple candidates and multiple constraints based accurate depth estimation for multi-view stereo
NASA Astrophysics Data System (ADS)
Zhang, Chao; Zhou, Fugen; Xue, Bindang
2017-02-01
In this paper, we propose a depth estimation method for multi-view image sequences. To enhance the accuracy of dense matching and reduce the mismatches produced by inaccurate feature descriptions, we select multiple matching points to build candidate matching sets. We then compute an optimal depth from each candidate matching set that satisfies multiple constraints (epipolar, similarity, and depth consistency constraints). To further increase the accuracy of depth estimation, the depth consistency constraint of neighboring pixels is used to filter out inaccurate matches. On this basis, to obtain a more complete depth map, depth diffusion is performed under the neighboring pixels' depth consistency constraint. Through experiments on the benchmark datasets for multiple view stereo, we demonstrate the superiority of the proposed method over a state-of-the-art method in terms of accuracy.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
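A sketch of the idea for the zero-failure case. For a known shape β, the transform t → t^β makes the data exponential, giving the textbook lower confidence bound R_L(β) = (1 - C)^(t^β / Σ t_i^β); taking the minimum over a β range then yields a bound that needs no β estimate, which is the construction the abstract describes. The formula is assumed from standard Weibull zero-failure theory, not quoted from the report.

```python
def weibull_rel_lower_bound(test_times, t_mission, conf, betas):
    """Lower confidence bound on reliability at t_mission for zero-failure
    Weibull data: for a known shape beta,
        R_L(beta) = (1 - conf) ** (t_mission**beta / sum(t_i**beta)),
    and minimizing over a beta range gives a bound that requires no
    estimate of beta."""
    def r_l(beta):
        return (1.0 - conf) ** (t_mission ** beta
                                / sum(t ** beta for t in test_times))
    return min(r_l(b) for b in betas)
```

When every test unit ran longer than the mission time, R_L(β) increases with β, so the grid minimum sits at the smallest β considered; the abstract's result is that such a global minimum exists and is unique over a β range under rather general conditions.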
Accurate dynamic power estimation for CMOS combinational logic circuits with real gate delay model.
Fadl, Omnia S; Abu-Elyazeed, Mohamed F; Abdelhalim, Mohamed B; Amer, Hassanein H; Madian, Ahmed H
2016-01-01
Dynamic power estimation is essential in designing VLSI circuits where many parameters are involved but the only circuit parameter that is related to the circuit operation is the nodes' toggle rate. This paper discusses a deterministic and fast method to estimate the dynamic power consumption for CMOS combinational logic circuits using gate-level descriptions based on the Logic Pictures concept to obtain the circuit nodes' toggle rate. The delay model for the logic gates is the real-delay model. To validate the results, the method is applied to several circuits and compared against exhaustive, as well as Monte Carlo, simulations. The proposed technique was shown to save up to 96% processing time compared to exhaustive simulation.
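The underlying power model is the usual switching-power sum; the paper's contribution is obtaining the toggle rates (via the Logic Pictures concept under a real gate-delay model), which is not reproduced here. A minimal sketch with illustrative node capacitances:

```python
def dynamic_power(node_caps_f, toggle_rates, vdd, f_clk):
    """P_dyn = sum over nodes of 0.5 * C_i * Vdd^2 * f_clk * TR_i, where
    TR_i is the node's average toggle count per clock cycle.
    Capacitances in farads and f_clk in Hz give power in watts."""
    return sum(0.5 * c * vdd ** 2 * f_clk * tr
               for c, tr in zip(node_caps_f, toggle_rates))
```

Once the per-node toggle rates are known, the estimate is a single pass over the netlist, which is why a deterministic toggle-rate analysis can replace the exhaustive or Monte Carlo simulations mentioned in the abstract.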
Techniques for accurate estimation of net discharge in a tidal channel
Simpson, Michael R.; Bland, Roger
1999-01-01
An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.
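The index velocity calibration reduces to fitting a line from concurrent measurements and applying it with the channel cross-section. A sketch (the comprehensive calibration in the paper also handles stage-dependent area and multiple calibration periods, which this omits; names are illustrative):

```python
def fit_index_rating(v_index, v_mean):
    """Least-squares fit of v_mean = a + b * v_index (the index-velocity
    rating); the v_mean values come from concurrent acoustic Doppler
    discharge measurements divided by the channel area."""
    n = len(v_index)
    mx = sum(v_index) / n
    my = sum(v_mean) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(v_index, v_mean))
         / sum((x - mx) ** 2 for x in v_index))
    return my - b * mx, b

def discharge(a, b, v_index_reading, area_m2):
    """Discharge (m^3/s): rated mean velocity times cross-sectional area."""
    return (a + b * v_index_reading) * area_m2
```

Because the rating coefficients multiply every subsequent reading, errors in a and b propagate directly into net discharge, consistent with the paper's finding that calibration error dominates equipment and measurement errors.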
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1991-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
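For context, the classical zero-failure (success-run) bound gives a Weibull reliability lower bound for any fixed shape parameter β; scanning a β range and keeping the global minimum yields a bound that requires no β estimate, which is the spirit of the result described above. A sketch under those assumptions (not the paper's exact derivation; the test numbers are hypothetical):

```python
def reliability_lower_bound(t, test_time, n_units, confidence, beta_range):
    """Conservative Weibull reliability lower bound at mission time t.

    Assumes a zero-failure test of n_units, each run to test_time. For a
    fixed shape parameter beta, the classical success-run bound is
        R_L(t) = (1 - C) ** ((t / test_time) ** beta / n_units).
    Taking the minimum over a plausible beta range gives a bound that does
    not depend on estimating beta.
    """
    return min(
        (1.0 - confidence) ** ((t / test_time) ** beta / n_units)
        for beta in beta_range
    )

betas = [0.5 + 0.1 * i for i in range(36)]   # sweep beta from 0.5 to 4.0
r_low = reliability_lower_bound(t=50.0, test_time=100.0, n_units=20,
                                confidence=0.90, beta_range=betas)
```

For mission times shorter than the demonstrated test time, the minimum falls at the low end of the β range; for longer missions it moves to the high end, which is why the sweep matters.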
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
A Simple and Accurate Equation for Peak Capacity Estimation in Two Dimensional Liquid Chromatography
Li, Xiaoping; Stoll, Dwight R.; Carr, Peter W.
2009-01-01
Two dimensional liquid chromatography (2DLC) is a very powerful way to greatly increase the resolving power and overall peak capacity of liquid chromatography. The traditional “product rule” for peak capacity usually overestimates the true resolving power due to neglect of the often quite severe under-sampling effect and thus provides poor guidance for optimizing the separation and biases comparisons to optimized one dimensional gradient liquid chromatography. Here we derive a simple yet accurate equation for the effective two dimensional peak capacity that incorporates a correction for under-sampling of the first dimension. The results show that not only is the speed of the second dimension separation important for reducing the overall analysis time, but it plays a vital role in determining the overall peak capacity when the first dimension is under-sampled. A surprising subsidiary finding is that for relatively short 2DLC separations (much less than a couple of hours), the first dimension peak capacity is far less important than is commonly believed and need not be highly optimized, for example through use of long columns or very small particles. PMID:19053226
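The commonly cited form of the corrected capacity divides the product-rule value n₁·n₂ by the under-sampling factor β = sqrt(1 + 3.35·(tₛ/¹w)²), where tₛ is the second-dimension cycle (sampling) time and ¹w the first-dimension peak width. A sketch of that form, with hypothetical numbers; consult the paper for the exact expression:

```python
import math

def effective_2d_peak_capacity(n1, n2, cycle_time, first_dim_peak_width):
    """Effective 2DLC peak capacity with an under-sampling correction.

    Divides the product-rule capacity n1*n2 by the under-sampling factor
    beta = sqrt(1 + 3.35 * (ts / 1w)^2), where ts is the second-dimension
    cycle time and 1w the first-dimension peak width (same time units).
    """
    beta = math.sqrt(1.0 + 3.35 * (cycle_time / first_dim_peak_width) ** 2)
    return n1 * n2 / beta

# Hypothetical: n1 = 100, n2 = 30, 20 s cycle, 30 s first-dimension peaks.
n_eff = effective_2d_peak_capacity(100, 30, cycle_time=20.0,
                                   first_dim_peak_width=30.0)
```

Here the product rule promises 3000 peaks but the correction cuts the effective capacity by roughly a third, illustrating why a fast second dimension matters.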
Accurate Estimation of Expression Levels of Homologous Genes in RNA-seq Experiments
NASA Astrophysics Data System (ADS)
Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran
Next generation high throughput sequencing (NGS) is poised to replace array based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naïve algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.
Accurate estimation of expression levels of homologous genes in RNA-seq experiments.
Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran
2011-03-01
Next generation high-throughput sequencing (NGS) is poised to replace array-based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naive algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood-based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.
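The rigorous alternative to dropping multi-mapped reads is typically an expectation-maximization scheme: split each ambiguous read across its candidate genes in proportion to current abundance estimates, then re-estimate abundances. A generic sketch of such an EM (the paper's full model additionally accounts for mismatches between the sequenced individual and the reference):

```python
def em_multiread_abundance(read_maps, n_genes, n_iter=50):
    """Distribute multi-mapped reads across genes by maximum likelihood (EM).

    read_maps: list of lists; read_maps[r] holds the gene indices that
    read r maps to. Returns estimated relative abundances per gene.
    """
    theta = [1.0 / n_genes] * n_genes
    n_reads = len(read_maps)
    for _ in range(n_iter):
        counts = [0.0] * n_genes
        for genes in read_maps:
            # E-step: split each read among its candidate genes in
            # proportion to the current abundance estimates.
            total = sum(theta[g] for g in genes)
            for g in genes:
                counts[g] += theta[g] / total
        # M-step: abundances proportional to expected read counts.
        theta = [c / n_reads for c in counts]
    return theta

# Toy data: 3 reads unique to gene 0, 1 unique to gene 1, 2 ambiguous.
reads = [[0], [0], [0], [1], [0, 1], [0, 1]]
theta = em_multiread_abundance(reads, n_genes=2)
```

On this toy input the fixed point assigns 75% of the mass to gene 0: the ambiguous reads are resolved in proportion to the evidence from the unique reads.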
Dou, Chao
2016-01-01
The storage volume of an internet data center is a classic example of a time series, and predicting it is valuable for the business. However, the storage volume series from a data center is always “dirty”: it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before further prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the “dirty” data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and contributes greatly to predicting future volume values. PMID:28090205
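The Kalman-filtering stage of such a pipeline can be illustrated with a minimal one-dimensional random-walk filter; this is a stand-in for the paper's method, which also performs cubic-spline reconstruction and averaging:

```python
def kalman_trend(series, q=1e-3, r=1.0):
    """Scalar Kalman filter that extracts the main trend of a noisy series.

    q: process noise variance, r: measurement noise variance.
    None entries are treated as missing data: predict only, no update.
    Assumes the first sample is present.
    """
    x, p = series[0], 1.0          # initial state estimate and variance
    trend = []
    for z in series:
        p = p + q                  # predict step (random-walk model)
        if z is not None:
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)    # update with the measurement
            p = (1 - k) * p
        trend.append(x)
    return trend

# Noisy readings around 10 with one missing sample.
trend = kalman_trend([10.0, 10.4, 9.7, None, 10.2, 9.9])
```

With a small process-noise variance the filter heavily smooths the series, and a missing sample simply carries the previous estimate forward.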
Estimating Marine Aerosol Particle Volume and Number from Maritime Aerosol Network Data
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Smirnov, A.; Hsu, N. C.; Munchak, L. A.; Holben, B. N.
2012-01-01
As well as spectral aerosol optical depth (AOD), aerosol composition and concentration (number, volume, or mass) are of interest for a variety of applications. However, remote sensing of these quantities is more difficult than for AOD, as it is more sensitive to assumptions relating to aerosol composition. This study uses spectral AOD measured on Maritime Aerosol Network (MAN) cruises, with the additional constraint of a microphysical model for unpolluted maritime aerosol based on analysis of Aerosol Robotic Network (AERONET) inversions, to estimate these quantities over open ocean. When the MAN data are subset to those likely to be comprised of maritime aerosol, number and volume concentrations obtained are physically reasonable. Attempts to estimate surface concentration from columnar abundance, however, are shown to be limited by uncertainties in vertical distribution. Columnar AOD at 550 nm and aerosol number for unpolluted maritime cases are also compared with Moderate Resolution Imaging Spectroradiometer (MODIS) data, for both the present Collection 5.1 and forthcoming Collection 6. MODIS provides a best-fitting retrieval solution, as well as the average for several different solutions, with different aerosol microphysical models. The average solution MODIS dataset agrees more closely with MAN than the best solution dataset. Terra tends to retrieve lower aerosol number than MAN, and Aqua higher, linked with differences in the aerosol models commonly chosen. Collection 6 AOD is likely to agree more closely with MAN over open ocean than Collection 5.1. In situations where spectral AOD is measured accurately, and aerosol microphysical properties are reasonably well-constrained, estimates of aerosol number and volume using MAN or similar data would provide for a greater variety of potential comparisons with aerosol properties derived from satellite or chemistry transport model data.
Performance benchmarking of liver CT image segmentation and volume estimation
NASA Astrophysics Data System (ADS)
Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang
2008-03-01
In recent years more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used in liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected, and a user-friendly annotation tool was developed to generate ground truth data for a variety of organs, including 2D contours of the liver, the two kidneys, the spleen, the aorta, and the spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from liver segmentation algorithms that make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
Single-cell volume estimation by applying three-dimensional reconstruction methods
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
We have studied three-dimensional reconstruction methods to estimate the cell volume of astroglial cells in primary culture. The studies are based on fluorescence imaging and optical sectioning. An automated image-acquisition system was developed to collect two-dimensional microscopic images. Images were reconstructed by the Linear Maximum a Posteriori method and the non-linear Maximum Likelihood Expectation Maximization (ML-EM) method. In addition, because of the high computational demand of the ML-EM algorithm, we have developed a fast variant of this method. (1) Advanced image analysis techniques were applied for accurate and automated cell volume determination. (2) The sensitivity and accuracy of the reconstruction methods were evaluated by using fluorescent micro-beads with known diameter. The algorithms were applied to fura-2-labeled astroglial cells in primary culture exposed to hypo- or hyper-osmotic stress. The results showed that the ML-EM reconstructed images are adequate for the determination of volume changes in cells or parts thereof.
Danjon, Frédéric; Caplan, Joshua S; Fortin, Mathieu; Meredieu, Céline
2013-01-01
Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately.
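The near-linear relation the abstract reports, between cross-sectional diameter and the square root of descendent volume, suggests fitting one slope per root type and squaring predictions at broken root ends. A sketch of that idea with hypothetical calibration numbers (the paper's statistical models are more elaborate):

```python
import math

def fit_volume_slope(csd, volumes):
    """Least-squares fit through the origin of sqrt(Vd) = a * CSD.

    csd: cross-sectional diameters; volumes: descendent root volumes.
    Fitting one slope per root type mirrors the stratified approach.
    """
    y = [math.sqrt(v) for v in volumes]
    return sum(x * yi for x, yi in zip(csd, y)) / sum(x * x for x in csd)

def predict_missing_volume(slope, broken_end_diameters):
    """Estimate unrecovered volume from the diameters of broken root ends."""
    return sum((slope * d) ** 2 for d in broken_end_diameters)

# Hypothetical calibration data: CSD in cm, descendent volume in cm^3.
slope = fit_volume_slope([1.0, 2.0, 3.0], [4.0, 16.0, 36.0])
lost = predict_missing_volume(slope, [0.5, 1.5])
```

Because the fit is done on sqrt(Vd), a single slope summarizes the taper of a root type, which is exactly the "synthetic parameter" role the abstract describes.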
Automatic estimation of extent of resection and residual tumor volume of patients with glioblastoma.
Meier, Raphael; Porz, Nicole; Knecht, Urspeter; Loosli, Tina; Schucht, Philippe; Beck, Jürgen; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2017-01-06
OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method, brain tumor image analysis (BraTumIA), for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and the automatic segmentation BraTumIA software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, which are comparable to the estimates of human experts. However, automated analysis showed a tendency to overestimate
[Research on maize multispectral image accurate segmentation and chlorophyll index estimation].
Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e
2015-01-01
In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was conducted based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958 planted in an approximately 1 000 m × 600 m experimental field. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to indicate the chlorophyll content. Secondly, after image smoothing using an adaptive smoothing filter, the NIR maize image was selected to segment the maize leaves from the background, because there was a large difference between plant and soil background in the gray histogram. The NIR image segmentation algorithm followed the steps of preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were compared. It was revealed that the latter was the better one for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Dilation and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. And then, the multi-spectral image of the maize canopy was accurately segmented in the R, G and B bands separately. Thirdly, the image parameters were abstracted based on the segmented visible and NIR images. The average gray
The challenges of accurately estimating time of long bone injury in children.
Pickett, Tracy A
2015-07-01
The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum.
NASA Astrophysics Data System (ADS)
Guerdoux, Simon; Fourment, Lionel
2007-05-01
An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.
Takao, Seishin; Tadano, Shigeru; Taguchi, Hiroshi; Yasuda, Koichi; Onimaru, Rikiya; Ishikawa, Masayori; Bengua, Gerard; Suzuki, Ryusuke; Shirato, Hiroki
2011-11-01
Purpose: To establish a method for the accurate acquisition and analysis of the variations in tumor volume, location, and three-dimensional (3D) shape of tumors during radiotherapy in the era of image-guided radiotherapy. Methods and Materials: Finite element models of lymph nodes were developed based on computed tomography (CT) images taken before the start of treatment and every week during the treatment period. A surface geometry map with a volumetric scale was adopted and used for the analysis. Six metastatic cervical lymph nodes, 3.5 to 55.1 cm³ before treatment, in 6 patients with head and neck carcinomas were analyzed in this study. Three fiducial markers implanted in mouthpieces were used for the fusion of CT images. Changes in the location of the lymph nodes were measured on the basis of these fiducial markers. Results: The surface geometry maps showed convex regions in red and concave regions in blue to ensure that the characteristics of the 3D tumor geometries are simply understood visually. After the irradiation of 66 to 70 Gy in 2 Gy daily doses, the patterns of the colors had not changed significantly, and the maps before and during treatment were strongly correlated (average correlation coefficient was 0.808), suggesting that the tumors shrank uniformly, maintaining the original characteristics of the shapes in all 6 patients. The movement of the gravitational center of the lymph nodes during the treatment period was everywhere less than ±5 mm except in 1 patient, in whom the change reached nearly 10 mm. Conclusions: The surface geometry map was useful for an accurate evaluation of the changes in volume and 3D shapes of metastatic lymph nodes. The fusion of the initial and follow-up CT images based on fiducial markers enabled an analysis of changes in the location of the targets. Metastatic cervical lymph nodes in patients were suggested to decrease in size without significant changes in the 3D shape during radiotherapy. The movements of the
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, which is flexible in adapting to any form of object image, has high measurement accuracy along with low computational complexity, owing to the maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of automatically discovering asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
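A much simpler baseline for subpixel object localization is the background-subtracted intensity-weighted centroid; it shares the goal of the paper's Gaussian maximum-likelihood fit (coordinates at sub-pixel precision from the per-pixel signal) but none of its statistical machinery, so treat it only as an illustration:

```python
def subpixel_centroid(image, background=0.0):
    """Background-subtracted intensity-weighted centroid of an object image.

    image: 2D list of pixel values (rows of columns).
    Returns (row, col) at sub-pixel precision.
    """
    total = sx = sy = 0.0
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            w = max(val - background, 0.0)   # clip negative residuals
            total += w
            sx += w * c
            sy += w * r
    if total == 0:
        raise ValueError("no signal above background")
    return sy / total, sx / total

# Toy frame: a point source straddling pixels (1,1) and (1,2).
img = [[0, 0, 0, 0],
       [0, 5, 5, 0],
       [0, 0, 0, 0]]
row, col = subpixel_centroid(img)
```

The source lands exactly between two pixels, so the recovered column is 1.5: a coordinate no integer-pixel method could report.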
Cumulative Ocean Volume Estimates of the Solar System
NASA Astrophysics Data System (ADS)
Frank, E. A.; Mojzsis, S. J.
2010-12-01
Although there has been much consideration for habitability in silicate planets and icy bodies, this information has never been quantitatively gathered into a single approximation encompassing our solar system from star to cometary halo. Here we present an estimate for the total habitable volume of the solar system by constraining our definition of habitable environments to those to which terrestrial microbial extremophiles could theoretically be transplanted and yet survive. The documented terrestrial extremophile inventory stretches environmental constraints for habitable temperature and pH space of T ~ -15oC to 121oC and pH ~ 0 to 13.5, salinities >35% NaCl, and gamma radiation doses of 10,000 to 11,000 grays [1]. Pressure is likely not a limiting factor to life [2]. We applied these criteria in our analysis of the geophysical habitable potential of the icy satellites and small icy bodies. Given the broad spectrum of environmental tolerance, we are optimistic that our pessimistic estimates are conservative. Beyond the reaches of our inner solar system's conventional habitable zone (Earth, Mars and perhaps Venus) is Ceres, a dwarf planet in the habitable zone that could possess a significant liquid water ocean if that water contains anti-freezing species [3]. Yet further out, Europa is a small icy satellite that has generated much excitement for astrobiological potential due to its putative subsurface liquid water ocean. It is widely promulgated that the icy moons Enceladus, Triton, Callisto, Ganymede, and Titan likewise have also sustained liquid water oceans. If oceans in Europa, Enceladus, and Triton have direct contact with a rocky mantle hot enough to melt, hydrothermal vents could provide an energy source for chemotrophic organisms. Although oceans in the remaining icy satellites may be wedged between two layers of ice, their potential for life cannot be precluded. Relative to the Jovian style of icy satellites, trans-neptunian objects (TNOs) - icy bodies
Luo, Xiongbiao
2014-06-15
An electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.
Pinkerton, Steven D; Galletly, Carol L; McAuliffe, Timothy L; DiFranceisco, Wayne; Raymond, H Fisher; Chesson, Harrell W
2010-02-01
The sexual behaviors of HIV/sexually transmitted infection (STI) prevention intervention participants can be assessed on a partner-by-partner basis, in aggregate (i.e., total numbers of sex acts, collapsed across partners), or using a combination of these two methods (e.g., assessing five partners in detail and any remaining partners in aggregate). There is a natural trade-off between the level of sexual behavior detail and the precision of HIV/STI acquisition risk estimates. The results of this study indicate that relatively simple aggregate data collection techniques suffice to adequately estimate HIV risk. For highly infectious STIs, in contrast, accurate STI risk assessment requires more intensive partner-by-partner methods.
Semi-automatic border detection method for left ventricular volume estimation in 4D ultrasound data
NASA Astrophysics Data System (ADS)
van Stralen, Marijn; Bosch, Johan G.; Voormolen, Marco M.; van Burken, Gerard; Krenning, Boudewijn J.; van Geuns, Robert Jan M.; Angelie, Emmanuelle; van der Geest, Rob J.; Lancee, Charles T.; de Jong, Nico; Reiber, Johan H. C.
2005-04-01
We propose a semi-automatic endocardial border detection method for LV volume estimation in 3D time series of cardiac ultrasound data. It is based on pattern matching and dynamic programming techniques and operates on 2D slices of the 4D data, requiring minimal user interaction. We evaluated the method on data acquired with the Fast Rotating Ultrasound (FRU) transducer: a linear phased-array transducer rotated at high speed around its image axis, generating high-quality 2D images of the heart. We automatically select a subset of 2D images at typically 10 rotation angles and 16 cardiac phases. From four manually drawn contours, a 4D shape model and a 4D edge pattern model are derived. For the selected images, contour shapes and edge patterns are estimated using the models. Pattern matching and dynamic programming are applied to detect the contours automatically. The method allows easy corrections of the detected 2D contours, to iteratively achieve more accurate models and improved detections. An evaluation of this method on FRU data against MRI was done for full-cycle LV volumes in 10 patients. Good correlations were found against MRI volumes (r=0.94, y=0.72x + 30.3, difference of 9.6 +/- 17.4 ml (Av +/- SD)) and a low interobserver variability for US (r=0.94, y=1.11x - 16.8, difference of 1.4 +/- 14.2 ml). On average only 2.8 corrections per patient were needed (in a total of 160 images). Although the method shows good correlations with MRI without corrections, applying these corrections can yield significant improvements.
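The dynamic-programming step in border detectors of this kind is usually a minimum-cost path search through an edge-cost image under a smoothness constraint. A generic sketch of that DP (not the authors' implementation; the cost matrix is a toy example):

```python
def dp_border(cost):
    """Minimum-cost left-to-right path through a cost image.

    The path may move at most one row per column (smoothness constraint
    typical of DP contour detection). cost: 2D list [row][col].
    Returns the row index chosen for each column.
    """
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            # Best predecessor among the adjacent rows of the previous column.
            prev = range(max(0, r - 1), min(rows, r + 2))
            best = min(prev, key=lambda pr: acc[pr][c - 1])
            acc[r][c] = cost[r][c] + acc[best][c - 1]
            back[r][c] = best
    # Trace back from the cheapest entry in the last column.
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

# Low-cost band along row 1 with a one-column detour through row 2.
costs = [[9, 9, 9, 9],
         [1, 1, 9, 1],
         [9, 9, 1, 9]]
edge = dp_border(costs)
```

The detour in column 2 is taken because the smoothness constraint allows a one-row step, which is exactly how such detectors track a wandering endocardial border.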
NASA Technical Reports Server (NTRS)
Ferguson, Connor R.; Lee, Stuart M. C.; Stenger, Michael B.; Platts, Steven H.; Laurie, Steven S.
2014-01-01
Orthostatic intolerance affects 60-80% of astronauts returning from long-duration missions, representing a significant risk to completing mission-critical tasks. While likely multifactorial, a reduction in stroke volume (SV) represents one factor contributing to orthostatic intolerance during stand and head-up tilt (HUT) tests. Current measures of SV during stand or HUT tests use Doppler ultrasound and require a trained operator and specialized equipment, restricting their use in the field. BeatScope (Finapres Medical Systems BV, The Netherlands) uses the Modelflow algorithm to estimate SV from continuous blood pressure waveforms in supine subjects; however, evidence supporting the use of Modelflow to estimate SV in subjects completing stand or HUT tests remains scarce. Furthermore, because the blood pressure device is held extended at heart level during HUT tests, but allowed to rest at the side during stand tests, changes in the finger arterial pressure waveform resulting from arm positioning could alter Modelflow-estimated SV. The purpose of this project was to compare Doppler ultrasound and BeatScope estimations of SV to determine if BeatScope can be used during stand or HUT tests. Finger photoplethysmography was used to acquire arterial pressure waveforms corrected for hydrostatic finger-to-heart height using the Finometer (FM) and Portapres (PP) arterial pressure devices in 10 subjects (5 men and 5 women) during a stand test, while simultaneous estimates of SV were collected using Doppler ultrasound. Measures were made after 5 minutes of supine rest and while subjects stood for 5 minutes. Next, SV estimates were reacquired while each arm was independently raised to heart level, a position similar to tilt testing. Supine SV estimates were not significantly different between the three devices (FM: 68+/-20, PP: 71+/-21, US: 73+/-21 ml/beat). Upon standing, the change in SV estimated by FM (-18+/-8 ml) was not different from PP (-21+/-12), but both were significantly
Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure
2016-01-01
Survival is a fundamental demographic component, and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of the survival function parameters, while remaining robust to the data aspects that usually render regression methods numerically unstable. PMID:27936048
Using GIS to Estimate Lake Volume from Limited Data (Lake and Reservoir Management)
Estimates of lake volume are necessary for calculating residence time and modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of...
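The basic GIS computation behind such estimates is summing depth times cell area over a bathymetry raster. A minimal sketch, assuming a regular grid with square cells and NaN (or negative) values marking land/nodata:

```python
import numpy as np

def lake_volume(depth_grid, cell_size_m):
    """Estimate lake volume (m^3) from a gridded bathymetry raster:
    V = sum over cells of depth * cell area.
    NaN or negative depths (land / nodata) contribute nothing."""
    d = np.asarray(depth_grid, dtype=float)
    d = np.where(np.isnan(d) | (d < 0.0), 0.0, d)
    return float(d.sum() * cell_size_m ** 2)
```

Real workflows would first interpolate a continuous bathymetric surface from sounding points; this sketch only shows the final integration step.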
A novel method for blood volume estimation using trivalent chromium in rabbit models
Baby, Prathap Moothamadathil; Kumar, Pramod; Kumar, Rajesh; Jacob, Sanu S.; Rawat, Dinesh; Binu, V. S.; Karun, Kalesh M.
2014-01-01
Background: Blood volume measurement, though important in the management of critically ill patients, is not routinely estimated in clinical practice owing to the labour-intensive, intricate and time-consuming nature of existing methods. Aims: The aim was to compare blood volume estimations using trivalent chromium [51Cr(III)] and the standard Evans blue dye (EBD) method in New Zealand white rabbit models and to establish a correction factor (CF). Materials and Methods: Blood volume estimation in 33 rabbits was carried out using the EBD method, with concentration determined by spectrophotometric assay, followed by blood volume estimation using direct injection of 51Cr(III). Twenty of the 33 rabbits were used to find the CF by dividing the blood volume estimated using EBD by the blood volume estimated using 51Cr(III). The CF was validated in 13 rabbits by multiplying it with the blood volume estimates obtained using 51Cr(III). Results: The mean circulating blood volume of the 33 rabbits using EBD was 142.02 ± 22.77 ml or 65.76 ± 9.31 ml/kg, and using 51Cr(III) was estimated to be 195.66 ± 47.30 ml or 89.81 ± 17.88 ml/kg. The CF was found to be 0.77. The mean blood volume of the 13 rabbits measured using EBD was 139.54 ± 27.19 ml or 66.33 ± 8.26 ml/kg, and using 51Cr(III) with the CF was 152.73 ± 46.25 ml or 71.87 ± 13.81 ml/kg (P = 0.11). Conclusions: The estimation of blood volume using 51Cr(III) was comparable to the standard EBD method when using the CF. With further research in this direction, we envisage human blood volume estimation using 51Cr(III) finding application in acute clinical settings. PMID:25190922
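The correction-factor arithmetic described above is simple enough to sketch directly. This is an illustration of the calculation only (per-animal EBD/51Cr(III) ratios averaged, then applied multiplicatively); the numbers in the test are hypothetical, not the study's data:

```python
def correction_factor(ebd_ml, cr_ml):
    """Mean of per-animal ratios (EBD estimate / 51Cr(III) estimate)."""
    ratios = [e / c for e, c in zip(ebd_ml, cr_ml)]
    return sum(ratios) / len(ratios)

def corrected_volume(cr_estimate_ml, cf):
    """Apply the correction factor to a raw 51Cr(III) estimate."""
    return cr_estimate_ml * cf
```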
Driver, Nancy E.; Tasker, Gary D.
1990-01-01
Urban planners and managers need information on the quantity of precipitation and the quality and quantity of runoff in their cities and towns if they are to adequately plan for the effects of storm runoff from urban areas. As a result of this need, four sets of linear regression models were developed for estimating storm-runoff constituent loads, storm-runoff volumes, storm-runoff mean concentrations of constituents, and mean seasonal or mean annual constituent loads from physical, land-use, and climatic characteristics of urban watersheds in the United States. Thirty-four regression models of storm-runoff constituent loads and storm-runoff volumes were developed, and 31 models of storm-runoff mean concentrations were developed. Ten models of mean seasonal or mean annual constituent loads were developed by analyzing long-term storm-rainfall records using at-site linear regression models. Three statistically different regions, delineated on the basis of mean annual rainfall, were used to improve the linear regression models where adequate data were available. Multiple regression analyses, including ordinary least squares and generalized least squares, were used to determine the optimum linear regression models. These models can be used to estimate storm-runoff constituent loads, storm-runoff volumes, storm-runoff mean concentrations of constituents, and mean seasonal or mean annual constituent loads at gaged and ungaged urban watersheds. The most significant explanatory variables in all linear regression models were total storm rainfall and total contributing drainage area. Impervious area, land use, and mean annual climatic characteristics also were significant in some models. Models for estimating loads of dissolved solids, total nitrogen, and total ammonia plus organic nitrogen as nitrogen generally were the most accurate, whereas models for suspended solids were the least accurate. The most accurate models were those for application in the more arid Western
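Regression models of this kind are often fitted in log space so that loads scale as power functions of rainfall and drainage area. The following is a minimal ordinary-least-squares sketch of that idea; the log-linear functional form and the training data are illustrative assumptions, not the report's fitted models:

```python
import numpy as np

# Hypothetical calibration data: total storm rainfall (in),
# contributing drainage area (mi^2), observed constituent load (lb).
rain = np.array([0.5, 1.2, 2.0, 0.8, 1.5])
area = np.array([0.3, 1.1, 0.7, 2.0, 0.5])
load = np.array([12.0, 90.0, 110.0, 95.0, 60.0])

# Fit log(load) = b0 + b1*log(rain) + b2*log(area) by OLS.
X = np.column_stack([np.ones_like(rain), np.log(rain), np.log(area)])
coef, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)

def predict_load(rainfall_in, drainage_area_mi2):
    """Back-transform the log-linear prediction to a load in lb."""
    return float(np.exp(coef[0]
                        + coef[1] * np.log(rainfall_in)
                        + coef[2] * np.log(drainage_area_mi2)))
```

The report additionally used generalized least squares and regionalization; this sketch covers only the core OLS step.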
Walters, William A.; Lennon, Niall J.; Bochicchio, James; Krohn, Andrew; Pennanen, Taina
2016-01-01
ABSTRACT While high-throughput sequencing methods are revolutionizing fungal ecology, recovering accurate estimates of species richness and abundance has proven elusive. We sought to design internal transcribed spacer (ITS) primers and an Illumina protocol that would maximize coverage of the kingdom Fungi while minimizing nontarget eukaryotes. We inspected alignments of the 5.8S and large subunit (LSU) ribosomal genes and evaluated potential primers using PrimerProspector. We tested the resulting primers using tiered-abundance mock communities and five previously characterized soil samples. We recovered operational taxonomic units (OTUs) belonging to all 8 members in both mock communities, despite DNA abundances spanning 3 orders of magnitude. The expected and observed read counts were strongly correlated (r = 0.94 to 0.97). However, several taxa were consistently over- or underrepresented, likely due to variation in rRNA gene copy numbers. The Illumina data resulted in clustering of soil samples identical to that obtained with Sanger sequence clone library data using different primers. Furthermore, the two methods produced distance matrices with a Mantel correlation of 0.92. Nonfungal sequences comprised less than 0.5% of the soil data set, with most attributable to vascular plants. Our results suggest that high-throughput methods can produce fairly accurate estimates of fungal abundances in complex communities. Further improvements might be achieved through corrections for rRNA copy number and utilization of standardized mock communities. IMPORTANCE Fungi play numerous important roles in the environment. Improvements in sequencing methods are providing revolutionary insights into fungal biodiversity, yet accurate estimates of the number of fungal species (i.e., richness) and their relative abundances in an environmental sample (e.g., soil, roots, water, etc.) remain difficult to obtain. We present improved methods for high-throughput Illumina sequencing of the
Hardy, C.C.
1996-02-01
Guidelines in the form of a six-step approach are provided for estimating volumes, oven-dry mass, consumption, and particulate matter emissions for piled logging debris. Seven stylized pile shapes and their associated geometric volume formulae are used to estimate gross pile volumes. The gross volumes are then reduced to net wood volume by applying an appropriate wood-to-pile volume packing ratio. Next, the oven-dry mass of the pile is determined by using the wood density, or a weighted average of two wood densities, for any of 14 tree species commonly piled and burned in the Western United States. Finally, the percentage of biomass consumed is multiplied by an appropriate emission factor to determine the mass of PM, PM10, and PM2.5 produced from the burned pile. These estimates can be extended to represent multiple piles, or multiple groups of similar piles, to estimate the particulate emissions from an entire burn project.
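The chain of steps above is a straightforward product of factors. A minimal sketch, with assumed SI-style units (the actual guidelines tabulate packing ratios, densities, and emission factors by pile shape and species):

```python
def pile_pm_emissions(gross_volume_m3, packing_ratio, wood_density_kg_m3,
                      fraction_consumed, emission_factor_kg_per_mg):
    """Particulate emissions (kg) from one burned debris pile.

    gross_volume_m3:        from the stylized pile-shape formula (step 1)
    packing_ratio:          wood-to-pile volume ratio -> net wood volume (step 2)
    wood_density_kg_m3:     oven-dry density of the species (step 3)
    fraction_consumed:      proportion of biomass actually burned (step 4)
    emission_factor_kg_per_mg: kg of PM per megagram (Mg) consumed (step 5)
    """
    net_wood_m3 = gross_volume_m3 * packing_ratio
    dry_mass_mg = net_wood_m3 * wood_density_kg_m3 / 1000.0  # kg -> Mg
    consumed_mg = dry_mass_mg * fraction_consumed
    return consumed_mg * emission_factor_kg_per_mg
```

Summing this function over all piles in a project gives the project-level estimate (step 6).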
NASA Astrophysics Data System (ADS)
Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto
2015-12-01
In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
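The element-local CFL condition at the heart of the LTS scheme can be illustrated in one dimension. This sketch only computes per-element time steps; the synchronization-free update logic, the MUSCL-Hancock reconstruction, and the coupling to the hydrodynamic model are beyond its scope:

```python
import numpy as np

def local_time_steps(dx, u, cfl=0.9):
    """Element-local time steps dt_i = cfl * dx_i / |u_i|.

    dx: element sizes; u: local advection speeds.
    Near-zero speeds (dry or still cells) are floored to avoid
    division by zero, giving those cells a very large admissible dt.
    """
    speed = np.maximum(np.abs(np.asarray(u, dtype=float)), 1e-12)
    return cfl * np.asarray(dx, dtype=float) / speed
```

Each element then advances with its own step, rather than the global minimum over all elements that a GTS scheme would impose.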
Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul
2015-01-01
In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
A Theoretical Mathematical Model to Estimate Blood Volume in Clinical Practice.
D'Angelo, Matthew; Hodgen, R Kyle; Wofford, Kenneth; Vacchiano, Charles
2015-10-01
Perioperative intravenous (IV) fluid management is controversial. Fluid therapy is guided by inaccurate algorithms and by changes in the patient's vital signs that are nonspecific for changes to the patient's blood volume (BV). Anesthetic agents, patient comorbidities, and surgical techniques interact and further confound clinical assessment of volume status. Through adaptation of existing acute normovolemic hemodilution algorithms, it may be possible to predict a patient's BV by measuring hematocrit (HcT) before and after hemodilution. Our proposed mathematical model requires the following four data points to estimate a patient's total BV: ideal BV, baseline HcT, a known fluid bolus (FB), and a second HcT following the FB. To test our method, we obtained 10 ideal and 10 actual subject BV data measures from 9 unique subjects, derived from a commercially used Food and Drug Administration-approved, semi-automated BV analyzer. With these data, we calculated the theoretical BV change following a FB. Using the four required data points, we predicted BVs (BVp) and compared our predictions with the actual BV (BVa) measures provided by the data set. The BVp calculated using our model correlated highly with the BVa provided by the BV analyzer data set (df = 8, r = .99). Our calculations suggest that, with accurate HcT measurement, this method shows promise for the identification of abnormal BV states such as hyper- and hypovolemia and may prove to be a reliable method for titrating IV fluid.
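The dilution principle underlying such models can be sketched directly. Assuming red-cell mass is conserved across the bolus, HcT1 · BV = HcT2 · (BV + FB), which rearranges to BV = FB · HcT2 / (HcT1 − HcT2). This is a minimal illustration of that principle only; the authors' model also incorporates ideal BV, which is not modeled here:

```python
def estimate_blood_volume(hct_baseline, hct_post, bolus_ml):
    """Dilution estimate of total blood volume (ml).

    Red-cell mass conservation:
        hct_baseline * BV = hct_post * (BV + bolus)
    =>  BV = bolus * hct_post / (hct_baseline - hct_post)
    """
    if hct_post >= hct_baseline:
        raise ValueError("post-bolus hematocrit must fall below baseline")
    return bolus_ml * hct_post / (hct_baseline - hct_post)
```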
Palatine tonsil volume estimation using different methods after tonsillectomy.
Sağıroğlu, Ayşe; Acer, Niyazi; Okuducu, Hacı; Ertekin, Tolga; Erkan, Mustafa; Durmaz, Esra; Aydın, Mesut; Yılmaz, Seher; Zararsız, Gökmen
2016-06-15
This study was carried out to measure the volume of the palatine tonsil in otorhinolaryngology outpatients with complaints of adenotonsillar hypertrophy and chronic tonsillitis who had undergone tonsillectomy. To date, no study in the literature has measured palatine tonsil volume using different methods and compared the results with subjective tonsil size. For this purpose, we used three different methods to measure palatine tonsil volume, and the correlation of each parameter with tonsil size was assessed. After tonsillectomy, palatine tonsil volume was measured by the Archimedes, Cavalieri and Ellipsoid methods. Mean right and left palatine tonsil volumes were calculated as 2.63 ± 1.34 cm(3) and 2.72 ± 1.51 cm(3) by the Archimedes method, 3.51 ± 1.48 cm(3) and 3.37 ± 1.36 cm(3) by the Cavalieri method, and 2.22 ± 1.22 cm(3) and 2.29 ± 1.42 cm(3) by the Ellipsoid method, respectively. Excellent agreement was found among the three volumetric measurement techniques according to Bland-Altman plots. In addition, tonsil grade was correlated significantly with tonsil volume.
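Of the three methods, the ellipsoid approximation is the simplest to state: the specimen's three orthogonal diameters are multiplied and scaled by π/6. A minimal sketch (the paper's exact measurement protocol and any empirical correction coefficient are not reproduced here):

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Ellipsoid approximation of specimen volume:
    V = (pi / 6) * L * W * H  (equivalently 4/3 * pi * a * b * c
    with semi-axes a = L/2, b = W/2, c = H/2)."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm
```

The Archimedes method instead reads volume off directly as displaced fluid, and the Cavalieri method sums section areas times slice thickness.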
NASA Astrophysics Data System (ADS)
Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang
2014-01-01
The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields. The energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much research has recently focused on the quality of free-energy profiles obtained by Jarzynski's equality, a widely used equation in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies arise only when the target molecule moves in the high-density water layer on a material surface. A hybrid scheme was then adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the former observation, which points to a fast and accurate estimation of adsorption free energy for large biomaterial interfacial systems.
2011-01-01
Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L
2006-07-01
Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134+/-51 and 67+/-56 ml) were similar to those by MR (137+/-57 and 70+/-60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55+/-21 vs. 56+/-21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3+/-1.8 vs. 8.8+/-1.9 mm and 12.7+/-3.4 vs. 13.3+/-3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54+/-30 vs. 51+/-31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.
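The ejection-fraction figures quoted above follow directly from the end-diastolic and end-systolic volumes. A minimal sketch of the standard definition, checked against the study's mean MDCT volumes:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left ventricular ejection fraction (%) from
    end-diastolic (EDV) and end-systolic (ESV) volumes:
    EF = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0
```

With the study's mean MDCT volumes (134 and 67 ml) this gives 50%, consistent in magnitude with the reported mean EF of 55 ± 21% (means of ratios and ratios of means differ).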
NASA Astrophysics Data System (ADS)
Omoniyi, Bayonle; Stow, Dorrik
2016-04-01
One of the major challenges in the assessment of, and production from, turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
Glacier volume estimation of Cascade Volcanoes—an analysis and comparison with other methods
Driedger, Carolyn L.; Kennard, P.M.
1986-01-01
During the 1980 eruption of Mount St. Helens, the occurrence of floods and mudflows made apparent a need to assess mudflow hazards on other Cascade volcanoes. A basic requirement for such analysis is information about the volume and distribution of snow and ice on these volcanoes. An analysis was made of the volume-estimation methods developed by previous authors and a volume estimation method was developed for use in the Cascade Range. A radio echo-sounder, carried in a backpack, was used to make point measurements of ice thickness on major glaciers of four Cascade volcanoes (Mount Rainier, Washington; Mount Hood and the Three Sisters, Oregon; and Mount Shasta, California). These data were used to generate ice-thickness maps and bedrock topographic maps for developing and testing volume-estimation methods. Subsequently, the methods were applied to the unmeasured glaciers on those mountains and, as a test of the geographical extent of applicability, to glaciers beyond the Cascades having measured volumes. Two empirical relationships were required in order to predict volumes for all the glaciers. Generally, for glaciers less than 2.6 km in length, volume was found to be estimated best by using glacier area, raised to a power. For longer glaciers, volume was found to be estimated best by using a power law relationship, including slope and shear stress. The necessary variables can be estimated from topographic maps and aerial photographs.
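Area-based scaling of the kind described for shorter glaciers takes the form of a power law, V = c · Aᵞ. A minimal sketch; the coefficient and exponent below are illustrative placeholders, not Driedger and Kennard's fitted values (their study also uses a separate slope/shear-stress relationship for longer glaciers, not shown here):

```python
def glacier_volume_km3(area_km2, c=0.0285, gamma=1.36):
    """Area-volume power-law scaling: V = c * A**gamma.

    c and gamma are placeholder calibration constants; in practice
    they are fitted to radio echo-sounding thickness measurements.
    """
    return c * area_km2 ** gamma
```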
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique that is commonly used in science and engineering research that requires calibrating measurement tools for obtaining more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
Preoperative TRAM free flap volume estimation for breast reconstruction in lean patients.
Minn, Kyung Won; Hong, Ki Yong; Lee, Sang Woo
2010-04-01
To obtain pleasing symmetry in breast reconstruction with a transverse rectus abdominis myocutaneous (TRAM) free flap, a large amount of abdominal flap is elevated and remnant tissue is trimmed in most cases. However, elevation of an abundant abdominal flap can cause excessive tension in donor-site closure and increase the possibility of hypertrophic scarring, especially in lean patients. The TRAM flap was divided into 4 zones in the routine manner; the depth and dimensions of the 4 zones were obtained using ultrasound and AutoCAD (Autodesk Inc., San Rafael, CA), respectively. The acquired numbers were then multiplied to obtain an estimate of the volume of each zone, and the zone volumes were added. To confirm the relation between the estimated volume and the actual volume, the authors compared intraoperative actual TRAM flap volumes with preoperative estimated volumes in 30 consecutive TRAM free flap breast reconstructions. The estimated volumes and the actual elevated flap volumes were found to be correlated by regression analysis (r = 0.9258, P < 0.01). According to this result, we could confirm the reliability of the preoperative volume estimation using our method. Afterward, the authors applied this method to 7 lean patients by estimating and revising the design and obtained symmetric results with minimal donor-site morbidity. Preoperative estimation of TRAM flap volume with ultrasound and AutoCAD (Autodesk Inc.) allows the authors to attain the precise volume desired for elevation. This method provides advantages in terms of minimal flap trimming, easier closure of donor sites, reduced scar widening and symmetry, especially in lean patients.
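The zone-wise estimate described above reduces to summing area × mean depth over the four zones. A minimal sketch of that arithmetic (the area-from-AutoCAD and depth-from-ultrasound measurement steps are assumed inputs):

```python
def tram_flap_volume_cm3(zones):
    """Preoperative flap volume estimate.

    zones: iterable of (area_cm2, mean_depth_cm) pairs, one per
    TRAM zone (four zones in the paper's routine division).
    Volume of each zone ~= traced area * ultrasound mean depth.
    """
    return sum(area * depth for area, depth in zones)
```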
System Development of Estimated Figures of Volume Production Plan
ERIC Educational Resources Information Center
Brazhnikov, Maksim A.; Khorina, Irina V.; Minina, Yulia I.; Kolyasnikova, Lyudmila V.; Streltsov, Aleksey V.
2016-01-01
The relevance of this problem is primarily determined by a necessity of improving production efficiency in conditions of innovative development of the economy and implementation of Import Substitution Program. The purpose of the article is development of set of criteria and procedures for the comparative assessment of alternative volume production…
Estimating Lost Volumes in a University Library Collection
ERIC Educational Resources Information Center
Niland, Powell; Kurth, William
1976-01-01
This study employed standard sampling theory to make a study of library book losses, but unlike previously reported studies, the investigators instituted periodic searches for volumes missing after the original search. Over a period of two years and nine months, the original figures were cut by more than 60 percent. (Author)
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). This program, LFILTER, uses the same U-D factorized form of the Kalman filter as the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used in this application for dynamics and measurements, the program description, and program operation are presented.
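PFILTER and LFILTER use a U-D factorized Kalman filter; the underlying recursion is easiest to see in the plain scalar form below (not the U-D factorization, and the numbers are illustrative, not shuttle data). A constant parameter is estimated from noisy measurements:

```python
import random

# Scalar Kalman filter estimating a constant from noisy measurements.
random.seed(1)
truth = 4.2                 # constant being estimated (hypothetical)
x, P = 0.0, 100.0           # initial state estimate and its variance
R = 1.0                     # measurement noise variance

for _ in range(200):
    z = truth + random.gauss(0.0, R**0.5)   # noisy measurement
    K = P / (P + R)                         # Kalman gain
    x = x + K * (z - x)                     # state update
    P = (1.0 - K) * P                       # variance update

# x converges near 4.2; P shrinks to about R/200, giving the 1-sigma bound
```

The U-D form propagates `P` as a factorization `U·D·Uᵀ` for numerical robustness, but the gain and update logic are the same.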
NASA Astrophysics Data System (ADS)
Iyatomi, Hitoshi; Hashimoto, Jun; Yoshii, Fumuhito; Kazama, Toshiki; Kawada, Shuichi; Imai, Yutaka
2014-03-01
Discrimination between Alzheimer's disease and other dementias is clinically important but often difficult. In this study, we developed classification models among Alzheimer's disease (AD), other dementia (OD), and/or normal subjects (NC) using patient factors and indices obtained by brain perfusion SPECT. SPECT is commonly used to assess cerebral blood flow (CBF) and allows the evaluation of the severity of hypoperfusion by introducing statistical parametric mapping (SPM). We investigated a total of 150 cases (50 cases each for AD, OD, and NC) from Tokai University Hospital, Japan. In each case, we obtained a total of 127 candidate parameters from: (A) 2 patient factors (age and sex), (B) 12 CBF parameters, and 113 SPM parameters including (C) 3 from specific volume analysis (SVA) and (D) 110 from voxel-based stereotactic extraction estimation (vbSEE). We built linear classifiers with statistical stepwise feature selection and evaluated performance with the leave-one-out cross-validation strategy. Our classifiers achieved very high classification performance with a reasonable number of selected parameters. In the most clinically significant discrimination, that of AD from OD, our classifier achieved both a sensitivity (SE) and a specificity (SP) of 96%. Similarly, our classifiers achieved an SE of 90% and an SP of 98% in discriminating AD from NC, and an SE of 88% and an SP of 86% in discriminating AD from OD and NC cases. Introducing SPM indices such as SVA and vbSEE improved classification performance by around 7-15%. We confirmed that these SPM factors are quite important for diagnosing Alzheimer's disease.
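The leave-one-out evaluation strategy used above is generic: each case is classified by a model trained on all the others. A minimal sketch with a nearest-centroid classifier on a toy, well-separated 1-D dataset (the study itself used linear classifiers with stepwise feature selection on 127 candidate parameters):

```python
# toy 1-D dataset: two well-separated classes (hypothetical values)
data = [(1.0, "A"), (1.2, "A"), (0.8, "A"),
        (5.0, "B"), (5.3, "B"), (4.7, "B")]

def predict(train, x):
    # nearest-centroid rule: assign x to the class with the closest mean
    cents = {}
    for v, c in train:
        cents.setdefault(c, []).append(v)
    return min(cents, key=lambda c: abs(x - sum(cents[c]) / len(cents[c])))

correct = 0
for i, (x, label) in enumerate(data):
    train = data[:i] + data[i+1:]        # leave one sample out
    correct += (predict(train, x) == label)

accuracy = correct / len(data)           # 1.0 on this separable toy set
```

Sensitivity and specificity follow by tallying the held-out predictions per class instead of overall accuracy.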
NASA Astrophysics Data System (ADS)
Simon, M.; Bobskill, M. R.; Wilhite, A.
2012-11-01
Habitable volume is an important spacecraft design figure of merit used to determine the required size of crewed space vehicles, or habitats. In order to design habitats for future missions and properly compare the habitable volumes of future habitat designs with historical spacecraft, consistent methods are required both for defining the required amount of habitable volume and for estimating the habitable volume of a given layout. This paper provides a brief summary of historical habitable volume requirements and describes the appropriate application of requirements to various types of missions, particularly highlighting their application to various gravity environments. The proposed "Marching Grid Method", a structured, automatic numerical method to calculate habitable volume for a given habitat design, is then described in detail. This method applies a set of geometric Boolean tests to a discrete set of points within the pressurized volume to numerically estimate the functionally usable and accessible space that comprises the habitable volume. The application of this method to zero-gravity and nonzero-gravity environments is also discussed. The method is then demonstrated by calculating habitable volumes for two conceptual-level habitat layouts, one for each type of gravity environment: the US Laboratory Module on ISS and the Scenario 12.0 Pressurized Core Module from the recent NASA Lunar Surface Systems studies. Results include a description of the effectiveness of the method at various grid resolutions, and commentary on its use for automatically evaluating the overall utility of interior configurations.
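The grid-based idea can be sketched as: discretize the pressurized volume, test each grid cell against exclusion geometry (equipment racks, structure), and sum the cells that pass. The module dimensions, rack placement, and grid step below are hypothetical, and a real implementation would test many more Boolean conditions (reachability, clearances):

```python
# Marching-grid sketch: count cell centres inside a box-shaped module
# but outside axis-aligned equipment racks; multiply by the cell volume.
def habitable_volume(dims, racks, step=0.1):
    nx, ny, nz = (round(d / step) for d in dims)
    cell = step**3
    count = 0
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                p = ((i + 0.5)*step, (j + 0.5)*step, (k + 0.5)*step)
                blocked = any(all(lo <= c < hi for c, (lo, hi) in zip(p, r))
                              for r in racks)
                if not blocked:
                    count += 1
    return count * cell

vol = habitable_volume(dims=(4.0, 3.0, 2.5),
                       racks=[((0.0, 1.0), (0.0, 3.0), (0.0, 2.5))])
# 30 m³ box minus a 7.5 m³ rack → 22.5 m³
```

Refining `step` trades runtime for accuracy, which is the grid-resolution effect the paper reports on.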
A method of estimating flood volumes in western Kansas
Perry, C.A.
1984-01-01
Relationships between flood volume and peak discharge in western Kansas were developed considering basin and climatic characteristics in order to evaluate the availability of surface water in the area. Multiple-regression analyses revealed a relationship between flood volume, peak discharge, channel slope, and storm duration for basins smaller than 1,503 square miles. The equation VOL = 0.536 PEAK^1.71 SLOPE^-0.85 DUR^0.24 had a correlation coefficient of R=0.94 and a standard error of 0.33 log units (-53 and +113 percent). A better relationship for basins smaller than 228 square miles resulted in the equation VOL = 0.483 PEAK^0.98 SLOPE^-0.74 AREA^0.30, which had a correlation coefficient of R=0.90 and a standard error of 0.23 log units (-41 and +70 percent). (USGS)
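The first regression can be applied directly as written; a sketch in Python with the exponents from the abstract (units follow the original USGS study, and the input values below are only examples):

```python
# Flood-volume regression for western Kansas basins < 1,503 mi²
# (coefficients and exponents from the abstract).
def flood_volume(peak, slope, duration):
    return 0.536 * peak**1.71 * slope**-0.85 * duration**0.24

v1 = flood_volume(peak=1000.0, slope=0.002, duration=12.0)
v2 = flood_volume(peak=2000.0, slope=0.002, duration=12.0)
# volume grows faster than linearly with peak discharge (exponent 1.71),
# so doubling the peak multiplies the volume by 2^1.71 ≈ 3.27
```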
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF Concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. This program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBS's. Input consists of a list of similar components for which cost data exist, the number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost in labor hours and material dollars for each component, broken down by generic WBS task and program schedule phase.
Negroni, Jorge A; Lascano, Elena C; Bertolotti, Alejandro M; Gómez, Carmen B; Rodríguez Correa, Carlos A; Favaloro, Roberto R
2010-01-01
Use of structural variables (age, sex, height) to estimate oxygen consumption in the calculation of cardiac output (CO) by the Fick principle does not account for changes in physiological conditions. To address this limitation, oxygen consumption was estimated from the left ventricular pressure-volume area. A pilot study of 10 patients undergoing right cardiac catheterization showed that this approach successfully estimated CO (r=0.73 vs. thermodilution-measured CO). Further analyses replacing end-diastolic volume in the pressure-volume area formula with body weight or body surface area showed that the latter yielded the best correlation with thermodilution-measured CO (slope = 1, intercept = 0.01, r = 0.93). These preliminary results indicate that a formula derived from the pressure-volume-area concept is a good alternative for estimating oxygen consumption in CO calculation.
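The Fick-principle calculation into which the estimated oxygen consumption feeds is standard: cardiac output equals oxygen consumption divided by the arteriovenous oxygen content difference. The values below are textbook-typical, illustrative numbers, not patient data from the study:

```python
# Fick principle: CO (L/min) = VO2 / arteriovenous O2 content difference.
def fick_cardiac_output(vo2_ml_min, cao2_ml_dl, cvo2_ml_dl):
    avdo2_ml_l = (cao2_ml_dl - cvo2_ml_dl) * 10.0   # mL O2 per litre of blood
    return vo2_ml_min / avdo2_ml_l                  # litres of blood per minute

co = fick_cardiac_output(vo2_ml_min=250.0, cao2_ml_dl=20.0, cvo2_ml_dl=15.0)
# → 5.0 L/min, a typical resting cardiac output
```

The paper's contribution is supplying `vo2_ml_min` from the pressure-volume area rather than from age/sex/height tables.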
Estimating the rock volume bias in paleobiodiversity studies.
Crampton, James S; Beu, Alan G; Cooper, Roger A; Jones, Craig M; Marshall, Bruce; Maxwell, Phillip A
2003-07-18
To interpret changes in biodiversity through geological time, it is necessary first to correct for biases in sampling effort related to variations in the exposure of rocks and recovery of fossils with age. Data from New Zealand indicate that outcrop area is likely to be a reliable proxy of rock volume in both stable cratonic regions, where the paleobiodiversity record is strongly correlated with relative sea level, and on tectonically active margins. In contrast, another potential proxy, the number of rock formations, is a poor predictor of outcrop area or sampling effort in the New Zealand case.
Lunar Architecture Team - Phase 2 Habitat Volume Estimation: "Caution When Using Analogs"
NASA Technical Reports Server (NTRS)
Rudisill, Marianne; Howard, Robert; Griffin, Brand; Green, Jennifer; Toups, Larry; Kennedy, Kriss
2008-01-01
The lunar surface habitat will serve as the astronauts' home on the moon, providing a pressurized facility for all crew living functions and serving as the primary location for a number of crew work functions. Adequate volume is required for each of these functions in addition to that devoted to housing the habitat systems and crew consumables. The time constraints of the LAT-2 schedule precluded the Habitation Team from conducting a complete "bottom-up" design of a lunar surface habitation system from which to derive true volumetric requirements. The objective of this analysis was to quickly derive an estimated total pressurized volume and pressurized net habitable volume per crewmember for a lunar surface habitat, using a principled, methodical approach in the absence of a detailed design. Five "heuristic methods" were used: historical spacecraft volumes, human/spacecraft integration standards and design guidance, Earth-based analogs, parametric "sizing" tools, and conceptual point designs. Estimates for total pressurized volume, total habitable volume, and volume per crewmember were derived using these methods. All methods were found to provide some basis for volume estimates, but values were highly variable across a wide range, with no obvious convergence. Best current assumptions for required crew volume were provided as a range. Results of these analyses and future work are discussed.
NASA Astrophysics Data System (ADS)
Katata, Genki; Kajino, Mizuo; Hiraki, Takatoshi; Aikawa, Masahide; Kobayashi, Tomiki; Nagai, Haruyasu
2011-10-01
To apply a meteorological model to investigate fog occurrence, acidification, and deposition in mountain forests, the WRF meteorological model was modified to calculate fog deposition accurately, using a simple linear function for fog deposition onto vegetation derived from numerical experiments with the detailed multilayer atmosphere-vegetation-soil model (SOLVEG). The modified version of WRF that includes fog deposition (fog-WRF) was tested in a mountain forest on Mt. Rokko in Japan. The fog-WRF model provided a distinctly better prediction of the liquid water content of fog (LWC) than the original version of WRF. It also successfully simulated throughfall observations due to fog deposition inside the forest during the summer season, excluding the effect of forest edges. Using the linear relationship between fog deposition and altitude given by the fog-WRF calculations, together with throughfall observations at a given altitude, the vertical distribution of fog deposition can be roughly estimated in mountain forests. A meteorological model that includes fog deposition will be useful in mapping fog deposition in mountain cloud forests.
NASA Astrophysics Data System (ADS)
Kassinopoulos, Michalis; Pitris, Costas
2016-03-01
The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD), was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width offer significant advantages over other spectral analysis approaches, especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
Budget estimates: Fiscal year 1994. Volume 2: Construction of facilities
NASA Technical Reports Server (NTRS)
1994-01-01
The Construction of Facilities (CoF) appropriation provides contractual services for the repair, rehabilitation, and modification of existing facilities; the construction of new facilities and the acquisition of related collateral equipment; the acquisition or condemnation of real property; environmental compliance and restoration activities; the design of facilities projects; and advanced planning related to future facilities needs. Fiscal year 1994 budget estimates are broken down according to facility location of project and by purpose.
1998-02-01
This volume contains information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the six options described in Volume 1, Section 2: Option 1, total removal clean closure with no subsequent use; Option 2, risk-based clean closure with LLW fill; Option 3, risk-based clean closure with CERCLA fill; Option 4, closure to RCRA landfill standards with LLW fill; Option 5, closure to RCRA landfill standards with CERCLA fill; and Option 6, closure to RCRA landfill standards with clean fill. This volume is divided into two portions. The first portion contains the cost and planning schedule estimates, while the second portion contains life-cycle costs and yearly cash flow information for each option.
NASA Astrophysics Data System (ADS)
Shin, Dong-Youn; Kim, Minsung
2017-02-01
Despite the inherent fabrication simplicity of piezo drop-on-demand inkjet printing, the non-uniform deposition of colourants or electroluminescent organic materials leads to faulty display products, and hence, the importance of rapid jetting status inspection and accurate droplet volume measurement increases from a process perspective. In this work, various jetting status inspections and droplet volume measurement methods are reviewed by discussing their advantages and disadvantages, and then, the opportunities for the developed prototype with a scanning mirror are explored. This work demonstrates that jetting status inspection of 384 fictitious droplets can be performed within 17 s with maximum and minimum measurement accuracies of 0.2 ± 0.5 μm for the fictitious droplets of 50 μm in diameter and -1.2 ± 0.3 μm for the fictitious droplets of 30 μm in diameter, respectively. In addition to the new design of an inkjet monitoring instrument with a scanning mirror, two novel methods to accurately measure the droplet volume by amplifying a minute droplet volume difference and then converting to other physical properties are suggested and the droplet volume difference of ±0.3% is demonstrated to be discernible using numerical simulations, even with the low measurement accuracy of 1 μm. When the fact is considered that the conventional vision-based method with a CCD camera requires the optical measurement accuracy less than 25 nm to measure the volume of an in-flight droplet in the nominal diameter of 50 μm at the same volume measurement accuracy, the suggested method with the developed prototype offers a whole new opportunity to inkjet printing for display applications.
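The stringent 25 nm figure follows from the cubic scaling of a sphere's volume with its diameter: ΔV/V ≈ 3·ΔD/D for small changes. A quick check with the abstract's numbers, assuming the quoted accuracy refers to locating each droplet edge (so half the diameter tolerance):

```python
# For a sphere V ∝ D³, so ΔV/V ≈ 3·ΔD/D for small relative changes.
# Resolving a ±0.3% volume change in a 50 μm droplet therefore needs
# diameter accuracy of D·(ΔV/V)/3.
def required_diameter_accuracy_nm(diameter_um, dv_over_v):
    return diameter_um * 1000.0 * dv_over_v / 3.0   # result in nanometres

dd = required_diameter_accuracy_nm(diameter_um=50.0, dv_over_v=0.003)
# dd = 50.0 nm of diameter accuracy, i.e. ~25 nm per droplet edge
```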
Estimation of adipose compartment volumes in CT images of a mastectomy specimen
NASA Astrophysics Data System (ADS)
Imran, Abdullah-Al-Zubaer; Pokrajac, David D.; Maidment, Andrew D. A.; Bakic, Predrag R.
2016-03-01
Anthropomorphic software breast phantoms have been utilized for preclinical quantitative validation of breast imaging systems. Efficacy of the simulation-based validation depends on the realism of phantom images. Anatomical measurements of the breast tissue, such as the size and distribution of adipose compartments or the thickness of Cooper's ligaments, are essential for the realistic simulation of breast anatomy. Such measurements are, however, not readily available in the literature. In this study, we assessed the statistics of adipose compartments as visualized in CT images of a total mastectomy specimen. The specimen was preserved in formalin and imaged using a standard body CT protocol and high X-ray dose. A human operator manually segmented adipose compartments in reconstructed CT images using the ITK-SNAP software and calculated the volume of each compartment. In addition, the time needed for the manual segmentation and the operator's confidence were recorded. The average volume, standard deviation, and the probability distribution of compartment volumes were estimated from 205 segmented adipose compartments. We also estimated the potential correlation between the segmentation time, operator's confidence, and compartment volume. The statistical tests indicated that the estimated compartment volumes do not follow the normal distribution. The compartment volumes were found to be correlated with the segmentation time; no significant correlation was found between the volume and the operator's confidence. The study is limited by the positioning of the mastectomy specimen. The analysis of compartment volumes will better inform the development of more realistic breast anatomy simulation.
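The volume-vs-segmentation-time correlation the study reports is a plain Pearson correlation; a stdlib sketch on synthetic (not the study's) measurements:

```python
from statistics import mean

# Pearson correlation between compartment volume and segmentation time.
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx)**2 for x in xs) * sum((y - my)**2 for y in ys)) ** 0.5
    return num / den

volumes = [0.5, 1.1, 2.3, 3.0, 4.8, 6.2]          # cm³, hypothetical
seg_time = [30.0, 41.0, 55.0, 62.0, 88.0, 101.0]  # seconds, hypothetical

r = pearson(volumes, seg_time)
# r close to 1 here: larger compartments take longer to segment
```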
Elci, Hakan; Turk, Necdet
2014-01-01
Block volumes are generally estimated by analyzing the discontinuity spacing measurements obtained either from scan lines placed over rock exposures or from borehole cores. Discontinuity spacing measurements made at the Mesozoic limestone quarries in Karaburun Peninsula were used to estimate the average block volumes that could be produced from them using the methods suggested in the literature. The Block Quality Designation (BQD) ratio method proposed by the authors was found to yield rock block volumes of the same order as the volumetric joint count (Jv) method. Moreover, the dimensions of the 2378 blocks produced between 2009 and 2011 in the working quarries were recorded. Assuming that each block surface is a discontinuity, the mean block volume (Vb), the mean volumetric joint count (Jvb), and the mean block shape factor of the blocks were determined and compared with the mean in situ block volumes (Vin) and volumetric joint count (Jvi) values estimated from the in situ discontinuity measurements. The established relations are presented as a chart to be used in practice for estimating the mean volume of blocks that can be obtained from a quarry site by analyzing the rock mass discontinuity spacing measurements.
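A commonly cited empirical relation (after Palmström) connecting the two quantities compared above is Vb ≈ β·Jv⁻³, where β is the block shape factor (around 27 for roughly equidimensional blocks, larger for elongated ones). This is a generic sketch of that relation, not the BQD method the paper proposes, and the numbers are illustrative:

```python
# Empirical block volume from the volumetric joint count: Vb ≈ β · Jv⁻³,
# with β the block shape factor (β ≈ 27-36 for blocky rock masses).
def block_volume_m3(jv_per_m, beta=36.0):
    return beta * jv_per_m**-3

vb = block_volume_m3(jv_per_m=3.0)   # 36 / 27 ≈ 1.33 m³ per block
```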
NASA Astrophysics Data System (ADS)
Bignami, Christian; Ruch, Joel; Chini, Marco; Neri, Marco; Buongiorno, Maria Fabrizia; Hidayati, Sri; Sayudi, Dewi Sri; Surono
2013-07-01
Pyroclastic density current deposits remobilized by water during periods of heavy rainfall trigger lahars (volcanic mudflows) that affect inhabited areas at considerable distance from volcanoes, even years after an eruption. Here we present an innovative approach to detect and estimate the thickness and volume of pyroclastic density current (PDC) deposits as well as erosional versus depositional environments. We use SAR interferometry to compare an airborne digital surface model (DSM) acquired in 2004 to a post-eruption 2010 DSM created using COSMO-SkyMed satellite data to estimate the volume of 2010 Merapi eruption PDC deposits along the Gendol river (Kali Gendol, KG). Results show PDC thicknesses of up to 75 m in canyons and a volume of about 40 × 10⁶ m³, mainly along KG, and at distances of up to 16 km from the volcano summit. This volume estimate corresponds mainly to the 2010 pyroclastic deposits along the KG - material that is potentially available to produce lahars. Our volume estimate is approximately twice that estimated by field studies, a difference we consider acceptable given the uncertainties involved in both satellite- and field-based methods. Our technique can be used to rapidly evaluate volumes of PDC deposits at active volcanoes, in remote settings and where continuous activity may prevent field observations.
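Once pre- and post-eruption DSMs are co-registered, the deposit volume reduces to summing positive elevation changes times the cell area. A minimal sketch on a synthetic grid (flattened to a list; the values are not Merapi data):

```python
# Deposit volume from DSM differencing: sum positive elevation changes
# (deposition) times the cell area; negative changes indicate erosion.
def deposit_volume(dsm_pre, dsm_post, cell_area_m2):
    return sum(max(post - pre, 0.0)
               for pre, post in zip(dsm_pre, dsm_post)) * cell_area_m2

pre  = [100.0, 102.0, 105.0, 110.0]     # elevations in metres, hypothetical
post = [102.0, 105.0, 105.0, 108.0]     # last cell was eroded by 2 m
vol = deposit_volume(pre, post, cell_area_m2=25.0)   # (2 + 3) * 25 = 125 m³
```

Summing the negative changes separately gives the erosional budget, which is how depositional versus erosional environments are mapped.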
Hand volume estimates based on a geometric algorithm in comparison to water displacement.
Mayrovitz, H N; Sims, N; Hill, C J; Hernandez, T; Greenshner, A; Diep, H
2006-06-01
Assessing changes in upper extremity limb volume during lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although arm volumes may be determined by tape measure, the suitability of circumference measurements to estimate hand volumes is questionable because of the deviation in circularity of hand shape. Our aim was to develop an alternative measurement procedure and algorithm for routine use to estimate hand volumes. A caliper was used to measure hand width and depth in 33 subjects (66 hands) and volumes (VE) were calculated using an elliptical frustum model. Using regression analysis and limits of agreement (LOA), VE was compared to volumes determined by water displacement (VW), to volumes calculated from tape-measure determined circumferences (VC), and to a trapezoidal model (VT). VW and VE (mean +/- SD) were similar (363 +/- 98 vs. 362 +/- 100 ml) and highly correlated; VE = 1.01 VW - 3.1 ml, r=0.986, p<0.001, with LOA of +/- 33.5 ml and +/- 9.9 %. In contrast, VC (480 +/- 138 ml) and VT (432 +/- 122 ml) significantly overestimated volume (p<0.0001). These results indicate that the elliptical algorithm can be a useful alternative to water displacement when hand volumes are needed and the water displacement method is contra-indicated, impractical to implement, too time consuming or not available.
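The elliptical frustum model treats each segment between caliper stations as a frustum whose cross-section is an ellipse of area π·w·d/4. A sketch of that geometry (the station spacing and the number of stations here are hypothetical, not the paper's protocol):

```python
import math

# Frustum volume between two parallel cross-sections of areas a1, a2
# separated by height h: V = h/3 · (a1 + a2 + sqrt(a1·a2)).
def frustum_volume(a1, a2, h):
    return h / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))

def hand_volume(widths_cm, depths_cm, spacing_cm):
    # elliptical cross-section area at each caliper station: π·w·d/4
    areas = [math.pi * w * d / 4.0 for w, d in zip(widths_cm, depths_cm)]
    return sum(frustum_volume(a1, a2, spacing_cm)
               for a1, a2 in zip(areas, areas[1:]))

# sanity check: a constant circular cross-section of radius 1 cm over 10 cm
v = hand_volume(widths_cm=[2.0]*3, depths_cm=[2.0]*3, spacing_cm=5.0)
# reduces to a cylinder: π·1²·10 ≈ 31.4 cm³
```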
Estimation of tephra volumes from sparse and incompletely observed deposit thicknesses
NASA Astrophysics Data System (ADS)
Green, Rebecca M.; Bebbington, Mark S.; Jones, Geoff; Cronin, Shane J.; Turner, Michael B.
2016-04-01
We present a Bayesian statistical approach to estimate volumes for a series of eruptions from an assemblage of sparse proximal and distal tephra (volcanic ash) deposits. Most volume estimates are of widespread tephra deposits from large events using isopach maps constructed from observations at exposed locations. Instead, we incorporate raw thickness measurements, focussing on tephra thickness data from cores extracted from lake sediments and through swamp deposits. This facilitates investigation into the dispersal pattern and volume of tephra from much smaller eruption events. Given the general scarcity of data and the physical phenomena governing tephra thickness attenuation, a hybrid Bayesian-empirical tephra attenuation model is required. Point thickness observations are modeled as a function of the distance and angular direction of each location. The dispersal of tephra from larger well-estimated eruptions are used as leverage for understanding the smaller unknown events, and uncertainty in thickness measurements can be properly accounted for. The model estimates the wind and site-specific effects on the tephra deposits in addition to volumes. Our technique is exemplified on a series of tephra deposits from Mt Taranaki (New Zealand). The resulting estimates provide a comprehensive record suitable for supporting hazard models. Posterior mean volume estimates range from 0.02 to 0.26 km³. Preliminary examination of the results suggests a size-predictable relationship.
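A much-simplified, isotropic version of thickness attenuation illustrates how point thicknesses yield a volume: fit ln(thickness) against distance, then integrate T(r) = T0·exp(-r/λ) over the plane, giving V = 2π·T0·λ². The paper's Bayesian model additionally handles direction, wind, and site effects; the data below are exact synthetic values:

```python
import math

# Fit ln(thickness) vs. distance by least squares, then integrate the
# isotropic exponential thinning model T(r) = T0·exp(-r/λ) over the plane.
def fit_exponential_thinning(dists_km, thick_m):
    n = len(dists_km)
    xs, ys = dists_km, [math.log(t) for t in thick_m]
    xbar, ybar = sum(xs)/n, sum(ys)/n
    slope = (sum((x - xbar)*(y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar)**2 for x in xs))
    t0 = math.exp(ybar - slope * xbar)   # thickness at the vent, metres
    lam = -1.0 / slope                   # e-folding distance, km
    return t0, lam

# synthetic observations generated from T0 = 1 m, λ = 10 km
d = [2.0, 5.0, 10.0, 20.0, 40.0]
t = [math.exp(-x / 10.0) for x in d]
t0, lam = fit_exponential_thinning(d, t)
volume_km3 = 2 * math.pi * (t0 / 1000.0) * lam**2   # thickness converted to km
```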
Budget estimates: Fiscal year 1994. Volume 1: Agency summary
NASA Technical Reports Server (NTRS)
1994-01-01
The NASA FY 1994 budget request of $15,265 million concentrates on (1) investing in the development of new technologies including a particularly aggressive program in aeronautical technology to improve the competitive position of the United States, through shared involvement with industry and other government agencies; (2) continuing the nation's premier program of space exploration, to expand our knowledge of the solar system and the universe as well as the earth; and (3) providing safe and assured access to space using both the space shuttle and expendable launch vehicles. Budget estimates are presented for (1) research and development, including space station, space transportation capability development, space science and applications programs, space science, life and microgravity sciences and applications, mission to planet earth, space research and technology, commercial programs, aeronautics technology programs, safety and mission quality, academic programs, and tracking and data advanced systems; and (2) space operations, including space transportation programs, launch services, and space communications.
Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism
Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.
2016-01-01
Previous studies applying automatic preprocessing methods on Structural Magnetic Resonance Imaging (sMRI) report inconsistent neuroanatomical abnormalities in Autism Spectrum Disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset were estimated using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except TIVSPM vs. TIVFS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences were opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, sub-cortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the
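Inter-method bias of the kind described above is commonly quantified Bland-Altman style: the mean of the pairwise differences gives the bias, and ±1.96 SD gives the 95% limits of agreement. A stdlib sketch on synthetic TIV values (the study compared SPM, FSL, and FreeSurfer on real ABIDE data):

```python
from statistics import mean, stdev

# Bland-Altman-style agreement between two methods' volume estimates.
def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # bias, 95% limits

method_a = [1510.0, 1620.0, 1455.0, 1580.0, 1500.0]    # TIV cm³, hypothetical
method_b = [1495.0, 1601.0, 1442.0, 1569.0, 1480.0]
bias, loa = bland_altman(method_a, method_b)
# a positive bias means the first method systematically reports larger volumes
```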
Vegetation cover and volume estimates in semi-arid rangelands using LiDAR and hyperspectral data
NASA Astrophysics Data System (ADS)
Spaete, L.; Mitchell, J.; Glenn, N. F.; Shrestha, R.; Sankey, T. T.; Murgoitio, J.; Gould, S.; Leedy, T.; Hardegree, S. P.; Boise Center Aerospace Laboratory
2011-12-01
Sagebrush covers 1.1 × 10⁶ km² of North American rangelands and is an important cover type for many species. Like most vegetation, sagebrush cover and height vary across the landscape. Accurately mapping this variation is important for certain species, such as the greater sage-grouse, for which sagebrush percent cover, visual cover, and height are important characteristics for habitat selection. Cover and height are also important factors when trying to estimate rangeland biomass, which is an indicator of forage potential, species dominance, and hydrologic function. Several studies have investigated the ability of remote sensing to accurately map vegetation cover, height, and volume using a variety of remote sensing technologies. However, no known studies have used a combined spectral and spatial approach for integrative mapping of these characteristics. We demonstrate the ability of terrestrial laser scanning (TLS), airborne Light Detection and Ranging (LiDAR), hyperspectral imagery, and Object Based Image Analysis (OBIA) to accurately estimate sagebrush cover, height, and biomass metrics for semi-arid rangeland environments.
Estimates of the Volume of Snowpack Sublimation in Arizona's Salt River Watershed
NASA Astrophysics Data System (ADS)
Svoma, B. M.
2012-12-01
The liquid equivalent volumes of snowpack sublimation, melt, and snowfall over the Salt River watershed, a major source of water for the Phoenix metropolitan area, will be estimated using the National Operational Hydrologic Remote Sensing Center's Snow Data Assimilation System (SNODAS) for the nine water years on record (i.e., 2004-2012). SNODAS integrates data from satellites, aircraft, and ground stations with downscaled output from numerical weather prediction models and an energy/mass balance snowpack model. The SNODAS dataset contains daily values of sublimation, snow water equivalent, snowfall, and melt, among other variables, at high (< 1 km²) resolution, providing the opportunity to accurately estimate the volumes of snowpack balance variables for regions with complex topography. Snowpack ablation consists of sublimation and melting. Snow particles at sub-freezing temperatures will sublimate rather than melt if surrounded by air that is below the equilibrium water vapor pressure with respect to ice. When sublimation occurs, there is a direct loss of water from the given drainage basin as the vapor is carried away by the prevailing atmospheric flow. Preliminary analyses of water years 2005 (wet El Niño), 2007 (dry El Niño), 2008 (wet La Niña), and 2012 (dry La Niña) suggest that there is a substantial amount of sublimation over the Salt River watershed. From October 1 to April 30, approximately 16 percent of snowfall sublimated during the four years, ranging from approximately 98 million cubic meters (79,884 acre-feet) in water year 2005 to approximately 208 million cubic meters (168,726 acre-feet) in water year 2012. Sublimation is most prevalent at the highest elevations of the watershed, with more than 30 percent of snowfall sublimating at elevations above 2,744 meters above sea level. Of the four years analyzed, the sublimation-to-snowfall ratio was highest for the two water years with anomalously high precipitation (i.e., 2005 and 2008). This
Advanced Composite Cost Estimating Manual. Volume II. Appendix
1976-08-01
Rapid estimate of solid volume in large tuff cores using a gas pycnometer
Thies, C.; Geddis, A.M.; Guzman, A.G.
1996-09-01
A thermally insulated, rigid-volume gas pycnometer system has been developed. The pycnometer chambers have been machined from solid PVC cylinders. Two chambers confine dry high-purity helium at different pressures. A thick-walled design ensures minimal heat exchange with the surrounding environment and a constant-volume system while expansion takes place between the chambers. The internal energy of the gas is assumed constant over the expansion. The ideal gas law is used to estimate the volume of solid material sealed in one of the chambers. Temperature is monitored continuously and incorporated into the calculation of solid volume. Temperature variation between measurements is less than 0.1 °C. The data are used to compute grain density for oven-dried Apache Leap tuff core samples. The measured volume of solid and the sample bulk volume are used to estimate porosity and bulk density. Intrinsic permeability was estimated from the porosity and measured pore surface area and is compared to in-situ measurements by the air permeability method. The gas pycnometer accommodates large core samples (0.25 m length × 0.11 m diameter) and can measure solid volumes greater than 2.20 cm³ with less than 1% error.
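The solid-volume calculation from the ideal gas law can be sketched as follows, assuming an isothermal mole balance over the two chambers before and after they are connected. The variable names and the single measured final pressure are our illustrative assumptions, not the authors' exact formulation:

```python
def solid_volume(v1, v2, p1, p2, pf):
    """Solid volume sealed in chamber 2 from the isothermal ideal-gas balance
    p1*v1 + p2*(v2 - vs) = pf*(v1 + v2 - vs).

    v1, v2 : empty chamber volumes (cm^3)
    p1, p2 : initial chamber pressures; pf : final pressure after connection.
    """
    return ((p1 - pf) * v1 + (p2 - pf) * v2) / (p2 - pf)

# Forward check: a 100 cm^3 solid in a 500 + 500 cm^3 system at 200/100 kPa
vs_true, v1, v2, p1, p2 = 100.0, 500.0, 500.0, 200.0, 100.0
pf = (p1 * v1 + p2 * (v2 - vs_true)) / (v1 + v2 - vs_true)
print(round(solid_volume(v1, v2, p1, p2, pf), 6))  # recovers 100.0
```

Inverting the balance this way is why the constant-temperature design matters: any heat exchange during the expansion breaks the isothermal assumption behind the formula.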
NASA Astrophysics Data System (ADS)
Liu, Yu; Yu, Xiping
2016-09-01
A coupled phase-field and volume-of-fluid method is developed to study the sensitive behavior of water waves during breaking. The THINC model is employed to solve the volume-of-fluid function over the entire domain, covered by a relatively coarse grid, while the phase-field model based on the Allen-Cahn equation is applied over the fine grid. A special algorithm that takes into account the sharpness of the diffuse interface is introduced to correlate the order parameter obtained on the fine grid with the volume-of-fluid function obtained on the coarse grid. The coupled model is then applied to the study of water waves generated by moving pressures on the free surface. The deformation process of the wave crest during the initial stage of breaking is discussed in detail. It is shown that the free nappe developed at the front side of the wave crest varies significantly with the wave steepness: it is of a plunging type at large wave steepness and of a spilling type at small wave steepness. The numerical results also indicate that breaking occurs later and lasts for a shorter time for waves of smaller steepness, and vice versa. Neglecting the capillary effect leads to wave breaking with a sharper nappe and a more dynamic plunging process. In some cases, surface tension also acts to prevent the formation of a free nappe at the front side of the wave crest.
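The phase-field ingredient above is based on the Allen-Cahn equation. As a minimal, self-contained sketch of such a scheme (the 1-D explicit discretization, grid parameters, and double-well reaction term φ − φ³ are our illustrative assumptions, not the paper's actual fine-grid solver):

```python
# 1-D explicit Allen-Cahn step: phi_t = eps^2 * phi_xx + phi - phi^3,
# with Dirichlet boundaries fixed at the two pure phases -1 and +1.
def allen_cahn(phi, eps, dx, dt, steps):
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, n - 1):
            lap = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / dx**2
            new[i] = phi[i] + dt * (eps**2 * lap + phi[i] - phi[i]**3)
        phi = new
    return phi

# Initial sharp interface between the two phases; the scheme relaxes it
# toward a smooth tanh-like diffuse-interface profile of width ~eps.
phi0 = [-1.0] * 20 + [0.0] + [1.0] * 20
out = allen_cahn(phi0, eps=0.1, dx=0.1, dt=0.001, steps=100)
```

The order parameter stays bounded between the two well minima, which is the property the coupling algorithm exploits when mapping it onto the coarse-grid volume-of-fluid function.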
Jung, Sungwoon; Kim, Jounghwa; Kim, Jeongsoo; Hong, Dahee; Park, Dongjoo
2017-04-01
The objective of this study is to estimate vehicle kilometers traveled (VKT) and on-road emissions from traffic volumes in an urban area. We produced two VKT estimates: one based on registered vehicles and the other based on observed traffic volumes. VKT for registered vehicles was 2.11 times greater than VKT from traffic volumes because the two estimation methods differ; to compare the two values, we therefore defined an inner VKT, the VKT actually driven within the urban area. We also focused on freight modes because they discharge large amounts of air pollutant emissions. The analysis showed that middle and large trucks registered in other regions traveled to the target city to carry freight, as the target city includes many industrial and logistics areas. Freight is transferred through harbors, large logistics centers, or intermediate locations before being moved to the final destination. During this process, most freight is moved by middle and large trucks and trailers rather than small trucks for freight import and export; consequently, the inflow of such trucks from other areas exceeds the activity of locally registered vehicles. Most emissions from diesel trucks had been overestimated in comparison to VKT from observed traffic volumes in the target city. These findings show that VKT based on traffic volume and travel speed on road links is essential for accurately estimating diesel truck emissions in the target city. Our findings support the estimation of the effect of on-road emissions on urban air quality in Korea.
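At its core, the traffic-volume-based VKT is a link-by-link sum over the road network; a generic sketch with invented link data (the study's actual network, vehicle classes, and adjustment factors are not reproduced):

```python
def vkt_from_traffic(links):
    """Link-based VKT: sum over road links of
    (traffic volume, vehicles/day) * (link length, km)."""
    return sum(volume * length_km for volume, length_km in links)

# Three hypothetical urban links: (daily traffic volume, length in km)
links = [(12000, 2.5), (8000, 4.0), (15000, 1.2)]
print(vkt_from_traffic(links))  # ~80000 veh-km/day
```

Combining this with link travel speeds is what allows speed-dependent emission factors to be applied per link, which is the step the study argues is essential for diesel trucks.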
Hu, Tingting; Zhang, Zhen
2016-01-01
Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation provides higher accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plots were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). Moreover, deviations of most cases in the former fell within the range of <10% (71.70% and 84.91%, respectively), whereas deviations of most cases in the latter were in the range of 10-20% and >20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations
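For reference, an ABC/2-style estimator is a single product of three measured diameters; a sketch comparing the traditional coefficient with the derived one reported in the study:

```python
def abc_volume(a, b, c, coef=0.5):
    """ABC/2-family hematoma volume estimate (ml, for inputs in cm):
    volume = coef * A * B * C, where A and B are the largest perpendicular
    diameters on the axial slice and C is the vertical extent.
    coef = 0.5 is the traditional variation; coef = 0.65 is the newly
    derived variation assessed in the study."""
    return coef * a * b * c

# Example hematoma: A = 4 cm, B = 2 cm, C = 3 cm
print(abc_volume(4, 2, 3))        # 12.0 ml (traditional, 0.5)
print(abc_volume(4, 2, 3, 0.65))  # 15.6 ml (derived, 0.65)
```

How A, B, and C are measured (which slice, which weighting of slice counts) is exactly what distinguishes the eight variations compared against planimetry.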
Improved pressure-volume-temperature method for estimation of cryogenic liquid volume
NASA Astrophysics Data System (ADS)
Seo, Mansu; Jeong, Sangkwon; Jung, Young-suk; Kim, Jakyung; Park, Hana
2012-04-01
One of the most important issues for a liquid-propellant rocket is measuring the amount of remaining liquid propellant in a low-gravity environment during a space mission. This paper presents experimental results and an analysis of the pressure-volume-temperature (PVT) method, a gauging method for low-gravity environments. The experiment is conducted using a 7.4 L liquid nitrogen tank at various liquid-fill levels. To maximize the accuracy of the PVT method with minimal hardware, helium injection at a low mass flow rate is applied to maintain a stable temperature profile in the ullage volume. A PVT analysis treating the pressurant and cryogen vapor as a binary mixture is proposed. At high liquid-fill levels of 72-80%, the accuracy of the conventional PVT analysis is within 4.6%. At low fill levels of 27-30%, the gauging error is within 3.4% using the mixture analysis of the PVT method with a suitably low helium injection mass flow rate. It is concluded that a proper helium injection mass flow rate and the PVT analyses are crucial to enhancing the accuracy of the PVT method across various liquid-fill levels.
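The core of PVT gauging is the ideal gas law applied to the ullage. A simplified single-gas sketch (the paper's binary-mixture analysis of helium plus cryogen vapor is not reproduced; all numbers are illustrative):

```python
R = 8.314  # universal gas constant, J/(mol K)

def liquid_volume_pvt(v_tank, n_helium, t_ullage, p_helium):
    """Single-gas PVT gauging sketch: the ullage volume follows from the
    ideal gas law applied to the known amount of injected helium,
    V_ullage = n*R*T/p, and the liquid volume is the remainder of the tank.
    Here only the helium partial pressure is used, for illustration."""
    v_ullage = n_helium * R * t_ullage / p_helium
    return v_tank - v_ullage

# 7.4 L tank, 0.05 mol He at 100 K with a 20 kPa helium partial pressure
v_liq = liquid_volume_pvt(7.4e-3, 0.05, 100.0, 20e3)  # ~5.32e-3 m^3 (~72% fill)
```

The sketch makes the method's sensitivity obvious: any ullage temperature stratification corrupts T and hence the inferred liquid volume, which is why the low-flow helium injection to stabilize the ullage temperature profile matters.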
NASA Astrophysics Data System (ADS)
Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.
2015-12-01
Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to a volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1%) of a typically resolved target pixel (e.g. from Landsat 7 or MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (the Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20% up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3% of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
Volcanic eruption volume flux estimations from very long period infrasound signals
NASA Astrophysics Data System (ADS)
Yamada, Taishi; Aoyama, Hiroshi; Nishimura, Takeshi; Iguchi, Masato; Hendrasto, Muhamad
2017-01-01
We examine very long period infrasonic signals accompanying volcanic eruptions near active vents at Lokon-Empung volcano in Indonesia and at Aso, Kuchinoerabujima, and Kirishima volcanoes in Japan. The excitation of the very long period pulse is associated with an explosion, the emergence of an eruption column, and a pyroclastic density current. We model the excitation of the infrasound pulse, assuming a monopole source, to quantify the volume flux and cumulative volume of erupting material. The infrasound-derived volume flux and cumulative volume can be less than half of the video-derived results. A largely positive correlation can be seen between the infrasound-derived volume flux and the maximum eruption column height. Our result therefore suggests that the analysis of very long period volcanic infrasound pulses can be helpful in estimating the maximum eruption column height.
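For a monopole source, the far-field excess pressure is proportional to the second time derivative of the source volume, p(r,t) = ρ/(4πr)·q̈(t − r/c), so the volume flux q̇ follows from a single time integration of the pressure record. A sketch with invented numbers (rectangle-rule integration; the authors' waveform processing is not reproduced):

```python
import math

def volume_flux(pressure, dt, r, rho=1.2):
    """Cumulative volume-flux history q_dot(t) from the monopole relation
    q_dot(t) = (4*pi*r/rho) * integral of p dt.
    pressure : list of excess-pressure samples (Pa)
    dt       : sample interval (s); r : source-receiver distance (m)
    rho      : air density (kg/m^3)."""
    scale = 4.0 * math.pi * r / rho
    flux, total = [], 0.0
    for p in pressure:
        total += p * dt
        flux.append(scale * total)
    return flux

# Synthetic 10 Pa pulse lasting 2 s, recorded at r = 1000 m
p = [10.0] * 20                      # 20 samples at dt = 0.1 s
f = volume_flux(p, 0.1, 1000.0)      # final flux ~2.1e5 m^3/s
```

Integrating once more gives the cumulative erupted volume, which is the quantity compared against the video-derived results in the abstract.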
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements; in addition, mass and solid propellant burn depth serve as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, etc. deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations, as required by the estimation algorithms.
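The machinery summarized above is a full extended Kalman filter with a Bryson-Frazier smoother; as a toy illustration of the measurement-update recursion alone, here is a scalar Kalman filter estimating a constant parameter (all settings invented, not the Shuttle formulation):

```python
def kalman_constant(measurements, r_meas, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for a constant parameter: identity
    dynamics, no process noise, measurement noise variance r_meas.
    x0, p0 are the prior estimate and its variance."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r_meas)     # Kalman gain
        x = x + k * (z - x)      # measurement update of the estimate
        p = (1.0 - k) * p        # variance update (shrinks each step)
    return x

# Fifty identical measurements of a parameter whose true value is 5.0
est = kalman_constant([5.0] * 50, r_meas=0.5)  # converges near 5.0
```

In the real problem the state is nonlinear and twelve-plus dimensional, so the update uses linearized dynamics and measurement models, and the smoother reprocesses the stored filter output backward in time to remove the filtering lag.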
A Novel Calculation to Estimate Blood Volume and Hematocrit During Bypass
Trowbridge, Cody; Stammers, Alfred; Klayman, Myra; Brindisi, Nicholas
2008-01-01
Abstract: Patient blood volume impacts most facets of perfusion care, including volume management, transfusion practices, and pharmacologic interventions. Unfortunately, there is wide variability in individual blood volumes, and experimental measurement is not practical in the clinical environment. The purpose of this study was to evaluate a mathematical algorithm for estimating individual blood volume. After institutional review board approval, volumetric and transfusion data were prospectively collected for 165 patients and applied to a series of calculations. The resultant blood volume estimate (BVE) was used to predict the first and last bypass hematocrit. The estimated hematocrits using both the BVE and 65 mL/kg were compared with measured hematocrits using the Pearson moment correlation coefficient and the Bland-Altman measures of accuracy and precision. There was a wide range of BVE (minimum, 35 mL/kg; mean ± SD, 64 ± 22 mL/kg; maximum, 129 mL/kg). Using the BVE, the estimated hematocrit was similar to the measured first (24.7 ± 6.4% vs. 24.5 ± 6.2%, r = 0.9884, p > .05) and last (24.5 ± 5.9% vs. 25.1 ± 5.7%, r = 0.9001, p > .05) bypass hematocrit. Using 65 mL/kg resulted in a larger difference between estimated and measured hematocrits for the first (25.6 ± 4.5% vs. 24.5 ± 6.2%, r = 0.6885, p = .030) and last (23.8 ± 3.6% vs. 25.1 ± 5.7%, r = 0.5990, p = .001) bypass hematocrits. Compared with using 65 mL/kg for blood volume, the BVE allowed for a more precise estimated hematocrit during cardiopulmonary bypass (CPB). PMID:18389667
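The underlying dilution arithmetic can be sketched as follows. This is the textbook red-cell-mass conservation under hemodilution, not the study's full calculation series (which also tracks transfusions and other volume shifts):

```python
def bypass_hematocrit(blood_volume_ml, hct_pre, prime_volume_ml):
    """Predicted on-bypass hematocrit assuming red-cell mass is conserved
    while the circuit prime dilutes the circulating volume:
    Hct_bypass = BV * Hct_pre / (BV + prime)."""
    return blood_volume_ml * hct_pre / (blood_volume_ml + prime_volume_ml)

# 70 kg patient at 64 mL/kg (the study's mean BVE), Hct 36%, 1500 mL prime
hct = bypass_hematocrit(70 * 64, 0.36, 1500)  # ~0.27
```

The calculation shows why the blood volume estimate dominates the prediction: with the same prime and pre-bypass hematocrit, a patient at 35 mL/kg dilutes far more than one at 129 mL/kg, the extremes reported in the study.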
Optimal volume Wegner estimate for random magnetic Laplacians on Z²
NASA Astrophysics Data System (ADS)
Hasler, David; Luckett, Daniel
2013-03-01
We consider a two-dimensional magnetic Schrödinger operator on a square lattice with a spatially stationary random magnetic field. We prove a Wegner estimate with optimal volume dependence. The Wegner estimate holds around the spectral edges, and it implies Hölder continuity of the integrated density of states in this region. The proof is based on the Wegner estimate obtained in Erdős and Hasler ["Wegner estimate for random magnetic Laplacians on Z²," Ann. Henri Poincaré 12, 1719-1731 (2012)], 10.1007/s00023-012-0177-9.
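Schematically, a Wegner estimate with optimal volume dependence takes the following form (a sketch with assumed notation: H_Λ^ω the operator restricted to a finite box Λ ⊂ Z², χ_I the spectral projection onto an energy interval I near a spectral edge):

```latex
\mathbb{E}\left[\operatorname{tr}\,\chi_I\!\left(H_{\Lambda}^{\omega}\right)\right]
  \;\le\; C\, s(|I|)\,\lvert\Lambda\rvert
```

The linear growth in |Λ| is what "optimal volume dependence" refers to; when the modulus of continuity satisfies s(ε) ≤ C ε^θ for some θ ∈ (0, 1], dividing by |Λ| and passing to the infinite-volume limit yields Hölder continuity of the integrated density of states.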
Acer, Niyazi; Ilıca, Ahmet Turan; Turgut, Ahmet Tuncay; Ozçelik, Ozlem; Yıldırım, Birdal; Turgut, Mehmet
2012-01-01
The pineal gland is a very important neuroendocrine organ with many physiological functions, such as regulating circadian rhythm. Radiologically, the pineal gland volume is clinically important because it is usually difficult to distinguish small pineal tumors via magnetic resonance imaging (MRI). Although many studies have estimated the pineal gland volume using different techniques, to the best of our knowledge, there has so far been no stereological work done on this subject. The objective of the current paper was to determine the pineal gland volume using stereological methods and the region of interest (ROI) on MRI. In this paper, the pineal gland volumes were calculated in a total of 62 subjects (36 females, 26 males) who were free of any pineal lesions or tumors. The mean ± SD pineal gland volumes of the point-counting, planimetry, and ROI groups were 99.55 ± 51.34, 102.69 ± 40.39, and 104.33 ± 40.45 mm³, respectively. No significant difference was found among the methods of calculating pineal gland volume (P > 0.05). From these results, it can be concluded that each technique is an unbiased, efficient, and reliable method, ideally suitable for in vivo examination of MRI data for pineal gland volume estimation.
Estimation of convective rain volumes utilizing the area-time-integral technique
NASA Technical Reports Server (NTRS)
Johnson, L. Ronald; Smith, Paul L.
1990-01-01
Interest in the possibility of developing useful estimates of convective rainfall with Area-Time Integral (ATI) methods is increasing. The basis of the ATI technique is the observed strong correlation between rainfall volumes and ATI values. This means that rainfall can be estimated by just determining the ATI values, if previous knowledge of the relationship to rain volume is available to calibrate the technique. Examples are provided of the application of the ATI approach to gage, radar, and satellite measurements. For radar data, the degree of transferability in time and among geographical areas is examined. Recent results on transferability of the satellite ATI calculations are presented.
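The ATI computation itself is a single sum over the echo-area time series; a sketch with an invented calibration factor standing in for the regional rain-volume/ATI relationship that, as the abstract notes, must be known in advance:

```python
def rain_volume_ati(areas_km2, dt_h, calib_mm_per_h=2.1):
    """Area-Time-Integral sketch:
    ATI = sum of (rain-echo area) * (time step), in km^2 * h;
    rain volume = calibration factor * ATI.
    The 2.1 mm/h-type factor is a placeholder; it must be fitted to prior
    gage or radar data for the region (the calibration step)."""
    ati = sum(a * dt_h for a in areas_km2)   # km^2 * h
    return calib_mm_per_h * ati * 1e3        # mm * km^2 -> m^3

# Three radar scans of a convective echo, 0.5 h apart
vol = rain_volume_ati([120.0, 200.0, 150.0], 0.5)  # ~4.9e5 m^3
```

The unit conversion works because 1 mm over 1 km² is 10³ m³; the method's appeal is that only the echo area, not the full rain-rate field, needs to be observed once the calibration is in hand.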
Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M; Tozer, Gillian M; Paley, Martyn N J
2014-02-01
Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml, and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of injected substrate, uses flow control to reduce dead volume within the injection cannula, and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step through use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s post-injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump.
An Approach to the Use of Depth Cameras for Weed Volume Estimation.
Andújar, Dionisio; Dorado, José; Fernández-Quintanilla, César; Ribeiro, Angela
2016-06-25
The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.
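The dual height-plus-RGB rule can be caricatured per point as follows; the excess-green index and all thresholds here are our illustrative stand-ins for the paper's actual segmentation:

```python
def classify_point(r, g, b, height_m, crop_min_height=0.6):
    """Toy crop/weed/soil rule: tall points are crop (maize); short points
    are weed only if they look green by the excess-green index
    ExG = (2g - r - b) / (r + g + b), otherwise soil.
    The 0.6 m and 0.1 thresholds are invented for illustration."""
    s = r + g + b
    exg = (2 * g - r - b) / s if s else 0.0
    if height_m >= crop_min_height:
        return "crop"
    return "weed" if exg > 0.1 else "soil"

print(classify_point(60, 140, 50, 0.15))   # green + short  -> "weed"
print(classify_point(120, 110, 90, 0.02))  # dull + short   -> "soil"
```

Summing the heights of the points classified as weed over a ground cell is then one simple way to obtain the volumetric measurements correlated with biomass in the study.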
NASA Technical Reports Server (NTRS)
Dewberry, B.
2000-01-01
Electrical impedance spectrometry involves measurement of the complex resistance of a load at multiple frequencies. With this information in the form of impedance magnitude and phase, or resistance and reactance, the basic structure or function of the load can be estimated. The "load" targeted for measurement and estimation in this study consisted of the water-bearing tissues of the human calf. It was proposed and verified that by measuring the electrical impedance of the human calf and fitting these data to a model of fluid compartments, the lumped-model volumes of the intracellular and extracellular spaces could be estimated. By performing this estimation over time, the volume dynamics during application of stimuli that affect the direction of gravity can be viewed. The resulting data can form a basis for further modeling and verification of cardiovascular and compartmental models of fluid reactions to microgravity, as well as countermeasures to the headward shift of fluid during head-down tilt or spaceflight.
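The lumped two-compartment interpretation of multi-frequency impedance data can be sketched as follows; the low/high-frequency limits and resistor names are the standard textbook model, not necessarily the exact model fitted in this study:

```python
def compartment_resistances(r_low, r_high):
    """Two-compartment sketch: at low frequency the current cannot cross
    cell membranes, so R_low ~ Re (extracellular path alone); at high
    frequency membranes short out and R_high ~ Re*Ri/(Re + Ri) (parallel
    paths). Solving the parallel relation recovers the intracellular
    resistance Ri. Fluid volumes are then inferred from Re and Ri; that
    mapping (which needs body geometry) is not reproduced here."""
    re = r_low
    ri = re * r_high / (re - r_high)
    return re, ri

# If Re = 500 ohm and Ri = 1000 ohm, the high-frequency limit is 1000/3 ohm
re, ri = compartment_resistances(500.0, 1000.0 / 3.0)
```

Tracking Re and Ri over time during head-down tilt is what turns the spectrometry into a monitor of extracellular vs. intracellular fluid dynamics.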
Estimating stem volume and biomass of Pinus koraiensis using LiDAR data.
Kwak, Doo-Ahn; Lee, Woo-Kyun; Cho, Hyun-Kook; Lee, Seung-Ho; Son, Yowhan; Kafatos, Menas; Kim, So-Ra
2010-07-01
The objective of this study was to estimate the stem volume and biomass of individual trees using the crown geometric volume (CGV) extracted from small-footprint light detection and ranging (LiDAR) data. Stem volume and biomass were analyzed for Korean Pine stands (Pinus koraiensis Sieb. et Zucc.) in three classes of tree density: low (240 N/ha), medium (370 N/ha), and high (1,340 N/ha). To delineate individual trees, extended maxima transformation and watershed segmentation, both image-processing methods, were applied as in one of our previous studies. The crown base height (CBH) of each tree was then determined from the LiDAR point cloud data using k-means clustering. Stem volume was estimated from the LiDAR-derived CGV on the basis of the proportional relationship between CGV and stem volume. Low tree-density plots performed best for LiDAR-derived CBH, CGV, and stem volume (R² = 0.67, 0.57, and 0.68, respectively), and accuracy was lowest for high tree-density plots (R² = 0.48, 0.36, and 0.44, respectively); for medium tree-density plots, accuracy was R² = 0.51, 0.52, and 0.62, respectively. Stem biomass was predicted from the stem volume using the wood basic density of coniferous trees (0.48 g/cm³), and above-ground biomass was then estimated from the stem volume using the biomass conversion and expansion factor (BCEF, 1.29) proposed by the Korea Forest Research Institute (KFRI).
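The CBH step can be sketched with a tiny 1-D k-means on return heights. This is an illustrative toy with two synthetic height clusters in pure NumPy, not the authors' implementation, and all heights are made up.

```python
import numpy as np

def crown_base_height(heights, iters=20):
    # 1-D k-means (k=2): separate crown returns from understory/stem returns.
    c = np.array([heights.min(), heights.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(heights[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = heights[labels == k].mean()
    crown = heights[labels == c.argmax()]
    return crown.min()   # lowest return assigned to the crown cluster

rng = np.random.default_rng(0)
low = rng.uniform(0.0, 3.0, 200)      # ground/understory returns (synthetic)
crown = rng.uniform(8.0, 15.0, 400)   # crown returns (synthetic)
cbh = crown_base_height(np.concatenate([low, crown]))
```

With well-separated clusters the lowest point of the upper cluster approximates the crown base, here near 8 m.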
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In the mixed old-growth broadleaved Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that stand volume can be estimated from the variation of tree heights within a plot: the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected in a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) records the height of the first surface on the ground, including terrain features, trees, buildings, etc., providing a topographic model of the earth's surface. DSMs were extracted automatically from aerial UltraCamD images with ground pixel sizes from 1 to 10 m in 1 m steps, and were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM values were calculated. Non-linear regression was used for modeling. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor. The relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared with other approaches for estimating stand volume from passive remote sensing data in mixed broadleaved forests, these results are encouraging. One significant problem with this method occurs when the canopy cover is completely closed: the standard deviation of height is then low while stand volume is high. Future studies could examine forest stratification to address this.
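The non-linear regression step can be sketched as fitting a power law between the plot-level DSM height standard deviation and stand volume. All numbers below are synthetic, and the power-law form is one plausible choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def vol_model(sd, a, b):
    # Power-law form: stand volume grows with height variation in the plot.
    return a * sd ** b

# Hypothetical plots: std. dev. of 5 m DSM pixels vs. stand volume proxy
sd = np.array([2.1, 3.5, 4.0, 5.2, 6.8, 7.5, 8.9, 10.3])
vol = 25.0 * sd ** 1.3    # synthetic "ground truth" following the model

(a, b), _ = curve_fit(vol_model, sd, vol, p0=(10.0, 1.0))
pred = vol_model(sd, a, b)
rmse_pct = 100 * np.sqrt(np.mean((pred - vol) ** 2)) / vol.mean()
```

On real plots the residuals would of course be far larger; the abstract reports a relative RMSE of 49.8 percent.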
NASA Astrophysics Data System (ADS)
Li, Qin; Gavrielides, Marios A.; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas
2015-01-01
Measurements of lung nodule volume with multi-detector computed tomography (MDCT) have been shown to be more accurate and precise than conventional lower-dimensional measurements. Quantifying the size of lesions is potentially more difficult when the object-to-background contrast is low, as with lesions in the liver. Physical phantom and simulation studies are often used to analyze the bias and variance of lesion size estimates because a ground truth or reference standard can be established. It may also be useful to derive theoretical bounds as another way of characterizing lesion-sizing methods. The goal of this work was to study the performance of an MDCT system for a lesion volume estimation task with object-to-background contrast below 50 HU, and to understand the relation among the performances obtained from the phantom study, simulation, and theoretical analysis. We performed both phantom and simulation studies, and analyzed the bias and variance of volume measurements produced by a matched-filter-based estimator. We further corroborated the results with a theoretical analysis of the achievable performance bound, the Cramér-Rao lower bound (CRLB) on the variance of the size estimates. Results showed that estimates of non-attached solid small lesion volumes with object-to-background contrast of 31-46 HU can be accurate and precise, with percent bias below 10.8% and standard deviation of percent error (SPE) below 4.8% in standard-dose scans. These results are consistent with the theoretical (CRLB), computational (simulation), and empirical (phantom) bounds. The difference between the bounds is small (less than 1.9% for SPE), indicating that the theoretical and simulation-based performance bounds can be good surrogates for physical phantom studies.
Seevers, P.M.; Sadowski, F.C.; Lauer, D.T.
1990-01-01
Retrospective satellite image data were evaluated for their ability to demonstrate the influence of center-pivot irrigation development in western Nebraska on spectral change and climate-related factors for the region. Periodic images of an albedo index and a normalized difference vegetation index (NDVI) were generated from calibrated Landsat multispectral scanner (MSS) data and used to monitor spectral changes associated with irrigation development from 1972 through 1986. The albedo index was not useful for monitoring irrigation development. For the NDVI, it was found that proportions of counties in irrigated agriculture, as discriminated by a threshold, were more highly correlated with reported ground estimates of irrigated agriculture than were county mean greenness values. A similar result was achieved when using coarse-resolution Advanced Very High Resolution Radiometer (AVHRR) image data for estimating irrigated agriculture. The NDVI images were used to evaluate a procedure for making areal estimates of actual evapotranspiration (ET) volumes. Estimates of ET volumes for test counties, using reported ground acreages and corresponding standard crop coefficients, were correlated with the estimates of ET volume using crop coefficients scaled to NDVI values and pixel counts of crop areas. These county estimates were made under the assumption that soil water availability was unlimited. For nonirrigated vegetation, this may result in over-estimation of ET volumes. Ground information regarding crop types and acreages is required to derive the NDVI scaling factor. Potential ET, estimated with the Jensen-Haise model, is common to both methods. These results, achieved with both MSS and AVHRR data, show promise for providing climatologically important land surface information for regional and global climate models. © 1990 Kluwer Academic Publishers.
Ogris, Kathrin; Petrovic, Andreas; Scheicher, Sylvia; Sprenger, Hanna; Urschler, Martin; Hassler, Eva Maria; Yen, Kathrin; Scheurer, Eva
2017-03-01
In legal medicine, reliable localization and analysis of hematomas in subcutaneous fatty tissue is required for forensic reconstruction. Due to the absence of ionizing radiation, magnetic resonance imaging (MRI) is particularly suited to examining living persons with forensically relevant injuries. However, there is limited experience regarding MRI signal properties of hemorrhage in soft tissue. The aim of this study was to evaluate MR sequences with respect to their ability to show high contrast between hematomas and subcutaneous fatty tissue as well as to reliably determine the volume of artificial hematomas. Porcine tissue models were prepared by injecting blood into the subcutaneous fatty tissue to create artificial hematomas. MR images were acquired at 3T and four blinded observers conducted manual segmentation of the hematomas. To assess segmentability, the agreement of measured volume with the known volume of injected blood was statistically analyzed. A physically motivated normalization taking into account partial volume effect was applied to the data to ensure comparable results among differently sized hematomas. The inversion recovery sequence exhibited the best segmentability rate, whereas the T1T2w turbo spin echo sequence showed the most accurate results regarding volume estimation. Both sequences led to reproducible volume estimations. This study demonstrates that MRI is a promising forensic tool to assess and visualize even very small amounts of blood in soft tissue. The presented results enable the improvement of protocols for detection and volume determination of hemorrhage in forensically relevant cases and also provide fundamental knowledge for future in-vivo examinations.
Determination of thigh volume in youth with anthropometry and DXA: agreement between estimates.
Coelho-E-Silva, Manuel J; Malina, Robert M; Simões, Filipe; Valente-Dos-Santos, João; Martins, Raul A; Vaz Ronque, Enio R; Petroski, Edio L; Minderico, Claudia; Silva, Analiza M; Baptista, Fátima; Sardinha, Luís B
2013-01-01
This study examined the agreement between estimates of thigh volume (TV) with anthropometry and dual-energy x-ray absorptiometry (DXA) in healthy school children. Participants (n=168, 83 boys and 85 girls) were school children 10.0-13.9 years of age. In addition to body mass, height and sitting height, anthropometric dimensions included those needed to estimate TV using the equation of Jones & Pearson. Total TV was also estimated with DXA. Agreement between protocols was examined using linear least products regression (Deming regressions). Stepwise regression of log-transformed variables identified variables that best predicted TV estimated by DXA. The regression models were then internally validated using the predicted residual sum of squares method. Correlation between estimates of TV was 0.846 (95% CI: 0.796-0.884, Sy·x = 0.152 L). It was possible to obtain an anthropometry-based model to improve the prediction of TVs in youth. The total volume by DXA was best predicted by adding body mass and sum of skinfolds to volume estimated with the equation of Jones & Pearson (R = 0.972; 95% CI: 0.962-0.979; R² = 0.945).
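The Deming (linear least products) regression used for the agreement analysis can be written directly from the summary statistics. A minimal implementation with an error-variance ratio λ (here the default λ = 1) and synthetic data for illustration:

```python
import numpy as np

def deming(x, y, lam=1.0):
    # Deming (errors-in-both-variables) regression, error-variance ratio lam.
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).sum()
    syy = ((y - my) ** 2).sum()
    sxy = ((x - mx) * (y - my)).sum()
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Toy data with an exact linear relation y = 2x + 1
x = np.array([3.1, 4.2, 5.0, 5.8, 6.9, 8.0])
slope, intercept = deming(x, 2.0 * x + 1.0)
```

Unlike ordinary least squares, this fit treats both measurement protocols as error-prone, which is why it suits method-agreement studies.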
Radar volume reflectivity estimation using an array of ground-based rainfall drop size detectors
NASA Astrophysics Data System (ADS)
Lane, John; Merceret, Francis; Kasparis, Takis; Roy, D.; Muller, Brad; Jones, W. Linwood
2000-08-01
Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained by these single-point measurements has historically been important in the study of applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and terminal velocity due to atmospheric drag within the sampling volume influence hydrometeor dynamics. The sampling volume is characterized by wind velocities, which are input parameters to the 3D-DSD model, composed of vertical and horizontal components. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL, on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
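The link between a drop size distribution and radar reflectivity is the sixth-moment sum Z = Σ N(D) D⁶. A minimal sketch with a hypothetical binned DSD (the bins and concentrations below are illustrative, not measured data):

```python
import numpy as np

# Hypothetical binned DSD: drop diameters (mm) and concentrations (m^-3 per bin)
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
n = np.array([800.0, 300.0, 90.0, 25.0, 6.0, 1.0])

# Radar reflectivity factor: Z = sum N(D) * D^6 (mm^6 m^-3), then in dBZ.
z = np.sum(n * d ** 6)
dbz = 10 * np.log10(z)
```

Because Z weights diameters to the sixth power, a handful of large drops dominates the reflectivity, which is why point DSD measurements translate so unevenly into volume-averaged radar estimates.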
Tao, Rong; Popescu, Elena-Anda; Drake, William B; Jackson, David N; Popescu, Mihai
2012-04-01
Previous studies based on fetal magnetocardiographic (fMCG) recordings used simplified volume conductor models to estimate the fetal cardiac vector as an unequivocal measure of the cardiac source strength. However, the effect of simplified volume conductor modeling on the accuracy of the fMCG inverse solution remains largely unknown. Aiming to determine the sensitivity of the source estimators to the details of the volume conductor model, we performed simulations using fetal-maternal anatomical information from ultrasound images obtained in 20 pregnant women in various stages of pregnancy. The magnetic field produced by a cardiac source model was computed using the boundary-element method for a piecewise homogeneous volume conductor with three nested compartments (fetal body, amniotic fluid and maternal abdomen) of different electrical conductivities. For late gestation, we also considered the case of a fourth highly insulating layer of vernix caseosa covering the fetus. The errors introduced for simplified volume conductors were assessed by comparing the reconstruction results obtained with realistic versus spherically symmetric models. Our study demonstrates the significant effect of simplified volume conductor modeling, resulting mainly in an underestimation of the cardiac vector magnitude and low goodness-of-fit. These findings are confirmed by the analysis of real fMCG data recorded in mid-gestation.
Gas hydrate volume estimations on the South Shetland continental margin, Antarctic Peninsula
Jin, Y.K.; Lee, M.W.; Kim, Y.; Nam, S.H.; Kim, K.J.
2003-01-01
Multi-channel seismic data acquired on the South Shetland margin, northern Antarctic Peninsula, show that Bottom Simulating Reflectors (BSRs) are widespread in the area, implying large volumes of gas hydrates. In order to estimate the volume of gas hydrate in the area, interval velocities were determined using a 1-D velocity inversion method and porosities were deduced from their relationship with sub-bottom depth for terrigenous sediments. Because data such as well logs are not available, we made two baseline models for the velocities and porosities of non-gas hydrate-bearing sediments in the area, considering the velocity jump observed at shallow sub-bottom depth due to the joint contributions of gas hydrate and a shallow unconformity. The difference between the results of the two models is not significant. The parameters used to estimate the total volume of gas hydrate in the study area were 145 km of total length of BSRs identified on seismic profiles, 350 m thickness and 15 km width of gas hydrate-bearing sediments, and 6.3% average volume gas hydrate concentration (based on the second baseline model). Assuming that gas hydrates exist only where BSRs are observed, the total volume of gas hydrates along the seismic profiles in the area is about 4.8 × 10¹⁰ m³ (7.7 × 10¹² m³ of methane at standard temperature and pressure).
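The quoted total can be checked directly from the parameters given in the abstract. The hydrate-to-gas expansion factor of roughly 160 at STP used below is a commonly assumed value, not stated in the text:

```python
# Back-of-the-envelope check of the reported gas hydrate volume, using the
# parameters quoted in the abstract.
length_m = 145e3        # total length of BSRs along seismic profiles (145 km)
width_m = 15e3          # width of gas hydrate-bearing sediments (15 km)
thickness_m = 350.0     # thickness of gas hydrate-bearing sediments
concentration = 0.063   # 6.3% average volume gas hydrate concentration

hydrate_m3 = length_m * width_m * thickness_m * concentration
methane_m3 = hydrate_m3 * 160   # ~160 m3 methane at STP per m3 hydrate (assumed)
```

The product reproduces the reported ~4.8 × 10¹⁰ m³ of hydrate and ~7.7 × 10¹² m³ of methane.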
Zvietcovich, Fernando; Castañeda, Benjamin; Valencia, Braulio; Llanos-Cuentas, Alejandro
2012-01-01
Clinical assessment and outcome metrics are serious weaknesses identified in systematic reviews of cutaneous Leishmaniasis wounds. Methods with high accuracy and low variability are required to standardize study outcomes in clinical trials. This work presents a precise, complete, and noncontact 3D assessment tool for monitoring the evolution of cutaneous Leishmaniasis (CL) wounds, based on a 3D laser scanner and computer vision algorithms. A 3D mesh of the wound is obtained with a commercial 3D laser scanner. A semi-automatic segmentation using active contours is then performed to separate the ulcer from the healthy skin. Finally, metrics of volume, area, perimeter, and depth are obtained from the mesh. Traditional manual measurements are obtained as a gold standard. Experiments on phantoms and real CL wounds suggest that the proposed 3D assessment tool provides higher accuracy (error < 2%) and precision (error < 4%) than conventional manual methods (precision error < 35%). This 3D assessment tool provides high-accuracy metrics which deserve a more formal prospective study.
Cost and price estimate of Brayton and Stirling engines in selected production volumes
NASA Technical Reports Server (NTRS)
Fortgang, H. R.; Mayers, H. F.
1980-01-01
The methods used to determine the production costs and required selling price of Brayton and Stirling engines modified for use in solar power conversion units are presented. Each engine part, component and assembly was examined and evaluated to determine the costs of its material and the method of manufacture based on specific annual production volumes. Cost estimates are presented for both the Stirling and Brayton engines in annual production volumes of 1,000, 25,000, 100,000 and 400,000. At annual production volumes above 50,000 units, the costs of both engines are similar, although the Stirling engine costs are somewhat lower. It is concluded that modifications to both the Brayton and Stirling engine designs could reduce the estimated costs.
NASA Astrophysics Data System (ADS)
Crone, T. J.; Tolstoy, M.
2010-12-01
To fully understand the environmental and ecological impacts of this disaster, an accurate estimate of the total oil released is required. We use optical plume velocimetry (OPV) to estimate the velocity of fluids issuing from the damaged well before and after the collapsed riser pipe was removed, and then estimate the volumetric flow rate under a range of assumptions. OPV was developed for measuring flow rates of fluids exiting black smoker hydrothermal vents on the deep sea floor (Crone et al., 2008). Black smoker vents produce a type of flow known as a turbulent buoyant jet, and the Gulf of Mexico leak was a flow of the same class. Traditional optical fluid flow measurements use spatial cross-correlation methods that have been shown to underestimate flow rates when applied to turbulent buoyant jets. OPV uses interpolated temporal cross-correlation functions of image intensity across the entire region of interest to estimate image velocity. There are several potential sources of uncertainty in these calculations, including errors associated with the estimation of the spatial resolution of the imagery, the image velocity field, the shear layer correction factor, and the area over which fluid is flowing, as well as uncertainty regarding the liquid oil fraction and temporal variability. We describe in detail the effects of these uncertainties on the volume calculation. Using a liquid oil fraction of 0.4, we estimate the average flow rate from April 22 to June 3, before the riser was removed, to be 55.9 × 10³ barrels of oil per day (±21%), excluding secondary leaks. After the riser was removed, the total flow was 67.5 × 10³ barrels/day (±19%). Taking into account the oil collected by BP at the well head, our preliminary estimate of the total oil released is 4.37 × 10⁶ barrels (±20%).
NASA Astrophysics Data System (ADS)
Kim, K. M.
2016-06-01
Traditional field methods for measuring tree heights are often too costly and time consuming. An alternative remote sensing approach is to measure tree heights from digital stereo photographs, which is more practical for forest managers and less expensive than LiDAR or synthetic aperture radar. This work proposes estimating stand height and forest volume (m³/ha) using a normalized digital surface model (nDSM) from high-resolution stereo photography (25 cm resolution) and a forest type map. The study area was located in the Mt. Maehwa model forest in Hong Chun-Gun, South Korea. The forest type map has four attributes per stand: major species, age class, DBH class, and crown density class. Overlapping aerial photos were taken in September 2013, and a digital surface model (DSM) was created by photogrammetric methods (aerial triangulation, digital image matching). A digital terrain model (DTM) was then created by filtering the DSM, and the DTM was subtracted from the DSM pixel by pixel, resulting in an nDSM that represents object heights (buildings, trees, etc.). Two independent variables derived from the nDSM were used to estimate forest stand volume: crown density (%) and stand height (m). First, crown density was calculated using a canopy segmentation method that considers live crown ratio. Next, stand height was produced by averaging individual tree heights in a stand using Esri's ArcGIS and the USDA Forest Service's FUSION software. Finally, stand volume was estimated and mapped using aerial-photo stand volume equations by species, which take the two independent variables, crown density and stand height. South Korea has a historical imagery archive which documents forest change over 40 years of successful forest rehabilitation. In a future study, a forest volume change map (1970s-present) will be produced using this stand volume estimation method and the historical imagery archive.
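The nDSM step reduces to a per-pixel raster subtraction, after which the two stand-level predictors follow from simple masking. A toy example with hypothetical 4×4 rasters (the 2 m canopy cutoff is an illustrative assumption):

```python
import numpy as np

# Toy rasters, not the paper's data: a flat bare-earth DTM at 100 m elevation
# and a DSM in which six pixels carry canopy.
dtm = np.full((4, 4), 100.0)
canopy = np.array([[0., 0., 12., 14.],
                   [0., 10., 15., 13.],
                   [0., 0., 0., 11.],
                   [0., 0., 0., 0.]])
dsm = dtm + canopy

ndsm = dsm - dtm                      # object heights (trees, buildings, etc.)
mask = ndsm > 2.0                     # treat pixels above 2 m as canopy
stand_height = ndsm[mask].mean()      # stand height predictor (m)
crown_density = 100.0 * mask.mean()   # crown density predictor (%)
```

These two values would then enter the species-specific aerial-photo volume equations described above.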
Estimation of single cell volume from 3D confocal images using automatic data processing
NASA Astrophysics Data System (ADS)
Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.
2012-06-01
Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of single-cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings and is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in cell volume during normal as well as pathological growth, as we demonstrate for cell enlargement during hypertension in rats.
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
Ades, A E; Cliffe, S
2002-01-01
Decision models are usually populated 1 parameter at a time, with 1 item of information informing each parameter. Often, however, data may not be available on the parameters themselves but on several functions of parameters, and there may be more items of information than there are parameters to be estimated. The authors show how in these circumstances all the model parameters can be estimated simultaneously using Bayesian Markov chain Monte Carlo methods. Consistency of the information and/or the adequacy of the model can also be assessed within this framework. Statistical evidence synthesis using all available data should result in more precise estimates of parameters and functions of parameters, and is compatible with the emphasis currently placed on systematic use of evidence. To illustrate this, WinBUGS software is used to estimate a simple 9-parameter model of the epidemiology of HIV in women attending prenatal clinics, using information on 12 functions of parameters, and to thereby compute the expected net benefit of 2 alternative prenatal testing strategies, universal testing and targeted testing of high-risk groups. The authors demonstrate improved precision of estimates, and lower estimates of the expected value of perfect information, resulting from the use of all available data.
Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris
2012-01-01
Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652-964) and 704 birds in 2011 (95% CI = 579-837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037-3,965) and 2,461 birds in 2011 (95% CI = 1,682-3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers was similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimizing the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited to the steep and uneven terrain of Nihoa.
Dual-Band Radar Estimation of Stem Volume in Boreal Forest
NASA Astrophysics Data System (ADS)
Rauste, Yrjo; Astola, Heikki; Ahola, Heikki; Hame, Tuomas; von Poncet, Felicitas
2010-12-01
Forest stem volume information is needed in planning of sustainable forestry, mapping of exploitable forest resources, carbon balance studies, and many other environmental applications. Radar sensors offer an efficient and weather-independent means of stem volume mapping. The radar dataset consisted of an ALOS/PALSAR dual-polarised scene from September 2008 and four TerraSAR-X spotlight scenes from February-March and July-August 2009. Ground data consisted of plot-wise and stand-wise data. Regression models were developed with stand-wise training data in which the stem volume varied between 0 and 390 m³/ha. The best 3-predictor model, using 2 ALOS/PALSAR amplitudes and the phase difference between HH and VV data in a TerraSAR-X scene, produced an RMSE of 46 m³/ha (R² = 0.7) when evaluated against stands not used in model training. Stem volume estimation with plot-wise ground data produced lower estimation accuracies. The main reason was most likely misregistration between the opposite-looking ALOS/PALSAR and TerraSAR-X scenes, which was caused by canopy height and was not corrected by ortho-rectification with a terrain elevation model. In future work, similar look directions should be used when no canopy surface model is available for ortho-rectification. A filtering approach was developed for applying the stand-wise stem-volume model in areas with no forest stand information.
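The stand-wise regression can be sketched as an ordinary least-squares fit of stem volume on three predictors, evaluated on held-out stands. All data below are synthetic and the coefficients arbitrary; only the workflow (fit on training stands, report RMSE on withheld stands) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# Hypothetical stand-wise predictors: two SAR amplitudes and a phase
# difference, all standardized (arbitrary units).
X = np.column_stack([rng.normal(size=n) for _ in range(3)])
vol = 150 + 40 * X[:, 0] - 25 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 20, n)

A = np.column_stack([np.ones(n), X])        # add intercept column
train, test = slice(0, 40), slice(40, None)
coef, *_ = np.linalg.lstsq(A[train], vol[train], rcond=None)
rmse = np.sqrt(np.mean((A[test] @ coef - vol[test]) ** 2))
```

Evaluating on stands excluded from the fit, as the study does, guards against the optimistic bias of in-sample RMSE.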
Forest Stand Volume Estimation Using Airborne LIDAR And Polarimetric SAR Over Hilly Region
NASA Astrophysics Data System (ADS)
Fan, Fengyun; Chen, Erxue; Li, Zengyuan; Liu, Qingwang; Li, Shiming; Ling, Feilong; Pottier, Eric; Cloude, Shane
2010-10-01
To investigate the potential for mapping forest stand volume from multi-source data (ALOS PALSAR, airborne LiDAR, and high-resolution CCD imagery) at the forest stand level, a test site was established in the warm temperate hilly forest region of Shandong Province, China. The airborne LiDAR and CCD campaign was carried out at the end of May 2005, and one scene of ALOS PALSAR quad-polarization imagery was acquired on May 19, 2007. Ground forest plot data for Black Locust and Chinese Pine dominated stands were collected through field work from May to June 2008. The correlations of forest stand volume with the PALSAR backscattering coefficients of HH, HV, VH, and VV, their ratios, and several H-Alpha polarimetric decomposition parameters were analyzed at the stand level through regression analysis. Mean forest stand volume of each polygon (forest stand) was finally estimated with the regression model established from ground-measured forest volume data and the corresponding polygon-mean parameters derived from the LiDAR CHM and polarimetric SAR data. Results show that it is feasible to combine low-density LiDAR data, L-band SAR data, and forest polygon data from high-resolution CCD imagery for stand-level forest volume estimation in hilly regions; the RMSE is 20.064 m³/ha for Black Locust and 24.730 m³/ha for Chinese Pine.
A progressive black top hat transformation algorithm for estimating valley volumes on Mars
NASA Astrophysics Data System (ADS)
Luo, Wei; Pingel, Thomas; Heo, Joon; Howard, Alan; Jung, Jaehoon
2015-02-01
The depth of valley incision and valley volume are important parameters in understanding the geologic history of early Mars, because they are related to the amount of sediment eroded and the quantity of water needed to create the valley networks (VNs). With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. Previous studies typically use a single window size for extracting the valley features and a single threshold value for removing noise, with the result that finer features such as tributaries are not extracted and valley volume is underestimated. Inspired by similar algorithms used in LiDAR data analysis to remove above-ground features and obtain bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the window size is progressively increased to extract valleys of different orders. In addition, a slope factor is introduced so that the noise threshold can be automatically adjusted for windows of different sizes. Independently derived VN lines were used to select mask polygons that spatially overlap the VN lines. Volume is calculated as the sum of valley depth within the selected mask multiplied by cell area. Application of the PBTH to a simulated landform (for which the amount of erosion is known) achieved an overall relative accuracy of 96%, compared with only 78% for BTH. Application of PBTH to Ma'adim Vallis on Mars not only produced total volume estimates consistent with previous studies, but also revealed the detailed spatial distribution of valley depth. The highly automated PBTH algorithm shows great promise for estimating the volume of VNs on Mars on a global scale, which is important for understanding its early hydrologic cycle.
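The core of the PBTH idea can be sketched with SciPy's grey-scale morphology. This is a minimal illustration, not the authors' implementation: the slope-scaled threshold form, the window schedule, and the tuning factor `k` are assumptions, and the mask connectivity/shape filtering described in the companion abstract is omitted.

```python
import numpy as np
from scipy.ndimage import grey_closing, sobel

def progressive_bth(dem, window_sizes, k=0.1, cell_area=1.0):
    """Progressive black top-hat sketch: accumulate valley depth over
    progressively larger structuring elements, with a slope-adjusted
    noise threshold (illustrative, not the published algorithm)."""
    depth = np.zeros_like(dem, dtype=float)
    # local slope magnitude, used here to scale the noise threshold
    slope = np.hypot(sobel(dem, axis=0), sobel(dem, axis=1))
    for w in window_sizes:
        # black top-hat: morphological closing minus the surface;
        # large where the surface dips below its surroundings (valleys)
        bth = grey_closing(dem, size=(w, w)) - dem
        thresh = k * slope.mean() * w  # assumed threshold form
        mask = bth > thresh
        depth[mask] = np.maximum(depth[mask], bth[mask])
    volume = depth.sum() * cell_area
    return depth, volume
```

On a synthetic DEM with a narrow trench, small windows recover the trench depth while larger windows pick up broader depressions, which is the mechanism the paper exploits to capture both tributaries and trunk valleys.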
Dorval, Alan D
2008-08-15
The maximal information that the spike train of any neuron can pass on to subsequent neurons can be quantified as the neuronal firing pattern entropy. Difficulties associated with estimating entropy from small datasets have proven an obstacle to the widespread reporting of firing pattern entropies and, more generally, to the use of information theory within the neuroscience community. In the most accessible class of entropy estimation techniques, spike trains are partitioned linearly in time and entropy is estimated from the probability distribution of firing patterns within a partition. Ample previous work has focused on various techniques to minimize the finite-dataset bias and standard deviation of entropy estimates from under-sampled probability distributions on spike timing events partitioned linearly in time. In this manuscript we present evidence that all distribution-based techniques would benefit from inter-spike intervals being partitioned in logarithmic time. We show that with logarithmic partitioning, firing rate changes become independent of firing pattern entropy. We delineate the entire entropy estimation process with two example neuronal models, demonstrating the robust improvements in bias and standard deviation that the logarithmic time method yields over two widely used linearly partitioned time approaches.
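The logarithmic-partitioning idea can be illustrated with a minimal sketch: bin the inter-spike intervals on a logarithmic time axis and compute the Shannon entropy of the resulting distribution. The bin count, time range, and use of single ISIs (rather than multi-interval firing patterns) are illustrative simplifications, not the authors' exact estimator.

```python
import numpy as np

def isi_entropy_log(spike_times, n_bins=20, t_min=1e-3, t_max=10.0):
    """Entropy (bits) of the inter-spike-interval distribution,
    with intervals partitioned logarithmically in time (sketch)."""
    isis = np.diff(np.sort(spike_times))
    # logarithmically spaced bin edges instead of linear ones
    edges = np.logspace(np.log10(t_min), np.log10(t_max), n_bins + 1)
    counts, _ = np.histogram(isis, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))
```

Swapping `np.logspace` for `np.linspace` here gives the linear partitioning the paper argues against; with log bins, rescaling all ISIs by a constant (a pure firing-rate change) only shifts the distribution across bins without reshaping it, which is why rate and pattern entropy decouple.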
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times, first to obtain linear and exponential fitting models, and then to fit new data onto the existing models; thus, the total number of recordings was 30. The estimated volumes were compared to the true values obtained with a Respitrace system, which was considered the reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we also estimated the tidal volume from the same sounds using SE. The performance of the BFD and SE methods was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (the NRMSE was at least twice as small). The smallest NRMSE of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
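The NRMSE figure of merit used in the comparison reduces to a few lines. Normalizing the RMSE by the mean reference volume is an assumed convention here, since the abstract does not state which normalization was used.

```python
import numpy as np

def nrmse(estimated, reference):
    """Root-mean-squared error between estimated and reference
    volumes, normalized by the mean reference value, in percent
    (normalization convention assumed)."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((estimated - reference) ** 2))
    return 100.0 * rmse / reference.mean()
```

A perfect estimator gives 0%, and an estimator that is off by the full mean reference value gives 100%, which makes figures like the reported 15.877% directly interpretable as a relative error.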
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims to validate left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Taking the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured using the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable to human hearts in clinical practice.
How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates
ERIC Educational Resources Information Center
Otterbach, Steffen; Sousa-Poza, Alfonso
2010-01-01
This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…
Haasl, Ryan J; Payseur, Bret A
2010-12-01
Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.
Belzer, D.B.; Serot, D.E.; Kellogg, M.A.
1991-03-01
Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner to allow evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key nonmanufacturing sectors. This volume lays out the general concepts and methods used to develop the emergency operating estimates. The historical analysis of capacity extends from 1974 through 1986. Some nonmanufacturing industries are included. In addition to mining and utilities, key industries in transportation, communication, and services were analyzed. Physical capacity and efficiency of production were measured. 3 refs., 2 figs., 12 tabs. (JF)
Baseline estimate of the retained gas volume in Tank 241-C-106
Stewart, C.W.; Chen, G.
1998-06-01
This report presents the results of a study of the retained gas volume in Hanford Tank 241-C-106 (C-106) using the barometric pressure effect method. This estimate is required to establish the baseline conditions for sluicing the waste from C-106 into AY-102, scheduled to begin in the fall of 1998. The barometric pressure effect model is described, and the data reduction and detrending techniques are detailed. Based on the response of the waste level to the larger barometric pressure swings that occurred between October 27, 1997, and March 4, 1998, the best estimate and conservative (99% confidence) retained gas volumes in C-106 are 24 scm (840 scf) and 50 scm (1,770 scf), respectively. This is equivalent to average void fractions of 0.025 and 0.053, respectively.
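Under an isothermal ideal-gas assumption, the barometric pressure effect method relates the trapped gas volume to the slope of waste level versus barometric pressure: since dV/dP = -V/P for the gas, V = -P·A·(dL/dP), where A is the tank cross-sectional area. The sketch below illustrates only this core relation; the report's detrending of the level signal and its confidence-bound analysis are omitted, and the function name and units are illustrative.

```python
import numpy as np

def retained_gas_volume(level_m, pressure_kpa, tank_area_m2):
    """Retained gas volume (m^3, at in-tank pressure) from the
    fitted slope of waste level vs. barometric pressure, assuming
    isothermal ideal-gas compression: V = -P * A * dL/dP."""
    dldp = np.polyfit(pressure_kpa, level_m, 1)[0]  # m per kPa
    p_mean = np.mean(pressure_kpa)
    return -p_mean * tank_area_m2 * dldp
```

Because the gas compresses as pressure rises, the waste level falls slightly with increasing barometric pressure; the magnitude of that anti-correlation is what carries the volume information, which is why the report relies on the larger pressure swings of the winter record.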
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The objective of this contract was to provide definition of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data describing performance, weight, dimensions, etc. were provided along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and program cost estimates are described in more detail in Volume I - Executive Summary and in individual Final Task Reports.
Monitoring and Estimation of Reservoir Water Volume using Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Bhat, Nagaraj; Gouda, Krushna Chandra; Vh, Manumohan; Bhat, Reshma
2015-04-01
Water reservoirs are the main source of water supply for many settlements, as well as for power generation, so reservoir water volume and extent need to be monitored at regular intervals for efficient usage and to avoid disasters such as extreme rainfall events and floods. Reservoirs are generally remotely located, which makes monitoring difficult, but with the growth of remote sensing, GIS in HPC environments, and modeling techniques, it is possible to monitor, estimate, and even predict reservoir water volumes in advance using numerical modeling and satellite remote sensing data. This work monitors and estimates the volume of water in the Krishna Raja Sagar (KRS) reservoir in Karnataka state, India. Multispectral images from different sources, such as Landsat, and digital elevation models (DEMs) from IRS LISS III (IRS: Indian Remote Sensing; LISS: Linear Imaging Self-Scanning) and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are used. The methodology involves GIS and image-processing techniques such as mosaicking and georeferencing the raw satellite data, identifying the reservoir water level, and segmenting the water body using pixel-level analysis. After calculating the area and depth of each pixel, the total water volume is computed with an empirical model developed from past validated data. The water-spread area calculated by water indexing is converted into a vector polygon using ArcGIS tools. Water volume obtained by this method is compared with ground-based observations of the reservoir, and the comparison matches well in 80% of cases.
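The per-pixel area-and-depth step can be sketched as a simple sum over the DEM. This illustrates only the geometric volume computation for a segmented water body; the paper's calibrated empirical model and water-indexing segmentation are not reproduced, and the 900 m² pixel area in the usage note assumes 30 m Landsat-class pixels.

```python
import numpy as np

def reservoir_volume(dem, water_level, pixel_area_m2):
    """Geometric reservoir volume: for every pixel below the water
    level, depth = water level - terrain elevation; volume is the
    sum of depths times the pixel footprint area."""
    depth = water_level - np.asarray(dem, dtype=float)
    depth[depth < 0] = 0.0  # pixels above the waterline hold no water
    return depth.sum() * pixel_area_m2
```

Given a DEM of the reservoir bed and a water level extracted from imagery, `reservoir_volume(dem, 101.0, 900.0)` returns cubic meters; repeating this per scene yields the volume time series that is then compared against gauge observations.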
A Progressive Black Top Hat Transformation Algorithm for Estimating Valley Volumes from DEM Data
NASA Astrophysics Data System (ADS)
Luo, W.; Pingel, T.; Heo, J.; Howard, A. D.
2013-12-01
The amount of valley incision and valley volume are important parameters in geomorphology and hydrology research, because they are related to the amount of erosion (and thus the volume of sediments) and the amount of water needed to create the valley. This is the case not only for terrestrial research but also for planetary research, such as figuring out how much water was on Mars. With readily available digital elevation model (DEM) data, the Black Top Hat (BTH) transformation, an image processing technique for extracting dark features on a variable background, has been applied to DEM data to extract valley depth and estimate valley volume. However, previous studies typically use a single structuring-element size for extracting the valley features and a single threshold value for removing noise, with the result that finer features such as tributaries are not extracted and valley volume is underestimated. Inspired by similar algorithms used in LiDAR data analysis to separate above-ground features from bare-earth topography, here we propose a progressive BTH (PBTH) transformation algorithm, in which the structuring-element size is progressively increased to extract valleys of different orders. In addition, a slope-based threshold was introduced to automatically adjust the threshold values for structuring elements of different sizes. Connectivity and shape parameters of the masked regions were used to keep the long linear valleys while removing smaller non-connected regions. Preliminary application of the PBTH to the Grand Canyon and two sites on Mars has produced promising results. More testing and fine-tuning is in progress. The ultimate goal of the project is to apply the algorithm to estimate the volume of valley networks on Mars and the volume of water needed to form the valleys we observe today, and thus infer the nature of the hydrologic cycle on early Mars. The project is funded by NASA's Mars Data Analysis program.
Wei, Hua-Liang; Zheng, Ying; Pan, Yi; Coca, Daniel; Li, Liang-Min; Mayhew, J E W; Billings, Stephen A
2009-06-01
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an error-in-the-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
Marshall, B.D.; Neymark, L.A.; Peterman, Z.E.
2003-01-01
Low-temperature calcite and opal record the past seepage of water into open fractures and lithophysal cavities in the unsaturated zone at Yucca Mountain, Nevada, site of a proposed high-level radioactive waste repository. Systematic measurements of calcite and opal coatings in the Exploratory Studies Facility (ESF) tunnel at the proposed repository horizon are used to estimate the volume of calcite at each site of calcite and/or opal deposition. By estimating the volume of water required to precipitate the measured volumes of calcite in the unsaturated zone, seepage rates of 0.005 to 5 liters/year (l/year) are calculated at the median and 95th percentile of the measured volumes, respectively. These seepage rates are at the low end of the range of seepage rates from recent performance assessment (PA) calculations, confirming the conservative nature of the performance assessment. However, the distribution of the calcite and opal coatings indicates that a much larger fraction of the potential waste packages would be contacted by this seepage than is calculated in the performance assessment.
Volume and Mass Estimation of Three-Phase High Power Transformers for Space Applications
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.
2004-01-01
Spacecraft historically have had sub-1 kW(sub e) electrical requirements for GN&C, science, and communications: Galileo at 600 W(sub e) and Cassini at 900 W(sub e), for example. Because most missions have had the same order of magnitude power requirements, the Power Distribution Systems (PDS) use existing, space-qualified technology and are DC. As science payload and mission duration requirements increase, however, the required electrical power increases. Subsequently, this requires a change from passive energy conversion (solar arrays and batteries) to dynamic conversion (alternator, solar dynamic, etc.), because dynamic conversion has higher thermal and conversion efficiencies, has higher power densities, and scales more readily to higher power levels. Furthermore, increased power requirements and physical distribution lengths are best served with high-voltage, multi-phase AC to maintain distribution efficiency and minimize voltage drops. The generated AC voltage must be stepped up (or down) to interface with various subsystems or electrical hardware. Part of the trade-space design for AC distribution systems is volume and mass estimation of high-power transformers. The volume and mass are functions of the power rating, operating frequency, the ambient and allowable temperature rise, the types and amount of heat transfer available, the core material and shape, the required flux density in a core, the maximum current density, etc. McLyman has tabulated the performance of a number of transformer cores and derived a "cookbook" methodology to determine the volume of transformers, whereas Schwarze derived an empirical method to estimate the mass of single-phase transformers. Based on the work of McLyman and Schwarze, it is the intent herein to derive an empirical solution to the volume and mass estimation of three-phase, laminated EI-core power transformers, having radiated and conducted heat transfer mechanisms available. Estimation of the mounting hardware, connectors
Profiling river surface velocities and volume flow estimation with bistatic UHF RiverSonde radar
Barrick, D.; Teague, C.; Lilleboe, P.; Cheng, R.; Gartner, J.; ,
2003-01-01
From the velocity profiles across the river, estimates of total volume flow for the four methods were calculated based on a knowledge of the bottom depth vs position across the river. It was found that the flow comparisons for the American River were much closer, within 2% of each other among all of the methods. Sources of positional biases and anomalies in the RiverSonde measurement patterns along the river were identified and discussed.
NASA Astrophysics Data System (ADS)
Blockley, Simon P. E.; Bronk Ramsey, C.; Pyle, D. M.
2008-10-01
The role of tephrochronology, as a dating and stratigraphic tool, in precise palaeoclimate and environmental reconstruction, has expanded significantly in recent years. The power of tephrochronology rests on the fact that a tephra layer can stratigraphically link records at the resolution of as little as a few years, and that the most precise age for a particular tephra can be imported into any site where it is found. In order to maximise the potential of tephras for this purpose it is necessary to have the most precise and robustly tested age estimate possible available for key tephras. Given the varying number and quality of dates associated with different tephras it is important to be able to build age models to test competing tephra dates. Recent advances in Bayesian age modelling of dates in sequence have radically extended our ability to build such stratigraphic age models. As an example of the potential here we use Bayesian methods, now widely applied, to examine the dating of some key Late Quaternary tephras from Italy. These are: the Agnano Monte Spina Tephra (AMST), the Neapolitan Yellow Tuff (NYT) and the Agnano Pomici Principali (APP), and all of them have multiple estimates of their true age. Further, we use the Bayesian approaches to generate a revised mixed radiocarbon/varve chronology for the important Lateglacial section of the Lago Grande Monticchio record, as a further illustration of what can be achieved by a Bayesian approach. With all three tephras we were able to produce viable model ages for the tephra, validate the proposed 40Ar/ 39Ar age ranges for these tephras, and provide relatively high precision age models. The results of the Bayesian integration of dating and stratigraphic information, suggest that the current best 95% confidence calendar age estimates for the AMST are 4690-4300 cal BP, the NYT 14320-13900 cal BP, and the APP 12380-12140 cal BP.
Hernández-Vicente, Adrián; Pérez-Isaac, Raúl; Santín-Medeiros, Fernanda; Cristi-Montero, Carlos; Casajús, Jose Antonio; Garatachea, Nuria
2017-01-01
Background The SenseWear Armband (SWA) is a monitor that can be used to estimate energy expenditure (EE); however, it has not been validated in healthy adults. The objective of this paper was to study the validity of the SWA for quantifying EE levels. Methods Twenty-three healthy adults (age 40-55 years, mean: 48±3.42 years) performed different types of standardized physical activity (PA) for 10 minutes (rest, walking at 3 and 5 km·h-1, running at 7 and 9 km·h-1, and sitting/standing at a rate of 30 cycle·min-1). Participants wore the SWA on their right arm, and their EE was measured by indirect calorimetry (IC), the gold standard. Results There were significant differences between the SWA and IC, except in the group that ran at 9 km·h-1 (>9 METs). Bland-Altman analysis showed a BIAS of 1.56 METs (±1.83 METs) and limits of agreement (LOA) at 95% of -2.03 to 5.16 METs. There were indications of heteroscedasticity (R2 = 0.03; P < 0.05). Analysis of the receiver operating characteristic (ROC) curves showed that the SWA does not seem to be sensitive enough to estimate the level of EE at the highest intensities. Conclusions The SWA is not as precise in estimating EE as IC, but it could be a useful tool to determine levels of EE at low intensities. PMID:28361062
Blache, Yoann; Bobbert, Maarten; Argaud, Sebastien; Pairot de Fontenay, Benoit; Monteil, Karine M
2013-08-01
In experiments investigating vertical squat jumping, the HAT segment is typically defined as a line drawn from the hip to some point proximally on the upper body (e.g., the neck, the acromion), and the hip joint as the angle between this line and the upper legs (θUL-HAT). In reality, the hip joint is the angle between the pelvis and the upper legs (θUL-pelvis). This study aimed to estimate to what extent hip joint definition affects hip joint work in maximal squat jumping. Moreover, the initial pelvic tilt was manipulated to maximize the difference in hip joint work as a function of hip joint definition. Twenty-two male athletes performed maximum effort squat jumps in three different initial pelvic tilt conditions: backward (pelvisB), neutral (pelvisN), and forward (pelvisF). Hip joint work was calculated by integrating the hip net joint torque with respect to θUL-HAT (WUL-HAT) or with respect to θUL-pelvis (WUL-pelvis). θUL-HAT was greater than θUL-pelvis in all conditions. WUL-HAT overestimated WUL-pelvis by 33%, 39%, and 49% in conditions pelvisF, pelvisN, and pelvisB, respectively. It was concluded that θUL-pelvis should be measured when the mechanical output of hip extensor muscles is estimated.
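The work calculation the study rests on is the integral of net joint torque with respect to joint angle, which reduces to a trapezoidal sum over sampled data. A minimal sketch (array names are illustrative); the same torque history integrated over the larger θUL-HAT excursion produces exactly the kind of overestimate the authors report.

```python
import numpy as np

def joint_work(torque_nm, angle_rad):
    """Net joint work (J): trapezoidal integral of net joint torque
    with respect to joint angle over the sampled movement."""
    torque = np.asarray(torque_nm, dtype=float)
    angle = np.asarray(angle_rad, dtype=float)
    return float(np.sum(0.5 * (torque[1:] + torque[:-1]) * np.diff(angle)))
```

For instance, with identical torques, `joint_work(tau, theta_ul_hat)` exceeds `joint_work(tau, theta_ul_pelvis)` whenever the HAT-based angle sweeps a larger range, so the choice of segment definition directly scales the computed work.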
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
How accurate is the estimation of anthropogenic carbon in the ocean? An evaluation of the ΔC* method
NASA Astrophysics Data System (ADS)
Matsumoto, Katsumi; Gruber, Nicolas
2005-09-01
The ΔC* method of Gruber et al. (1996) is widely used to estimate the distribution of anthropogenic carbon in the ocean; however, as yet, no thorough assessment of its accuracy has been made. Here we provide a critical re-assessment of the method and determine its accuracy by applying it to synthetic data from a global ocean biogeochemistry model, for which we know the "true" anthropogenic CO2 distribution. Our results indicate that the ΔC* method tends to overestimate anthropogenic carbon in relatively young waters but underestimate it in older waters. Main sources of these biases are (1) the time evolution of the air-sea CO2 disequilibrium, which is not properly accounted for in the ΔC* method, (2) a pCFC ventilation age bias that arises from mixing, and (3) errors in identifying the different end-member water types. We largely support the findings of Hall et al. (2004), who have also identified the first two bias sources. An extrapolation of the errors that we quantified on a number of representative isopycnals to the global ocean suggests a positive bias of about 7% in the ΔC*-derived global anthropogenic CO2 inventory. The magnitude of this bias is within the previously estimated 20% uncertainty of the method, but regional biases can be larger. Finally, we propose two improvements to the ΔC* method in order to account for the evolution of air-sea CO2 disequilibrium and the ventilation age mixing bias.
Barak, C; Leviatan, Y; Inbar, G F; Hoekstein, K N
1992-09-01
Using the electrical impedance measurement technique to investigate stroke volume estimation, three models of the ventricle were simulated. A four-electrode impedance catheter was used; two electrodes to set up an electric field in the model and the other two to measure the potential difference. A new approach, itself an application of the quasi-static case of a method used to solve electromagnetic field problems, was used to solve the electric field in the model. The behaviour of the estimation is examined with respect to the electrode configuration on the catheter and to catheter location with respect to the ventricle walls. Cardiac stroke volume estimation was found to be robust to catheter location generating a 10 per cent error for an offset of 40 per cent of the catheter from the chamber axis and rotation of 20 degrees with respect to the axis. The electrode configuration has a dominant effect on the sensitivity and accuracy of the estimation. Certain configurations gave high accuracy, whereas in others high sensitivity was found with lower accuracy. This led to the conclusion that the electrode configuration should be carefully chosen according to the desired criteria.
NASA Astrophysics Data System (ADS)
Perez-Quezada, Jorge F.; Brito, Carla E.; Cabezas, Julián; Galleguillos, Mauricio; Fuentes, Juan P.; Bown, Horacio E.; Franck, Nicolás
2016-12-01
Making accurate estimates of daily and annual soil respiration (Rs) fluxes is key to understanding the carbon cycle and projecting effects of climate change. In this study we used high-frequency sampling (24 measurements per day) of Rs in a temperate rainforest during 1 year, with the objective of answering the questions of when and how often measurements should be made to obtain accurate estimates of daily and annual Rs. We randomly selected data to simulate samplings of 1, 2, 4 or 6 measurements per day (distributed either during the whole day or only during daytime), combined with 4, 6, 12, 26 or 52 measurements per year. Based on the comparison of partial-data series with the full-data series, we estimated the performance of different partial sampling strategies in terms of bias, precision and accuracy. In the case of annual Rs estimation, we compared the performance of interpolation vs. non-linear modelling based on soil temperature. The results show that, under our study conditions, sampling twice a day was enough to accurately estimate daily Rs (RMSE < 10 % of average daily flux), even if both measurements were made during daytime. The largest reduction in RMSE for the estimation of annual Rs was achieved when increasing from four to six measurements per year, but reductions were still relevant when further increasing the sampling frequency. We found that increasing the number of field campaigns was more effective than increasing the number of measurements per day, provided a minimum of two measurements per day was used. Including night-time measurements significantly reduced the bias and was relevant in reducing the number of field campaigns when a lower level of acceptable error (RMSE < 5 %) was established. Using non-linear modelling instead of linear interpolation did improve the estimation of annual Rs, but not as much as expected. In conclusion, given that most of the studies of Rs use manual sampling techniques and apply only one measurement per day, we
A volume law for specification of linear channel storage for estimation of large floods
NASA Astrophysics Data System (ADS)
Zhang, Shangyou; Cordery, Ian; Sharma, Ashish
2000-02-01
A method of estimating large floods using a linear storage-routing approach is presented. The differences between the proposed approach and those traditionally used are (1) that the flood producing properties of basins are represented by a linear system, (2) the storage parameters of the distributed model are determined using a volume law which, unlike other storage-routing models, accounts for the distribution of storage in natural basins, and (3) the basin outflow hydrograph is determined analytically and expressed in a succinct mathematical form. The single model parameter is estimated from observed data without direct fitting, unlike most traditionally used methods. The model was tested by showing it could reproduce observed large floods on a number of basins. This paper compares the proposed approach with a traditionally used storage routing approach using observed flood data from the Hacking River basin in New South Wales, Australia. Results confirm the usefulness of the proposed approach for estimation of large floods.
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Robust estimation of simulated urinary volume from camera images under bathroom illumination.
Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji
2016-08-01
Conventional uroflowmetry involves a risk of nosocomial infection, or the time and effort of manual recording. Medical institutions therefore need to measure voided volume simply and hygienically. An earlier study proposed a multiple cylindrical model that can estimate the fluid flow rate from images photographed with a camera. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, when measurements are performed in a bathroom, variation in illumination generates large amounts of noise in the extraction of the liquid region, so the estimation error becomes very large; in other words, the earlier study's camera specifications regarding shutter type and frame rate were too strict. In this study, we relax those specifications to achieve flow rate estimation with a general-purpose camera. To determine an appropriate approximating curve, we propose a binarization method using background subtraction at each scanning row and a curve-approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of the proposed method for flow rate estimation.
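The RANSAC curve-approximation step described above can be sketched generically for a quadratic approximating curve (a minimal illustration, not the authors' implementation; the model order, inlier tolerance, and iteration count are assumed for the example):

```python
import random
import numpy as np

def fit_quadratic(pts):
    """Least-squares fit of y = a*x^2 + b*x + c to a list of (x, y) pairs."""
    xs = np.array([p[0] for p in pts])
    ys = np.array([p[1] for p in pts])
    A = np.vstack([xs ** 2, xs, np.ones_like(xs)]).T
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef  # (a, b, c)

def ransac_quadratic(points, n_iter=200, tol=1.0, seed=0):
    """RANSAC: repeatedly fit the curve to a random minimal sample
    (3 points), keep the model with the most inliers, then refit
    to all inliers of the best model."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        a, b, c = fit_quadratic(rng.sample(points, 3))
        inliers = [(x, y) for x, y in points
                   if abs(a * x * x + b * x + c - y) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_quadratic(best_inliers)

# Demo: points on y = 2x^2 + 1 plus two gross outliers, as illumination
# noise might produce; RANSAC recovers the clean curve.
pts = [(float(x), 2.0 * x * x + 1.0) for x in range(-5, 6)]
pts += [(0.0, 50.0), (1.0, -40.0)]
a, b, c = ransac_quadratic(pts)
print(round(a, 3), round(c, 3))
```

An ordinary least-squares fit to the same data would be pulled toward the outliers; the random-sample-and-count-inliers loop is what makes the fit robust.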
Recovery of metal oxides from fly ash. Volume 2. Engineering data and cost estimates. Final report
Wilder, R.F.; Barrett, P.J.; Henslee, L.W. Jr.
1984-06-01
An engineering, cost and financial evaluation study was carried out for a conceptual commercial plant to process fly ash into marketable metal oxides by the direct HCl acid leach process. The proposed plant site was adjacent to the TVA Kingston, Tennessee power plant and was sized to process 1 million tons of ash (dry basis) per year. The capital cost requirements for the HCl direct acid leach (DAL) optimized process plant were estimated to be $244,390,000. Based upon the reported Kingston plant fly ash analysis and extractability, the conceptual commercial plant would annually produce about 158,000 TPY of alumina, 102,000 TPY of ferric oxide, 46,000 TPY of gypsum, 81,000 TPY of alkali sulfate salts, 866,000 TPY of spent fly ash and 1,940,000 kWh of excess cogeneration power. Potential long term average revenues were projected to be $126,400,000 per year which would indicate a commercial project's economics may be quite adequate. Volume 1 of this study report presents the investment and operating cost data, revenue considerations and an evaluation of profitability. Volume 2 presents the engineering data and capital cost estimates and Volume 3 presents the commercial facility design criteria. 16 references, 3 figures, 2 tables.
A statistical method to estimate outflow volume in case of levee breach due to overtopping
NASA Astrophysics Data System (ADS)
Brandimarte, Luigia; Martina, Mario; Dottori, Francesco; Mazzoleni, Maurizio
2015-04-01
The aim of this study is to propose a statistical method to assess the volume of water flowing out through a levee breach, due to overtopping, for three different grades of grass cover quality. The first step in the proposed methodology is the definition of the reliability function, i.e. the relation between loading and resistance conditions on the levee system, in case of overtopping. Secondly, the fragility curve, which relates the probability of failure to the loading condition on the levee system, is estimated once the stochastic variables in the reliability function are defined. Thus, different fragility curves are assessed for different scenarios of grass cover quality. Then, a levee breach model is implemented and combined with a 1D hydrodynamic model in order to assess the outflow hydrograph given the water level in the main channel and stochastic values of the breach width. Finally, the water volume is estimated as a combination of the probability density functions of the breach width and levee failure. The case study is a 98 km braided reach of the Po River, Italy, between the cross-sections of Cremona and Borgoforte. The analysis showed how different countermeasures, in this case different grass cover qualities, can reduce the probability of failure of the levee system. In particular, for a given breach width, good levee cover quality can significantly reduce the outflowing water volume compared to bad cover quality, inducing a consequently lower flood risk within the flood-prone area.
NASA Astrophysics Data System (ADS)
Martínez-Sánchez, J.; Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P.
2016-06-01
When ground conditions are weak, particularly in free-formed tunnel linings or retaining walls, sprayed concrete (shotcrete) can be applied to the exposed rock surfaces immediately after excavation. In these situations, shotcrete is normally applied together with rock bolts and mesh, thereby supporting the loose material that causes many small ground falls. Contractors, on the other hand, want to determine the thickness and volume of sprayed concrete for both technical and economic reasons: to guarantee its structural strength, but also to avoid delivering excess material they will not be paid for. In this paper, we first introduce a terrestrial LiDAR-based method for the automatic detection of rock bolts, as typically used in anchored retaining walls. These ground-support elements are segmented based on their geometry and serve as control points for the co-registration of two successive scans, before and after shotcreting. We then compare both point clouds to estimate the sprayed concrete thickness and the expended volume on the wall. This methodology is demonstrated on repeated scan data from a retaining wall in the city of Vigo (Spain), yielding a rock bolt detection rate of 91%, which permits detailed thickness information to be obtained and a total concrete volume of 3597 litres to be calculated. These results verify the effectiveness of the developed approach, increasing productivity and improving on previous empirical proposals for real-time thickness estimation.
Estimating retained gas volumes in the Hanford tanks using waste level measurements
Whitney, P.D.; Chen, G.; Gauglitz, P.A.; Meyer, P.A.; Miller, N.E.
1997-09-01
The Hanford site is home to 177 large, underground nuclear waste storage tanks. Safety and environmental concerns surround these tanks and their contents. One such concern is the propensity of the waste in these tanks to generate and trap flammable gases. This report focuses on understanding and improving the quality of retained gas volume estimates derived from tank waste level measurements. While direct measurements of gas volume are available for a small number of the Hanford tanks, the increasingly wide availability of tank waste level measurements provides an opportunity for less expensive (than direct gas volume measurement) assessment of the gas hazard for the Hanford tanks. Retained gas in the tank waste is inferred from level measurements -- either a long-term increase in the tank waste level, or fluctuations in tank waste level with atmospheric pressure changes. This report concentrates on the latter phenomenon. As atmospheric pressure increases, the pressure on the gas in the tank waste increases, resulting in a level decrease (as long as the tank waste is "soft" enough). Tanks with waste levels exhibiting fluctuations inversely correlated with atmospheric pressure fluctuations were catalogued in an earlier study. Additionally, models incorporating ideal-gas-law behavior and waste material properties have been proposed. These models explicitly relate the retained gas volume in the tank to the magnitude of the waste level fluctuations, dL/dP. This report describes how these models compare with the tank waste level measurements.
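The barometric-fluctuation idea above follows directly from the ideal-gas law: at constant temperature P·V is constant, so a pressure change dP produces a gas-volume change dV = -V·dP/P, which appears as a level change dL = dV/A over a waste surface of area A. Rearranging gives V = -P·A·(dL/dP). A minimal sketch under those assumptions (the tank numbers are illustrative, not Hanford data):

```python
def retained_gas_volume(dL_dP, area_m2, pressure_pa):
    """Estimate in-situ retained gas volume (m^3) from the slope of
    waste level vs. atmospheric pressure, assuming ideal-gas behavior
    and a waste column soft enough to track pressure changes.

    dL_dP       : slope of level vs. pressure (m/Pa); negative for a
                  compliant, gas-bearing waste column
    area_m2     : waste surface area (m^2)
    pressure_pa : absolute pressure acting on the gas (Pa)
    """
    return -pressure_pa * area_m2 * dL_dP

# Illustrative: a 410 m^2 tank whose level drops 1 mm per 1 kPa pressure rise
v = retained_gas_volume(dL_dP=-0.001 / 1000.0, area_m2=410.0,
                        pressure_pa=101325.0)
print(round(v, 1))  # about 41.5 m^3 of in-situ gas
```

In practice the pressure acting on trapped gas includes the hydrostatic head of the waste above it, so the effective P (and hence the inferred volume) is larger than the atmospheric value used in this toy example.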
Astrometric telescope facility. Preliminary systems definition study. Volume 3: Cost estimate
NASA Technical Reports Server (NTRS)
Sobeck, Charlie (Editor)
1987-01-01
The results of the Astrometric Telescope Facility (ATF) Preliminary System Definition Study conducted in the period between March and September 1986 are described. The main body of the report consists primarily of the charts presented at the study final review which was held at NASA Ames Research Center on July 30 and 31, 1986. The charts have been revised to reflect the results of that review. Explanations for the charts are provided on the adjoining pages where required. Note that charts which have been changed or added since the review are dated 10/1/86; unchanged charts carry the review date 7/30/86. In addition, a narrative summary is presented of the study results and two appendices. The first appendix is a copy of the ATF Characteristics and Requirements Document generated as part of the study. The second appendix shows the inputs to the Space Station Mission Requirements Data Base submitted in May 1986. The report is issued in three volumes. Volume 1 contains an executive summary of the ATF mission, strawman design, and study results. Volume 2 contains the detailed study information. Volume 3 has the ATF cost estimate, and will have limited distribution.
Glass Property Data and Models for Estimating High-Level Waste Glass Volume
Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.
2009-10-05
This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.
Haapea, Marianne; Veijola, Juha; Tanskanen, Päivikki; Jääskeläinen, Erika; Isohanni, Matti; Miettunen, Jouko
2011-12-30
Low participation is a potential source of bias in population-based studies. This article presents the use of inverse probability weighting (IPW) to adjust for non-participation in the estimation of brain volumes among subjects with schizophrenia. Altogether 101 schizophrenia subjects and 187 non-psychotic comparison subjects belonging to the Northern Finland 1966 Birth Cohort were invited to participate in a field study during 1999-2001. Volumes of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) were compared between the 54 participating schizophrenia subjects and 100 comparison subjects. IPW by illness-related auxiliary variables did not affect the estimated GM and WM mean volumes, but increased the estimated CSF mean volume in schizophrenia subjects. When adjusted for intracranial volume and family history of psychosis, IPW led to smaller estimated GM and WM mean volumes. In particular, IPW by disability pension and by a higher amount of hospitalisation due to psychosis affected the estimated mean brain volumes. The IPW method can be used to improve estimates affected by non-participation by reflecting the true differences in the target population.
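Inverse probability weighting as used here can be sketched generically: each participant is weighted by the inverse of their modeled probability of participating, so that subjects resembling the non-participants count for more (a schematic example with made-up numbers, not the cohort data):

```python
def ipw_mean(values, participation_probs):
    """Weighted mean in which each participant counts 1/p times,
    where p is their modeled probability of participating."""
    weights = [1.0 / p for p in participation_probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Illustrative: the subject with severe illness (low participation
# probability) is up-weighted, shifting the estimated mean CSF volume
# relative to the naive participant-only mean.
csf_ml = [150.0, 160.0, 200.0]   # hypothetical CSF volumes (ml)
probs  = [0.9, 0.9, 0.3]         # modeled participation probabilities
print(ipw_mean(csf_ml, probs))       # IPW estimate
print(sum(csf_ml) / len(csf_ml))     # naive unweighted mean
```

The participation probabilities themselves would come from a model (e.g. logistic regression on auxiliary variables such as disability pension and hospitalisation history) fitted to the full invited sample.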
Vargas, Alfredo; Krivokapic, Itana; Hauser, Andreas; Lawson Daku, Latévi Max
2013-03-21
We report a detailed DFT study of the energetic and structural properties of the spin-crossover Co(ii) complex [Co(tpy)(2)](2+) (tpy = 2,2':6',2''-terpyridine) in the low-spin (LS) and the high-spin (HS) states, using several generalized gradient approximation and hybrid functionals. In either spin-state, the results obtained with the functionals are consistent with one another and in good agreement with available experimental data. Although the different functionals correctly predict the LS state as the electronic ground state of [Co(tpy)(2)](2+), they give estimates of the HS-LS zero-point energy difference which strongly depend on the functional used. This dependency on the functional was also reported for the DFT estimates of the zero-point energy difference in the HS complex [Co(bpy)(3)](2+) (bpy = 2,2'-bipyridine) [A. Vargas, A. Hauser and L. M. Lawson Daku, J. Chem. Theory Comput., 2009, 5, 97]. The comparison of the and estimates showed that all functionals correctly predict an increase of the zero-point energy difference upon the bpy → tpy ligand substitution, which furthermore weakly depends on the functionals, amounting to . From these results and basic thermodynamic considerations, we establish that, despite their limitations, current DFT methods can be applied to the accurate determination of the spin-state energetics of complexes of a transition metal ion, or of these complexes in different environments, provided that the spin-state energetics is accurately known in one case. Thus, making use of the availability of a highly accurate ab initio estimate of the HS-LS energy difference in the complex [Co(NCH)(6)](2+) [L. M. Lawson Daku, F. Aquilante, T. W. Robinson and A. Hauser, J. Chem. Theory Comput., 2012, 8, 4216], we obtain for [Co(tpy)(2)](2+) and [Co(bpy)(3)](2+) best estimates of and , in good agreement with the known magnetic behaviour of the two complexes.
Gingerich, W.H.; Pityer, R.A.; Rach, J.J.
1987-01-01
Total blood volume and relative blood volumes in selected tissues were determined in non-anesthetized, confined rainbow trout by using (51)Cr-labelled trout erythrocytes as a vascular space marker. Mean total blood volume was estimated to be 4.09 ± 0.55 ml/100 g, or about 75% of that estimated with the commonly used plasma space marker Evans blue dye. Relative tissue blood volumes were greatest in highly perfused tissues such as kidney, gills, brain and liver and least in mosaic muscle. Estimates of tissue vascular spaces, made using radiolabelled erythrocytes, were only 25-50% of those based on plasma space markers. The consistently smaller vascular volumes obtained with labelled erythrocytes could be explained by assuming that commonly used plasma space markers diffuse from the vascular compartment.
Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César
2012-02-01
The management planning of construction and demolition (C&D) waste uses a single indicator which does not provide enough detailed information; therefore, other innovative and more precise indicators should be determined and implemented. The aim of this research is to improve existing C&D waste quantification tools for the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to estimate the C&D waste generated during their construction. This paper determines the values of three indicators for estimating the generation of C&D waste in new residential buildings in Spain, itemized by type of waste and construction stage. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.
Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization stem from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge-detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with other existing deconvolution algorithms in the digital phantom studies, and similar gains are obtained in the porcine data experiment.
McMillan, K; Bostani, M; McNitt-Gray, M; McCollough, C
2015-06-15
Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not
Ertekin, Tolga; Acer, Niyazi; Turgut, Ahmet T; Aycan, Kenan; Ozçelik, Ozlem; Turgut, Mehmet
2011-03-01
Stereological techniques using point counting and planimetry have been used to estimate pituitary gland volume. However, many studies have estimated pituitary gland volume with a mathematical approach, the elliptic formula. The objective of the current study was to determine pituitary gland volume using stereological methods and the elliptic formula on magnetic resonance imaging (MRI). In this study, pituitary gland volumes were estimated in a total of 28 subjects (22 females, 6 males) who were free of any pituitary or neurological symptoms and signs. The mean ± SD pituitary gland volumes for the point counting, planimetry and elliptic formula groups were 582.14 ± 140.16 mm³, 610.08 ± 133.17 mm³, and 432.82 ± 147.38 mm³, respectively. The mean CE for the pituitary gland volume estimates derived from the point counting technique was 8.07%. No significant difference was found between the point counting and planimetric methods for pituitary gland volume (P > 0.05). In addition, there was a 26.14% and 29.71% underestimation of pituitary volume as measured by the elliptic formula compared to the point counting and planimetric techniques, respectively. From these results, it can be concluded that stereological techniques are unbiased, efficient and reliable, and are ideally suited for in vivo examination of MRI data for pituitary gland volume estimation. Hence, we suggest that estimating pituitary gland volume using MRI and stereology may be clinically relevant for pituitary surgeons investigating the structure and function of the pituitary gland.
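The two estimators compared above can be sketched side by side: the elliptic (ellipsoid) formula V = (π/6)·L·W·H, and the Cavalieri point-counting estimator V = t·(a/p)·ΣP, where t is slice thickness, a/p the area represented by one grid point, and ΣP the total points counted over all slices (the dimensions below are illustrative, not the study data):

```python
import math

def ellipsoid_volume(length, width, height):
    """Elliptic formula commonly used for the pituitary:
    V = pi/6 * L * W * H (all dimensions in mm -> volume in mm^3)."""
    return math.pi / 6.0 * length * width * height

def cavalieri_volume(slice_thickness, area_per_point, points_per_slice):
    """Cavalieri point-counting estimator:
    V = t * (a/p) * sum of grid points counted over all slices."""
    return slice_thickness * area_per_point * sum(points_per_slice)

# Illustrative gland, mm units
print(round(ellipsoid_volume(12.0, 10.0, 6.0), 1))   # ellipsoid estimate, mm^3
print(cavalieri_volume(3.0, 4.0, [10, 16, 14, 8]))   # point-counting estimate, mm^3
```

The abstract's finding, that the elliptic formula underestimates volume relative to point counting, reflects the fact that a real gland is not a perfect ellipsoid, whereas the Cavalieri estimator makes no shape assumption.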
Herzog, Mark; Ackerman, Josh; Eagles-Smith, Collin A.; Hartman, Christopher
2016-01-01
In egg contaminant studies, it is necessary to calculate egg contaminant concentrations on a fresh wet weight basis and this requires accurate estimates of egg density and egg volume. We show that the inclusion or exclusion of the eggshell can influence egg contaminant concentrations, and we provide estimates of egg density (both with and without the eggshell) and egg-shape coefficients (used to estimate egg volume from egg morphometrics) for American avocet (Recurvirostra americana), black-necked stilt (Himantopus mexicanus), and Forster’s tern (Sterna forsteri). Egg densities (g/cm³) estimated for whole eggs (1.056 ± 0.003) were higher than egg densities estimated for egg contents (1.024 ± 0.001), and were 1.059 ± 0.001 and 1.025 ± 0.001 for avocets, 1.056 ± 0.001 and 1.023 ± 0.001 for stilts, and 1.053 ± 0.002 and 1.025 ± 0.002 for terns. The egg-shape coefficients for egg volume (Kv) and egg mass (Kw) also differed depending on whether the eggshell was included (Kv = 0.491 ± 0.001; Kw = 0.518 ± 0.001) or excluded (Kv = 0.493 ± 0.001; Kw = 0.505 ± 0.001), and varied among species. Although egg contaminant concentrations are rarely meant to include the eggshell, we show that the typical inclusion of the eggshell in egg density and egg volume estimates results in egg contaminant concentrations being underestimated by 6–13 %. Our results demonstrate that the inclusion of the eggshell significantly influences estimates of egg density, egg volume, and fresh egg mass, which leads to egg contaminant concentrations that are biased low. We suggest that egg contaminant concentrations be calculated on a fresh wet weight basis using only internal egg-content densities, volumes, and masses appropriate for the species. For the three waterbirds in our study, these corrected coefficients are 1.024 ± 0.001 for egg density, 0.493 ± 0.001 for Kv, and 0.505 ± 0.001 for Kw.
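The egg-shape coefficient reported above enters the standard morphometric volume estimate V = Kv·L·B², where L is egg length and B is maximum breadth. A minimal sketch using the egg-contents (eggshell-excluded) coefficients from the abstract; the egg dimensions themselves are illustrative:

```python
def egg_volume(length_cm, breadth_cm, k_v=0.493):
    """Egg volume (cm^3) from morphometrics: V = Kv * L * B^2.
    Default Kv = 0.493 is the eggshell-excluded coefficient reported
    for the three waterbird species in the study."""
    return k_v * length_cm * breadth_cm ** 2

def egg_content_mass(volume_cm3, density=1.024):
    """Fresh internal egg mass (g) from volume, using the egg-contents
    density (g/cm^3) reported in the abstract."""
    return density * volume_cm3

v = egg_volume(4.4, 3.1)              # illustrative avocet-sized egg
print(round(v, 2))                    # contents volume, cm^3
print(round(egg_content_mass(v), 2))  # fresh contents mass, g
```

Using the whole-egg coefficients (Kv = 0.491, density = 1.056) in the same formulas is exactly the conventional practice the authors argue biases contaminant concentrations low.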
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
Smith, S. Jerrod
2013-01-01
From the 1890s through the 1970s the Picher mining district in northeastern Ottawa County, Oklahoma, was the site of mining and processing of lead and zinc ore. When mining ceased in about 1979, as much as 165–300 million tons of mine tailings, locally referred to as “chat,” remained in the Picher mining district. Since 1979, some chat piles have been mined for aggregate materials and have decreased in volume and mass. Currently (2013), the land surface in the Picher mining district is covered by thousands of acres of chat, much of which remains on Indian trust land owned by allottees. The Bureau of Indian Affairs manages these allotted lands and oversees the sale and removal of chat from these properties. To help the Bureau of Indian Affairs better manage the sale and removal of chat, the U.S. Geological Survey, in cooperation with the Bureau of Indian Affairs, estimated the 2005 and 2010 volumes and masses of selected chat piles remaining on allotted lands in the Picher mining district. The U.S. Geological Survey also estimated the changes in volume and mass of these chat piles for the period 2005 through 2010. The 2005 and 2010 chat-pile volume and mass estimates were computed for 34 selected chat piles on 16 properties in the study area. All computations of volume and mass were performed on individual chat piles and on groups of chat piles in the same property. The Sooner property had the greatest estimated volume (4.644 million cubic yards) and mass (5.253 ± 0.473 million tons) of chat in 2010. Five of the selected properties (Sooner, Western, Lawyers, Skelton, and St. Joe) contained estimated chat volumes exceeding 1 million cubic yards and estimated chat masses exceeding 1 million tons in 2010. Four of the selected properties (Lucky Bill Humbah, Ta Mee Heh, Bird Dog, and St. Louis No. 6) contained estimated chat volumes of less than 0.1 million cubic yards and estimated chat masses of less than 0.1 million tons in 2010. The total volume of all
Estimating Volume, Biomass, and Carbon in Hedmark County, Norway Using a Profiling LiDAR
NASA Technical Reports Server (NTRS)
Nelson, Ross; Naesset, Erik; Gobakken, T.; Gregoire, T.; Stahl, G.
2009-01-01
A profiling airborne LiDAR is used to estimate the forest resources of Hedmark County, Norway, a 27390 square kilometer area in southeastern Norway on the Swedish border. One hundred five east-west profiling flight lines totaling 9166 km were flown over the entire county. The lines, spaced 3 km apart north-south, duplicate the systematic pattern of the Norwegian Forest Inventory (NFI) ground plot arrangement, enabling the profiler to transit 1290 circular, 250 square meter fixed-area NFI ground plots while collecting the systematic LiDAR sample. Seven hundred sixty-three of the 1290 plots were overflown within 17.8 m of plot center. Laser measurements of canopy height and crown density are extracted along fixed-length, 17.8 m segments closest to the center of the ground plot and related to basal area, timber volume, and above- and belowground dry biomass. Linear, nonstratified equations that estimate ground-measured total aboveground dry biomass report an R² = 0.63, with a regression RMSE = 35.2 t/ha. Nonstratified model results for the other biomass components, volume, and basal area are similar, with R² values for all models ranging from 0.58 (belowground biomass, RMSE = 8.6 t/ha) to 0.63. Consistently, the most useful single profiling LiDAR variable is quadratic mean canopy height, h̄_qa. Two-variable models typically include h̄_qa or mean canopy height, h̄_a, with a canopy density or a canopy height standard deviation measure. Stratification by productivity class did not improve the nonstratified models, nor did stratification by pine/spruce/hardwood. County-wide profiling LiDAR estimates are reported by land cover type and compared to NFI estimates.
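The model form described above, a linear regression of plot-level biomass on a profiling-LiDAR canopy metric such as quadratic mean canopy height, can be sketched generically (the toy plot data are illustrative, not the Hedmark measurements):

```python
import numpy as np

def fit_biomass_model(h_qa, biomass):
    """Ordinary least squares for biomass = b0 + b1 * h_qa,
    returning the coefficients and the R^2 of the fit."""
    X = np.column_stack([np.ones_like(h_qa), h_qa])
    coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
    pred = X @ coef
    ss_res = np.sum((biomass - pred) ** 2)
    ss_tot = np.sum((biomass - np.mean(biomass)) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Illustrative plot-level data: quadratic mean canopy height (m)
# vs. aboveground dry biomass (t/ha)
h = np.array([2.0, 5.0, 8.0, 11.0, 14.0])
b = np.array([15.0, 40.0, 70.0, 95.0, 120.0])
(b0, b1), r2 = fit_biomass_model(h, b)
print(round(b1, 2), round(r2, 3))
```

The two-variable models mentioned in the abstract simply add a second column (e.g. a canopy density measure) to the design matrix X.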
Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.
Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L
2015-03-01
Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated whether the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and the derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique.
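The Bland-Altman analysis used here (and in several other abstracts in this collection) reduces to computing the mean difference between paired measurements and its 95% limits of agreement. A minimal sketch on synthetic paired volumes, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired placental-volume estimates (cm^3) from a 2-D and a 3-D method.
vol_3d = rng.uniform(40.0, 120.0, 139)
vol_2d = vol_3d + 6.0 + rng.normal(0.0, 8.0, 139)  # 2-D biased slightly high

diff = vol_2d - vol_3d
mean_pair = (vol_2d + vol_3d) / 2.0   # x-axis of a Bland-Altman plot

bias = diff.mean()                    # mean difference (systematic bias)
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.2f} cm^3, LoA = [{loa_low:.2f}, {loa_high:.2f}]")
```

A wide interval between `loa_low` and `loa_high`, as found in this study, indicates the two methods cannot be used interchangeably even if their correlation is nonzero.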
Elliott, John G.; Flynn, Jennifer L.; Bossong, Clifford R.; Char, Stephen J.
2011-01-01
The subwatersheds with the greatest potential postwildfire and postprecipitation hazards are those with both high probabilities of debris-flow occurrence and large estimated volumes of debris-flow material. The high probabilities of postwildfire debris flows, the associated large estimated debris-flow volumes, and the densely populated areas along the creeks and near the outlets of the primary watersheds indicate that Indiana, Pennsylvania, and Spruce Creeks are associated with a relatively high combined debris-flow hazard.
Sofka, Michal; Zhang, Jingdan; Good, Sara; Zhou, S Kevin; Comaniciu, Dorin
2014-05-01
Routine ultrasound exam in the second and third trimesters of pregnancy involves manually measuring fetal head and brain structures in 2-D scans. The procedure requires a sonographer to find the standardized visualization planes with a probe and manually place measurement calipers on the structures of interest. The process is tedious, time consuming, and introduces user variability into the measurements. This paper proposes an automatic fetal head and brain (AFHB) system for automatically measuring anatomical structures from 3-D ultrasound volumes. The system searches the 3-D volume in a hierarchy of resolutions and by focusing on regions that are likely to be the measured anatomy. The output is a standardized visualization of the plane with correct orientation and centering as well as the biometric measurement of the anatomy. The system is based on a novel framework for detecting multiple structures in 3-D volumes. Since a joint model is difficult to obtain in most practical situations, the structures are detected in a sequence, one-by-one. The detection relies on Sequential Estimation techniques, frequently applied to visual tracking. The interdependence of structure poses and strong prior information embedded in our domain yields faster and more accurate results than detecting the objects individually. The posterior distribution of the structure pose is approximated at each step by sequential Monte Carlo. The samples are propagated within the sequence across multiple structures and hierarchical levels. The probabilistic model helps solve many challenges present in the ultrasound images of the fetus such as speckle noise, signal drop-out, shadows caused by bones, and appearance variations caused by the differences in the fetus gestational age. This is possible by discriminative learning on an extensive database of scans comprising more than two thousand volumes and more than thirteen thousand annotations. The average difference between ground truth and automatic
Estimating Wood Volume for Pinus Brutia Trees in Forest Stands from QUICKBIRD-2 Imagery
NASA Astrophysics Data System (ADS)
Patias, Petros; Stournara, Panagiota
2016-06-01
Knowledge of forest parameters, such as wood volume, is required for sustainable forest management. Collecting such information in the field is laborious and even infeasible in inaccessible areas. In this study, tree wood volume is estimated utilizing remote sensing techniques, which can facilitate the extraction of relevant information. The study area is the University Forest of Taxiarchis, which is located in central Chalkidiki, Northern Greece and covers an area of 58 km². The tree species under study is the conifer evergreen species P. brutia (Calabrian pine). Three plot surfaces of 10 m radius were used. VHR Quickbird-2 images are used in combination with an allometric relationship connecting the tree crown diameter with the diameter at breast height (Dbh), and a volume table developed for Greece. The overall methodology is based on individual tree crown delineation, using (a) the marker-controlled watershed segmentation approach and (b) the GEographic Object-Based Image Analysis approach. The aim of the first approach is to extract separate segments, each of them including a single tree and eventual lower vegetation, shadows, etc. The aim of the second approach is to detect and remove the "noisy" background. In the application of the first approach, the Blue, Green, Red, Infrared and PCA-1 bands are tested separately. In the application of the second approach, NDVI and image brightness thresholds are utilized. The achieved results are evaluated against field plot data. The observed differences range from −5% to +10%.
Improved estimates for the role of grey matter volume and GABA in bistable perception.
Sandberg, Kristian; Blicher, Jakob Udby; Del Pin, Simon Hviid; Andersen, Lau Møller; Rees, Geraint; Kanai, Ryota
2016-10-01
Across a century or more, ambiguous stimuli have been studied scientifically because they provide a method for studying the internal mechanisms of the brain while ensuring an unchanging external stimulus. In recent years, several studies have reported correlations between perceptual dynamics during bistable perception and particular brain characteristics such as the grey matter volume of areas in the superior parietal lobule (SPL) and the relative GABA concentration in the occipital lobe. Here, we attempt to replicate previous results using similar paradigms to those used in the studies first reporting the correlations. Using the original findings as priors for Bayesian analyses, we found strong support for the correlation between structure-from-motion percept duration and anterior SPL grey matter volume. Correlations between percept duration and other parietal areas as well as occipital GABA, however, were not directly replicated or appeared less strong than previous studies suggested. Inspection of the posterior distributions (current "best guess" based on new data given old data as prior) revealed that several original findings may reflect true relationships although no direct evidence was found in support of them in the current sample. Additionally, we found that multiple regression models based on grey matter volume at 2-3 parietal locations (but not including GABA) were the best predictors of percept duration, explaining approximately 35% of the inter-individual variance. Taken together, our results provide new estimates of correlation strengths, generally increasing confidence in the role of the aSPL while decreasing confidence in some of the other relationships.
Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng
2015-01-01
Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of these correlation coefficients were significantly larger than those between SGNA and the time-domain, frequency-domain and non-linear HRV measures, both at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increment of SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, while HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters, both at baseline and after MI. Atrial tachycardia (AT) episodes were invariably preceded by SCNA and SGNA, which progressively increased from 120 s through 90 s and 60 s to 30 s before AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433
Estimation of Residual Peritoneal Volume Using Technetium-99m Sulfur Colloid Scintigraphy.
Katopodis, Konstantinos P; Fotopoulos, Andrew D; Balafa, Olga C; Tsiouris, Spyridon Th; Triandou, Eleni G; Al-Bokharhli, Jichad B; Kitsos, Athanasios C; Dounousi, Evagelia C; Siamopoulos, Konstantinos C
2015-01-01
Residual peritoneal volume (RPV) may contribute to the development of ultrafiltration failure in patients with normal transcapillary ultrafiltration. The aim of this study was to estimate the RPV using intraperitoneal technetium-99m sulfur colloid (Tc). Twenty patients on peritoneal dialysis were studied. RPV was estimated by: 1) intraperitoneal instillation of Tc (RPV-Tc) and 2) classic Twardowski calculations using endogenous solutes, such as urea (RPV-u), creatinine (RPV-cr), and albumin (RPV-alb). Each method's reproducibility was assessed in a subgroup of patients in two consecutive measurements 48 h apart. Both methods displayed reproducibility between days 1 and 2 (r = 0.93, p = 0.001 for RPV-Tc and r = 0.90, p = 0.001 for RPV-alb, respectively). We found a statistically significant difference between RPV-Tc and RPV-cr measurements (347.3 ± 116.7 vs. 450.0 ± 67.8 ml; p = 0.001) and RPV-u (515.5 ± 49.4 ml; p < 0.001), but not with RPV-alb (400.1 ± 88.2 ml; p = 0.308). A good correlation was observed only between RPV-Tc and RPV-alb (p < 0.001). The Tc method can estimate the RPV as efficiently as the high-molecular-weight endogenous solute measurement method. It can also provide an imaging estimate of the intraperitoneal distribution of RPV.
Wille, Marie-Luise; Langton, Christian M
2016-02-01
The acceptance of broadband ultrasound attenuation (BUA) for the assessment of osteoporosis suffers from a limited understanding of both ultrasound wave propagation through cancellous bone and its exact dependence upon the material and structural properties. It has recently been proposed that ultrasound wave propagation in cancellous bone may be described by a concept of parallel sonic rays, with the transit time of each ray defined by the proportions of bone and marrow traversed. A Transit Time Spectrum (TTS) describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface aperture of the receive ultrasound transducer. The aim of this study was to test the hypothesis that the solid volume fraction (SVF) of simplified bone:marrow replica models may be reliably estimated from the corresponding ultrasound transit time spectrum. Transit time spectra were derived via digital deconvolution of the experimentally measured input and output ultrasonic signals, and compared to the predicted TTS based on the parallel sonic ray concept, demonstrating agreement in both position and amplitude of spectral peaks. Solid volume fraction was calculated from the TTS; agreement of the true (geometric calculation) value with the predicted (computer simulation) and experimentally derived values was R² = 99.9% and R² = 97.3%, respectively. It is therefore envisaged that ultrasound transit time spectroscopy (UTTS) offers the potential to reliably estimate bone mineral density and hence the established T-score parameter for clinical osteoporosis assessment.
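The parallel sonic ray concept makes the link between transit time and solid volume fraction a simple linear relation per ray, so the SVF can be recovered as a TTS-weighted average. The sketch below illustrates that arithmetic only; the thickness, wave speeds, and TTS bins are assumed toy values, not the paper's measurements.

```python
import numpy as np

# Parallel sonic ray concept (sketch, assumed values): each ray crosses a fixed
# thickness d of bone:marrow composite; its transit time depends on the
# fraction f of solid (bone) along its path.
d = 0.02            # sample thickness, m (assumed)
v_bone = 3500.0     # longitudinal speed in the solid phase, m/s (assumed)
v_marrow = 1500.0   # speed in the marrow/fluid phase, m/s (assumed)

def transit_time(f):
    """Transit time of a ray whose path has solid volume fraction f."""
    return d * (f / v_bone + (1.0 - f) / v_marrow)

def solid_fraction(t):
    """Invert the linear time/fraction relation for a single ray."""
    return (t - d / v_marrow) / (d / v_bone - d / v_marrow)

# The sample SVF is the TTS-weighted mean of the per-ray solid fractions.
fractions = np.array([0.1, 0.2, 0.3, 0.4])   # per-ray solid fractions (toy TTS bins)
weights = np.array([0.4, 0.3, 0.2, 0.1])     # TTS amplitudes (proportion of rays)
times = transit_time(fractions)
svf = np.sum(weights * solid_fraction(times)) / weights.sum()
print(f"estimated SVF = {svf:.3f}")
```

In the study, the TTS itself is obtained first, by digital deconvolution of the measured input and output ultrasonic signals; that step is not shown here.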
Volume estimation of tonsil phantoms using an oral camera with 3D imaging
Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh
2016-01-01
Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and, from that, estimate the airway obstruction percentage and the volume of the tonsils in realistic 3D-printed models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667
Limitations of Stroke Volume Estimation by Non-Invasive Blood Pressure Monitoring in Hypergravity
2015-01-01
Background Altitude and gravity changes during aeromedical evacuations induce exacerbated cardiovascular responses in unstable patients. Non-invasive cardiac output monitoring is difficult to perform in this environment with limited access to the patient. We evaluated the feasibility and accuracy of stroke volume estimation by finger photoplethysmography (SVp) in hypergravity. Methods Finger arterial blood pressure (ABP) waveforms were recorded continuously in ten healthy subjects before, during and after exposure to +Gz accelerations in a human centrifuge. The protocol consisted of a 2-min and 8-min exposure up to +4 Gz. SVp was computed from ABP using Liljestrand, systolic area, and Windkessel algorithms, and compared with reference values measured by echocardiography (SVe) before and after the centrifuge runs. Results The ABP signal could be used in 83.3% of cases. After calibration with echocardiography, SVp changes did not differ from SVe and values were linearly correlated (p<0.001). The three algorithms gave comparable SVp. Reproducibility between SVp and SVe was the best with the systolic area algorithm (limits of agreement −20.5 and +38.3 ml). Conclusions Non-invasive ABP photoplethysmographic monitoring is an interesting technique to estimate relative stroke volume changes in moderate and sustained hypergravity. This method may aid physicians for aeronautic patient monitoring. PMID:25798613
Harbers, Jasper V; Huijbregts, Mark A J; Posthuma, Leo; Van de Meent, Dik
2006-03-01
Although many chemicals are in use, the environmental impacts of only a few have been established, usually on a per-chemical basis. Uncertainty remains about the overall impact of chemicals. This paper estimates the combined toxic pressure on coastal North Sea ecosystems from 343 high-production-volume chemicals used within the catchments of the rivers Rhine, Meuse, and Scheldt. Multimedia fate modeling and species sensitivity distribution-based effects estimation are applied. Calculations start from production volumes and emission rates and use physicochemical substance properties and aquatic ecotoxicity data. Parameter uncertainty is addressed by Monte Carlo simulations. Results suggest that the procedure is technically feasible. The combined toxic pressure of all 343 chemicals in coastal North Sea water is 0.025 (2.5% of the species are exposed to concentration levels above EC50 values), with a wide confidence interval of nearly 0-1. This uncertainty appears to be largely due to uncertainties in the interspecies variances of aquatic toxicities and, to a lesser extent, to uncertainties in emissions and degradation rates. Given these uncertainties, the results support only a gross ranking of chemicals into two categories: negligible and possibly relevant contributions. With 95% confidence, 283 of the 343 chemicals (83%) contribute negligibly (less than 0.1%) to the overall toxic pressure, and only 60 (17%) need further consideration.
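A common way to combine per-chemical toxic pressures from species sensitivity distributions is response addition: each chemical's potentially affected fraction (PAF) comes from a log-normal SSD, and the multi-substance PAF is one minus the product of the unaffected fractions. The sketch below shows that calculation with invented exposure concentrations and SSD parameters; it is a generic illustration of the SSD approach, not the paper's actual fate-model pipeline.

```python
import math

def paf(conc, mu_log10_ec50, sigma):
    """Potentially Affected Fraction from a log-normal species sensitivity
    distribution: fraction of species whose EC50 lies below the exposure."""
    z = (math.log10(conc) - mu_log10_ec50) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ms_paf(pafs):
    """Multi-substance PAF under response addition (independence assumption)."""
    prod = 1.0
    for p in pafs:
        prod *= (1.0 - p)
    return 1.0 - prod

# Toy inputs (hypothetical): exposure conc (mg/L), SSD mean log10(EC50), SSD sigma.
chemicals = [(0.01, 1.0, 0.7), (0.05, 0.5, 0.9), (0.001, 2.0, 0.8)]
pafs = [paf(c, mu, s) for c, mu, s in chemicals]
combined = ms_paf(pafs)
print(f"combined toxic pressure = {combined:.4f}")
```

In the paper, the inputs to this step are themselves distributions (via Monte Carlo), which is what produces the wide 0-1 confidence interval on the combined pressure.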
A novel optical method for estimating the near-wall volume fraction in granular flows
NASA Astrophysics Data System (ADS)
Sarno, Luca; Nicolina Papa, Maria; Carleo, Luigi; Tai, Yih-Chin
2016-04-01
Geophysical phenomena, such as debris flows, pyroclastic flows and rock avalanches, involve the rapid flow of granular mixtures. The dynamics of these flows is still far from fully understood, owing to their great complexity compared with clear water or other monophasic fluids. In this regard, physical models at laboratory scale represent important tools for understanding the still unclear properties of granular flows and their constitutive laws, under simplified experimental conditions. Besides the velocity and the shear rate, the volume fraction is also strongly interlinked with the rheology of granular materials. Yet, a reliable estimation of this quantity is not easy through non-invasive techniques. In this work, a novel cost-effective optical method for estimating the near-wall volume fraction is presented and then applied to a laboratory study of steady-state granular flows. A preliminary numerical investigation, through Monte-Carlo generations of grain distributions under controlled illumination conditions, allowed us to find the stochastic relationship between the near-wall volume fraction, c3D, and a measurable quantity (the two-dimensional volume fraction), c2D, obtainable through an appropriate binarization of gray-scale images captured by a camera placed in front of the transparent boundary. Such a relation can be well described by c3D = a·exp(b·c2D), with parameters depending only on the angle of incidence of the light, ζ. An experimental validation of the proposed approach is carried out on dispersions of white plastic grains, immersed in various ambient fluids. The mixture, confined in a box with a transparent window, is illuminated by a flicker-free LED lamp, placed so as to form a given ζ with the measuring surface, and is photographed by a camera placed in front of the same window. The predicted exponential law is found to be in sound agreement with experiments for a wide range of ζ (10° < ζ < 45°). The technique is then applied to steady-state dry
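The proposed relation c3D = a·exp(b·c2D) can be calibrated by log-linearizing it and fitting with ordinary least squares. A minimal sketch on synthetic calibration pairs; the "true" a and b and the noise level are invented for illustration:

```python
import numpy as np

# Fit c3D = a * exp(b * c2D) to calibration pairs (c2D, c3D). Synthetic data:
a_true, b_true = 0.05, 3.0
rng = np.random.default_rng(2)
c2d = rng.uniform(0.3, 0.9, 50)
c3d = a_true * np.exp(b_true * c2d) * np.exp(rng.normal(0.0, 0.02, 50))

# Log-linearize: ln c3D = ln a + b * c2D, then ordinary least squares.
b_fit, ln_a_fit = np.polyfit(c2d, np.log(c3d), 1)
a_fit = np.exp(ln_a_fit)
print(f"a = {a_fit:.3f}, b = {b_fit:.2f}")

# Predict the near-wall volume fraction from a measured 2-D fraction.
c3d_pred = a_fit * np.exp(b_fit * 0.6)
```

In the actual method the calibration pairs come from Monte-Carlo generated grain distributions, and a separate (a, b) pair would be fitted for each light incidence angle ζ.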
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion
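Linear-regression partial volume correction models each voxel's perfusion-weighted signal as a mix of tissue-specific values weighted by partial volume fractions, then solves for the tissue values by least squares over a small kernel. The sketch below shows that core idea on a toy two-compartment (GM/WM) kernel; all numbers are assumed, and this is a generic illustration rather than the paper's exact QUASAR-based implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 5x5 kernel: each voxel's ASL signal is modeled as
#   signal = f_gm * M_gm + f_wm * M_wm + noise
n = 25
f_gm = rng.uniform(0.2, 0.8, n)          # GM partial volume fractions
f_wm = 1.0 - f_gm                        # WM fractions (two-compartment toy model)
m_gm_true, m_wm_true = 60.0, 20.0        # "true" tissue perfusion values (assumed)
signal = f_gm * m_gm_true + f_wm * m_wm_true + rng.normal(0.0, 1.0, n)

# Solve [f_gm f_wm] @ [M_gm, M_wm] = signal in the least-squares sense.
A = np.column_stack([f_gm, f_wm])
(m_gm, m_wm), *_ = np.linalg.lstsq(A, signal, rcond=None)
print(f"GM perfusion = {m_gm:.1f}, WM perfusion = {m_wm:.1f}")
```

Sliding this regression across the image yields perfusion maps whose GM values no longer depend on the GM volume fraction, which is the behavior the abstract reports for the proposed method.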
Yip, Jia Miin; Mouratova, Naila; Jeffery, Rebecca M; Veitch, Daisy E; Woodman, Richard J; Dean, Nicola R
2012-02-01
Preoperative assessment of breast volume could contribute significantly to the planning of breast-related procedures. The availability of 3D scanning technology provides us with an innovative method for doing this. We performed this study to compare measurements by this technology with breast volume measurement by water displacement. A total of 30 patients undergoing 39 mastectomies were recruited from our center. The volume of each patient's breast(s) was determined with a preoperative 3D laser scan. The volume of the mastectomy specimen was then measured in the operating theater by water displacement. There was a strong linear association between breast volumes measured using the 2 different methods when using a Pearson correlation (r = 0.95, P < 0.001). The mastectomy mean volume was defined by the equation: mastectomy mean volume = (scan mean volume × 1.03) − 70.6. This close correlation validates the Cyberware WBX Scanner as a tool for assessment of breast volume.
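The reported regression is simple enough to apply directly. A minimal sketch using the abstract's own equation; the 500 ml input is just an illustrative value, and the units (ml) are assumed:

```python
def mastectomy_volume_from_scan(scan_volume_ml):
    """Predict mastectomy specimen volume from a 3D surface-scan volume,
    using the linear relation reported in the abstract:
    mastectomy volume = scan volume * 1.03 - 70.6."""
    return scan_volume_ml * 1.03 - 70.6

# Illustrative use: a 500 ml scan-derived breast volume.
predicted = mastectomy_volume_from_scan(500.0)
print(f"predicted specimen volume = {predicted:.1f} ml")
```

Note the negative intercept means the relation should not be extrapolated to very small scan volumes.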
NASA Astrophysics Data System (ADS)
Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed
2017-02-01
A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.
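The SOC algorithm described above pairs dynamic time warping with Kalman filtering. As a generic illustration of the DTW half (not the authors' implementation), the sketch below computes a classic DTW distance and uses it to match a measured strain trace against reference templates indexed by SOC; all traces and SOC labels are toy values:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Match a measured FO strain trace against SOC-indexed templates (toy data);
# the best-matching template gives a coarse SOC estimate.
templates = {20: np.array([0.0, 0.1, 0.3]), 80: np.array([0.0, 0.5, 1.0])}
measured = np.array([0.0, 0.45, 0.95])
soc = min(templates, key=lambda s: dtw_distance(measured, templates[s]))
print(f"estimated SOC = {soc}%")
```

In a full system, such a coarse match would then feed a Kalman filter that fuses it with current integration to produce a smoothed SOC estimate.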
Tomiyama, Yuuki; Yoshinaga, Keiichiro; Fujii, Satoshi; Ochi, Noriki; Inoue, Mamiko; Nishida, Mutumi; Aziki, Kumi; Horie, Tatsunori; Katoh, Chietsugu; Tamaki, Nagara
2015-01-01
Increasing vascular diameter and attenuated vascular elasticity may be reliable markers for atherosclerotic risk assessment. However, previous measurements have been complex, operator-dependent or invasive. Recently, we developed a new automated oscillometric method to measure a brachial artery's estimated area (eA) and volume elastic modulus (VE). The aim of this study was to investigate the reliability of new automated oscillometric measurement of eA and VE. Rest eA and VE were measured using the recently developed automated detector with the oscillometric method. eA was estimated using pressure/volume curves and VE was defined as follows: VE = Δpressure/(100 × Δarea/area) mm Hg/%. Sixteen volunteers (age 35.2±13.1 years) underwent the oscillometric measurements and brachial ultrasound at rest and under nitroglycerin (NTG) administration. Oscillometric measurement was performed twice on different days. The rest eA correlated with ultrasound-measured brachial artery area (r=0.77, P<0.001). Rest eA and VE measurement showed good reproducibility (eA: intraclass correlation coefficient (ICC)=0.88, VE: ICC=0.78). Under NTG stress, eA was significantly increased (12.3±3.0 vs. 17.1±4.6 mm2, P<0.001), and this was similar to the case with ultrasound evaluation (4.46±0.72 vs. 4.73±0.75 mm, P<0.001). VE was also decreased (0.81±0.16 vs. 0.65±0.11 mm Hg/%, P<0.001) after NTG. Cross-sectional vascular area calculated using this automated oscillometric measurement correlated with ultrasound measurement and showed good reproducibility. Therefore, this is a reliable approach and this modality may have practical application to automatically assess muscular artery diameter and elasticity in clinical or epidemiological settings. PMID:25693851
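The VE definition from the abstract is a one-line formula: pulse pressure divided by the percentage change in cross-sectional area. A minimal sketch; the 1.0 mm² area swing and 6.6 mm Hg pressure swing in the example are invented illustrative inputs, not measured values:

```python
def volume_elastic_modulus(area_mm2, d_area_mm2, d_pressure_mmhg):
    """Volume elastic modulus as defined in the abstract:
    VE = d_pressure / (100 * d_area / area)  [mm Hg per % area change]."""
    return d_pressure_mmhg / (100.0 * d_area_mm2 / area_mm2)

# Illustrative use: a 12.3 mm^2 artery whose area changes by 1.0 mm^2
# under a 6.6 mm Hg pressure change (hypothetical numbers).
ve = volume_elastic_modulus(12.3, 1.0, 6.6)
print(f"VE = {ve:.2f} mm Hg/%")
```

A stiffer artery (smaller area change for the same pressure change) gives a larger VE, which is why VE falls after nitroglycerin-induced vasodilation.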
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Using LiDAR to Estimate Surface Erosion Volumes within the Post-storm 2012 Bagley Fire
NASA Astrophysics Data System (ADS)
Mikulovsky, R. P.; De La Fuente, J. A.; Mondry, Z. J.
2014-12-01
The total post-storm 2012 Bagley fire sediment budget of the Squaw Creek watershed in the Shasta-Trinity National Forest was estimated using several methods. A portion of the budget was quantitatively estimated using LiDAR. Simple workflows were designed to estimate the eroded volumes of debris slides, fill failures, gullies, altered channels and streams. LiDAR was also used to estimate depositional volumes. Thorough manual mapping of large erosional features using the ArcGIS 10.1 Geographic Information System was required, as these mapped features determined the eroded-volume boundaries in 3D space. The 3D pre-erosion surface for each mapped feature was interpolated from the boundary elevations. A surface-difference calculation was run using the estimated pre-erosion surfaces and the LiDAR surfaces to determine the volume of sediment potentially delivered into the stream system. In addition, cross sections of altered channels and streams were taken using stratified random selection based on channel gradient and stream order, respectively. The original pre-storm surfaces of channel features were estimated using the cross sections and erosion-depth criteria. The open-source software Inkscape was used to estimate cross-sectional areas for randomly selected channel features, which were then averaged for each channel-gradient and stream-order class. The average areas were then multiplied by the length of each class to estimate the total eroded volume of altered channels and streams. Finally, reservoir and in-channel depositional volumes were estimated by mapping channel forms and generating specific reservoir elevation zones associated with depositional events. The in-channel areas and the zones within the reservoir were multiplied by estimated and field-observed sediment thicknesses to obtain a best-guess sediment volume. In-channel estimates included re-occupying stream channel cross sections established before the fire. Once volumes were calculated, other erosion processes of the Bagley
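The surface-difference step reduces to subtracting the post-storm LiDAR surface from the interpolated pre-erosion surface and summing the per-cell losses times the cell area. A minimal sketch on a toy 2×2 grid with an assumed 1 m cell size (not the study's actual DEM resolution):

```python
import numpy as np

# DEM differencing sketch: eroded volume = sum over cells of
# (pre-erosion surface - post-storm LiDAR surface) * cell area,
# counting only cells that lost material. Grids below are toy data.
cell_area = 1.0 * 1.0                            # m^2 per grid cell (assumed 1 m DEM)
pre  = np.array([[10.0, 10.0], [10.0, 10.0]])    # interpolated pre-erosion surface (m)
post = np.array([[ 9.0,  9.5], [10.0, 10.2]])    # post-storm LiDAR surface (m)

loss = np.clip(pre - post, 0.0, None)            # ignore cells that aggraded
eroded_volume = float(np.sum(loss) * cell_area)
print(f"eroded volume = {eroded_volume:.2f} m^3")
```

Summing `post - pre` over aggrading cells instead would give the corresponding depositional volume, the other half of the LiDAR budget described above.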
MCNP ESTIMATE OF THE SAMPLED VOLUME IN A NON-DESTRUCTIVE IN SITU SOIL CARBON ANALYSIS.
WIELOPOLSKI, L.; DIOSZEGI, I.; MITRA, S.
2004-05-03
Global warming, promoted by anthropogenic CO2 emission into the atmosphere, is partially mitigated by the photosynthesis of terrestrial ecosystems, which act as atmospheric CO2 scrubbers and sequester carbon in soil. Switching from till to no-till soil management practices in agriculture further augments this process. Carbon sequestration is also advanced by a carbon "credit" system whereby credits can be traded between CO2 producers and sequesterers. Implementation of carbon-credit trading will be further promoted by the recent development of a non-destructive in situ carbon monitoring system based on inelastic neutron scattering (INS). Volumes and depth distributions defined by the 0.1, 1.0, 10, 50, and 90 percent neutron isofluxes, from a point source located at either 5 or 30 cm above the surface, were estimated using Monte Carlo calculations.
Water volume estimates of the Greenland Perennial Firn Aquifer from in situ measurements
NASA Astrophysics Data System (ADS)
Koenig, L.; Miege, C.; Forster, R. R.; Brucker, L.
2013-12-01
Improving our understanding of the complex Greenland hydrologic system is necessary for assessing change across the Greenland Ice Sheet and its contribution to sea level rise (SLR). A new component of the Greenland hydrologic system, a Perennial Firn Aquifer (PFA), was recently discovered in April 2011. The PFA represents a large storage of liquid water within the Greenland Ice Sheet, with an area of 70,000 ± 10,000 km2 simulated by the RACMO2/GR regional climate model, which closely follows airborne radar-derived mapping (Forster et al., in press). The average top surface depth of the PFA as detected by radar is 23 m. In April 2013, our team drilled through the PFA for the first time to gain an understanding of the firn structure constraining the PFA, to estimate the water volume within the PFA, and to measure PFA temperatures and densities. At our drill site in Southeast Greenland (~100 km Northwest of Kulusuk), water fills or partially fills the available firn pore space from depths of ~12 to 37 m. The temperature within the PFA depths is constant at 0.1 ± 0.1° C, while the 12 m of seasonally dry firn above the PFA has a temperature profile dominated by surface temperature forcing. Near the bottom of the PFA, water completely fills the available pore space as the firn is compressed to ice, entrapping water-filled bubbles, as opposed to air-filled bubbles, which then start to refreeze. A PFA maximum density is reached as the water filling the pore space, increasing density, begins refreezing back into ice at a lower density. We define this depth as the pore water refreeze depth and use this depth as the bottom of the PFA to calculate volume. It is certain, however, that a small amount of water does exist below this depth, which we do not account for. The density profile obtained from the ACT11B firn core, the closest seasonally dry firn core, is compared to both gravitational densities and high resolution densities derived from a neutron density probe at the PFA site. The
D'Alessandro, Brian; Dhawan, Atam P
2012-11-01
Subsurface information about skin lesions, such as the blood volume beneath the lesion, is important for the analysis of lesion severity towards early detection of skin cancer such as malignant melanoma. Depth information can be obtained from diffuse reflectance based multispectral transillumination images of the skin. An inverse volume reconstruction method is presented which uses a genetic algorithm optimization procedure with a novel population initialization routine and nudge operator based on the multispectral images to reconstruct the melanin and blood layer volume components. Forward model evaluation for fitness calculation is performed using a parallel processing voxel-based Monte Carlo simulation of light in skin. Reconstruction results for simulated lesions show excellent volume accuracy. Preliminary validation is also done using a set of 14 clinical lesions, categorized into lesion severity by an expert dermatologist. Using two features, the average blood layer thickness and the ratio of blood volume to total lesion volume, the lesions can be classified into mild and moderate/severe classes with 100% accuracy. The method therefore has excellent potential for detection and analysis of pre-malignant lesions.
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Loubère, Raphaël
2016-08-01
In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time tn+1 for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time tn is scattered onto small sub-cells (Ns = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time tn. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time tn to time tn+1. The new sub-grid data at time tn+1 are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
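The a posteriori detection step described above can be sketched in a few lines. This is a minimal, illustrative check, not the authors' implementation: it flags a cell as troubled when its candidate cell average is non-finite or leaves a relaxed bound built from the neighborhood minima and maxima of the previous time step; the tolerance values are assumptions.

```python
import numpy as np

def flag_troubled_cells(candidate_means, local_min, local_max,
                        rel_tol=1e-3, abs_tol=1e-7):
    """Flag cells whose candidate solution is non-finite (floating point
    errors) or violates a relaxed discrete maximum principle built from
    the neighborhood min/max of the previous time step. A positivity
    check for constrained fields would be an additional criterion."""
    u = np.asarray(candidate_means, dtype=float)
    lo = np.asarray(local_min, dtype=float)
    hi = np.asarray(local_max, dtype=float)
    slack = np.maximum(rel_tol * (hi - lo), abs_tol)  # relaxed bounds
    return ~np.isfinite(u) | (u < lo - slack) | (u > hi + slack)

# Toy example: four cells, the third overshoots, the fourth is NaN.
u_cand = np.array([1.00, 1.02, 5.00, np.nan])
lo = np.full(4, 0.9)
hi = np.full(4, 1.1)
print(flag_troubled_cells(u_cand, lo, hi))
```

Flagged cells would then have their sub-cell averages recomputed with the more robust TVD finite volume scheme, as the abstract describes.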
Harrow, Lisa; Espie, Colin
2010-03-01
The 'quarter-hour rule' (QHR) instructs the person with insomnia to get out of bed after 15 min of wakefulness and return to bed only when sleep feels imminent. Recent research has identified that sleep can be significantly improved using this simple intervention (Malaffo and Espie, Sleep, 27(s), 2004, 280; Sleep, 29(s), 2006, 257), but successful implementation depends on estimating time without clock monitoring, and the insomnia literature indicates that poor time perception is a maintaining factor in primary insomnia (Harvey, Behav. Res. Ther., 40, 2002, 869). This study expands upon previous research with the aim of identifying whether people with insomnia can accurately perceive a 15-min interval during the sleep-onset period, and can therefore successfully implement the QHR. A mixed-model ANOVA design was applied, with a between-participants factor of group (insomnia versus good sleepers) and a within-participants factor of context (night versus day). Results indicated no differences between groups and contexts on time estimation tasks. This was despite an increase in arousal in the night context for both groups, and tentative support for the impact of arousal in inducing underestimations of time. These results provide promising support for the successful application of the QHR in people with insomnia. The results are discussed in terms of whether the design employed successfully accessed the processes involved in distorting time perception in insomnia. Suggestions for future research are provided and limitations of the current study are discussed.
NASA Astrophysics Data System (ADS)
Craig, Norman C.; Demaison, Jean; Groner, Peter; Rudolph, Heinz Dieter; Vogt, Natalja
2015-06-01
An accurate equilibrium structure of trans-hexatriene has been determined by the mixed estimation method with rotational constants from 8 deuterium and carbon isotopologues and high-level quantum chemical calculations. In the mixed estimation method, bond parameters are fit concurrently to moments of inertia of various isotopologues and to theoretical bond parameters, each data set carrying appropriate uncertainties. The accuracy of this structure is 0.001 Å and 0.1°. Structures of similar accuracy have been computed for the cis,cis, trans,trans, and cis,trans isomers of octatetraene at the CCSD(T) level with a basis set of wCVQZ(ae) quality, adjusted in accord with the experience gained with trans-hexatriene. The structures are compared with butadiene and with cis-hexatriene to show how increasing the length of the chain in polyenes leads to increased blurring of the difference between single and double bonds in the carbon chain. In trans-hexatriene, r("C1=C2") = 1.339 Å and r("C3=C4") = 1.346 Å, compared to 1.338 Å for the "double" bond in butadiene; r("C2-C3") = 1.449 Å, compared to 1.454 Å for the "single" bond in butadiene. "Double" bonds increase in length; "single" bonds decrease in length.
Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz
2009-09-01
The widely used procedure of evaluating cup orientation following total hip arthroplasty from a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from CT data. Although encouraging results have been reported, their extensive use in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer for proprietary reasons, and by their requirement of either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, indicating the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform toolkits Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.
Programmatic methods for addressing contaminated volume uncertainties.
DURHAM, L.A.; JOHNSON, R.L.; RIEMAN, C.R.; SPECTOR, H.L.; Environmental Science Division; U.S. ARMY CORPS OF ENGINEERS BUFFALO DISTRICT
2007-01-01
Accurate estimates of the volumes of contaminated soils or sediments are critical to effective program planning and to successfully designing and implementing remedial actions. Unfortunately, data available to support the preremedial design are often sparse and insufficient for accurately estimating contaminated soil volumes, resulting in significant uncertainty associated with these volume estimates. The uncertainty in the soil volume estimates significantly contributes to the uncertainty in the overall project cost estimates, especially since excavation and off-site disposal are the primary cost items in soil remedial action projects. The Army Corps of Engineers Buffalo District's experience has been that historical contaminated soil volume estimates developed under the Formerly Utilized Sites Remedial Action Program (FUSRAP) often underestimated the actual volume of subsurface contaminated soils requiring excavation during the course of a remedial activity. In response, the Buffalo District has adopted a variety of programmatic methods for addressing contaminated volume uncertainties. These include developing final status survey protocols prior to remedial design, explicitly estimating the uncertainty associated with volume estimates, investing in predesign data collection to reduce volume uncertainties, and incorporating dynamic work strategies and real-time analytics in predesign characterization and remediation activities. This paper describes some of these experiences in greater detail, drawing from the knowledge gained at Ashland 1, Ashland 2, Linde, and Rattlesnake Creek. In the case of Rattlesnake Creek, these approaches provided the Buffalo District with an accurate predesign contaminated volume estimate and resulted in one of the first successful FUSRAP fixed-price remediation contracts for the Buffalo District.
Bakhtiari, M; Schmitt, J; Sarfaraz, M; Osik, C
2015-06-15
Purpose: To establish a minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean of the average margins across groups converges to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.
2012-04-01
Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
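The core of the LKB model is a probit (error-function) dose-response: NTCP = Φ((EUD − TD50)/(m · TD50)). A minimal sketch of that relation is below; the parameter values in the example call are illustrative only, not the Emami or Schultheiss fits discussed in the abstract.

```python
from math import erf, sqrt

def lkb_ntcp(eud, td50, m):
    """Lyman probit dose-response: NTCP = Phi(t) with
    t = (EUD - TD50) / (m * TD50), Phi the standard normal CDF.
    eud: equivalent uniform dose (Gy); td50: dose for 50% complication
    probability (Gy); m: inverse slope parameter."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Illustrative parameters only (hypothetical td50/m, not fitted values):
print(round(lkb_ntcp(eud=20.0, td50=66.5, m=0.175), 6))
```

By construction the curve passes through 0.5 at EUD = TD50, and m controls how steeply the complication probability rises around that dose.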
NASA Astrophysics Data System (ADS)
Takagi, Hideo D.; Swaddle, Thomas W.
1996-01-01
The outer-sphere contribution to the volume of activation of homogeneous electron exchange reactions is estimated for selected solvents on the basis of the mean spherical approximation (MSA), and the calculated values are compared with those estimated by the Stranks-Hush-Marcus (SHM) theory and with activation volumes obtained experimentally for the electron exchange reaction between tris(hexafluoroacetylacetonato)ruthenium(III) and -(II) in acetone, acetonitrile, methanol and chloroform. The MSA treatment, which recognizes the molecular nature of the solvent, does not improve significantly upon the dielectric-continuum SHM theory, which represents the experimental data adequately for the more polar solvents.
NASA Astrophysics Data System (ADS)
Yu, Ting-To
2013-04-01
For hazard mitigation and emergency response it is important to estimate the volume of a landslide quickly, yet traditional survey methods take far longer than desired. Owing to weather limits, restricted traffic accessibility, and regulatory requirements, months can pass before field work is actually carried out. Remote sensing imagery can be acquired whenever visibility allows, often only a few days after the event, but traditional photogrammetry requires stereo image pairs to produce a post-event DEM for calculating the volume change, and gathering such data usually takes weeks or months; LiDAR or ground GPS surveys can take even longer at much higher cost. In this study we use a single post-event satellite image and a pre-event DTM, altering the DTM with a genetic algorithm (GA) to maximize the similarity between the two. Each candidate solution from the GA adds or removes height at each location; the modified DTM is converted into a shaded-relief view and compared with the satellite image, and the search stops once a similarity threshold is reached. The entire task takes only a few hours, and the computed accuracy is around 70% compared with a high-resolution LiDAR survey of a landslide in southern Taiwan. With extra GCPs, the accuracy improves to 85%, still within a few hours of receiving the satellite image. The data for this demonstration case are a 5 m DTM from 2005, a 2 m resolution FormoSat optical image from 2009, and 5 m LiDAR from 2010. The GA and image-similarity code was developed in Matlab on a Windows PC.
First tomographic estimate of volume distribution of HF-pump enhanced airglow emission
NASA Astrophysics Data System (ADS)
Gustavsson, B.; Sergienko, T.; Rietveld, M. T.; Honary, F.; Steen, Å.; Brändström, B. U. E.; Leyser, T. B.; Aruliah, A. L.; Aso, T.; Ejiri, M.; Marple, S.
2001-12-01
This report presents the first estimates of the three-dimensional volume emission rate of enhanced O(1D) 6300 Å airglow caused by HF radio wave pumping in the ionosphere. Images of the excitation show how the initially speckled spatial structure of excitation changes to a simpler shape with a smaller region that contains most of the excitation. A region of enhanced airglow was imaged by three stations in the Auroral Large Imaging System (ALIS) in northern Scandinavia. These images allowed for a tomography-like inversion of the volume emission of the airglow. The altitude of maximum emission was found to be around 235 ± 5 km, with typical horizontal and vertical scale sizes of 20 km. The shape of the O(1D) excitation rate varied from flattish to elongated along the magnetic field. The altitude of maximum emission is found to be approximately 10 km below the altitude of the enhanced ion line and 15 km above the altitude of maximum electron temperature. Comparisons of the measured altitude and temporal variations of the 6300 Å emission with modelled emission caused by O(1D) excitation from the high-energy tail of a Maxwellian electron distribution show significant deviations. The 6300 Å emission from excitation of the high-energy tail is about a factor of 4 too large compared with what is observed. This shows that the source of O(1D) excitation is electrons from a "sub-thermal" distribution function, i.e. the electron distribution is Maxwellian at low energies while at energies above 1.96 eV there is a depletion.
Estimating Plume Volume for Geologic Storage of CO2 in Saline Aquifers
Doughty, Christine
2008-07-11
Typically, when a new subsurface flow and transport problem is first being considered, very simple models with a minimal number of parameters are used to get a rough idea of how the system will evolve. For a hydrogeologist considering the spreading of a contaminant plume in an aquifer, the aquifer thickness, porosity, and permeability might be enough to get started. If the plume is buoyant, aquifer dip comes into play. If regional groundwater flow is significant or there are nearby wells pumping, these features need to be included. Generally, the required parameters tend to be known from pre-existing studies, are parameters that people working in the field are familiar with, and represent features that are easy to explain to potential funding agencies, regulators, stakeholders, and the public. The situation for geologic storage of carbon dioxide (CO2) in saline aquifers is quite different. It is certainly desirable to do preliminary modeling in advance of any field work, since geologic storage of CO2 is a novel concept that few people have much experience with or intuition about. But the parameters that control CO2 plume behavior are a little more daunting to assemble and explain than those for a groundwater flow problem. Even the most basic question of how much volume a given mass of injected CO2 will occupy in the subsurface is non-trivial. However, with a number of simplifying assumptions, some preliminary estimates can be made, as described below. To make efficient use of the subsurface storage volume available, CO2 density should be large, which means choosing a storage formation at depths below about 800 m, where pressure and temperature conditions are above the critical point of CO2 (P = 73.8 bar, T = 31 °C). Then CO2 will exist primarily as a free-phase supercritical fluid, while some CO2 will dissolve into the aqueous phase.
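A first-order plume volume estimate of the kind described above is just V = M/ρ for the free-phase CO2, divided by porosity and an average CO2 saturation to get the swept rock volume. The sketch below is a back-of-the-envelope calculation under stated assumptions; the density, porosity, and saturation values are illustrative defaults, not site-specific figures.

```python
def plume_pore_volume_m3(mass_tonnes, rho_co2=650.0):
    """Volume occupied by free-phase CO2: V = M / rho.
    rho ~ 600-750 kg/m3 is typical of supercritical CO2 below ~800 m,
    but the actual value depends on local pressure and temperature."""
    return mass_tonnes * 1000.0 / rho_co2

def bulk_rock_volume_m3(pore_volume_m3, porosity=0.2, saturation=0.5):
    """Bulk rock volume swept by the plume, assuming an average
    CO2 saturation within the invaded pore space (dissolution into
    brine is neglected in this rough estimate)."""
    return pore_volume_m3 / (porosity * saturation)

v_pore = plume_pore_volume_m3(1e6)  # 1 Mt of injected CO2
print(v_pore, bulk_rock_volume_m3(v_pore))
```

Even this crude estimate makes the point in the abstract: a megatonne of CO2 occupies on the order of a cubic kilometer of rock once porosity and saturation are accounted for, so storage formations must be laterally extensive.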
Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques
2012-04-01
Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared.
Partial volume effect estimation and correction in the aortic vascular wall in PET imaging
NASA Astrophysics Data System (ADS)
Burg, S.; Dupas, A.; Stute, S.; Dieudonné, A.; Huet, P.; Le Guludec, D.; Buvat, I.
2013-11-01
We evaluated the impact of partial volume effect (PVE) in the assessment of arterial diseases with 18F-FDG PET. An anthropomorphic digital phantom enabling the modeling of aorta-related diseases such as atherosclerosis and arteritis was used. Based on this phantom, we performed GATE Monte Carlo simulations to produce realistic PET images with a known organ segmentation and ground-truth activity values. Images corresponding to 15 different activity-concentration ratios between the aortic wall and the blood and to 7 different wall thicknesses were generated. Using the PET images, we compared the theoretical wall-to-blood activity-concentration ratios (WBRs) with the measured WBRs obtained with five measurement methods: (1) measurement made by a physician (Expert), (2) automated measurement intended to mimic the physician measurements (Max), (3) simple correction based on a recovery coefficient (Max-RC), (4) measurement based on an ideal VOI segmentation (Mean-VOI) and (5) measurement corrected for PVE using an ideal geometric transfer matrix (GTM) method. We found that Mean-VOI WBR values were strongly affected by PVE. WBRs obtained by the physician measurement, by the Max method and by the Max-RC method were more accurate than WBRs obtained with the Mean-VOI approach. However, Expert, Max and Max-RC WBRs strongly depended on the wall thickness. Only the GTM-corrected WBRs did not depend on the wall thickness. Using the GTM method, we obtained more reproducible ratio values that could be compared across wall thicknesses. Yet, the feasibility of implementing a GTM-like method on real data remains to be studied.
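The GTM correction referenced above models each measured regional mean as a mixture of the true regional activities, measured_i = Σ_j G[i,j] · true_j, and recovers the true values by inverting that linear system. A minimal two-region toy sketch follows; the matrix entries and activity values are fabricated for illustration, not taken from the study.

```python
import numpy as np

def gtm_correct(measured, gtm):
    """Geometric transfer matrix PVE correction: solve G @ true = measured,
    where gtm[i, j] is the fraction of region j's activity recovered in
    the measurement of region i."""
    return np.linalg.solve(gtm, measured)

# Two-region toy example (aortic wall, blood pool):
gtm = np.array([[0.55, 0.30],    # wall measurement: 55% wall + 30% blood
                [0.10, 0.85]])   # blood measurement: 10% wall + 85% blood
true_act = np.array([4.0, 1.0])  # true wall and blood concentrations
measured = gtm @ true_act        # simulated PVE-blurred measurements
recovered = gtm_correct(measured, gtm)
print(recovered)
```

In this idealized setting the inversion is exact; on real data the difficulty is estimating the GTM itself, which requires a segmentation and the scanner's point spread function.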
Butlin, Mark; Qasem, Ahmad; Avolio, Alberto P
2012-01-01
There is increasing interest in non-invasive estimation of central aortic waveform parameters in the clinical setting. However, controversy has arisen around radial tonometric based systems due to the requirement of a trained operator or lack of ease of use, especially in the clinical environment. A recently developed device utilizes a novel algorithm for brachial cuff based assessment of aortic pressure values and waveform (SphygmoCor XCEL, AtCor Medical). The cuff was inflated to 10 mmHg below an individual's diastolic blood pressure and the brachial volume displacement waveform recorded. The aortic waveform was derived using proprietary digital signal processing and a transfer function applied to the recorded waveform. The aortic waveform was also estimated using a validated technique (radial tonometry based assessment, SphygmoCor, AtCor Medical). Measurements were taken in triplicate with each device in 30 people (17 female) aged 22 to 79 years. An average for each device for each individual was calculated, and the results from the two devices were compared using regression and Bland-Altman analysis. A high correlation was found between the devices for measures of aortic systolic (R2 = 0.99) and diastolic (R2 = 0.98) pressure. Augmentation index and subendocardial viability ratio both had a between-device R2 value of 0.82. The difference between devices for measured aortic systolic pressure was 0.5 ± 1.8 mmHg, and for augmentation index, 1.8 ± 7.0%. The brachial cuff based approach, with an individualized sub-diastolic cuff pressure, provides an operator-independent method of assessing not only systolic pressure, but also aortic waveform features, comparable to existing validated tonometric-based methods.
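Device-agreement numbers like the "0.5 ± 1.8 mmHg" above come from a Bland-Altman analysis: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch with fabricated paired readings (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired aortic systolic readings (mmHg) from two devices
cuff = np.array([118.2, 131.0, 109.5, 125.3, 140.1])
tono = np.array([117.9, 130.2, 110.4, 124.6, 139.0])
bias, loa = bland_altman(cuff, tono)
print(round(bias, 2), [round(x, 2) for x in loa])
```

Unlike correlation, this analysis exposes systematic offsets between devices even when R2 is near 1, which is why both are reported in the abstract.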
Pilot Study: Estimation of Stroke Volume and Cardiac Output from Pulse Wave Velocity
Nyhan, Daniel; Berkowitz, Dan E.; Steppan, Jochen; Barodka, Viachaslau
2017-01-01
Background: Transesophageal echocardiography (TEE) is increasingly replacing thermodilution pulmonary artery catheters to assess hemodynamics in patients at high risk for cardiovascular morbidity. However, one of the drawbacks of TEE compared to pulmonary artery catheters is the inability to measure real-time stroke volume (SV) and cardiac output (CO) continuously. The aim of the present proof-of-concept study was to validate a novel method of SV estimation, based on pulse wave velocity (PWV), in patients undergoing cardiac surgery. Methods: This is a retrospective observational study. We measured pulse transit time by superimposing the radial arterial waveform onto the continuous wave Doppler waveform of the left ventricular outflow tract, and calculated SV (SVPWV) using the transformed Bramwell-Hill equation. The SV measured by TEE (SVTEE) was used as a reference. Results: A total of 190 paired SVs were measured from 28 patients. A strong correlation was observed between SVPWV and SVTEE, with a coefficient of determination (R2) of 0.71. The mean difference between the two (bias) was 3.70 ml, with limits of agreement ranging from -20.33 to 27.73 ml and a percentage error of 27.4% based on a Bland-Altman analysis. The concordance rate of the two methods was 85.0% based on a four-quadrant plot. The angular concordance rate was 85.9%, with radial limits of agreement (the radial sector that contained 95% of the data points) of ± 41.5 degrees based on a polar plot. Conclusions: PWV-based SV estimation yields reasonable agreement with SV measured by TEE. Further studies are required to assess its utility in different clinical situations. PMID:28060961
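The Bramwell-Hill equation relates PWV to arterial distensibility, PWV² = V·dP/(ρ·dV), so rearranging gives a volume change dV = V·dP/(ρ·PWV²). The sketch below is one plausible reading of such a "transformed" use of the equation, not the paper's actual algorithm; the arterial volume, pulse pressure, and PWV values are illustrative assumptions.

```python
def stroke_volume_bramwell_hill(pwv_m_s, pulse_pressure_mmhg,
                                arterial_volume_ml, rho=1060.0):
    """Rearranged Bramwell-Hill relation: PWV^2 = V * dP / (rho * dV)
    =>  dV = V * dP / (rho * PWV^2), treating dV as an SV surrogate.
    rho is blood density in kg/m^3. All example inputs are hypothetical."""
    dp_pa = pulse_pressure_mmhg * 133.322   # mmHg -> Pa
    v_m3 = arterial_volume_ml * 1e-6        # ml -> m^3
    dv_m3 = v_m3 * dp_pa / (rho * pwv_m_s ** 2)
    return dv_m3 * 1e6                      # m^3 -> ml

# Hypothetical inputs: PWV 5 m/s, pulse pressure 40 mmHg, volume 350 ml
print(round(stroke_volume_bramwell_hill(5.0, 40.0, 350.0), 1))
```

With these assumed inputs the formula lands in the physiological SV range of tens of milliliters, which is the property the estimation method exploits.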
Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.
Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H
2016-03-18
A smartphone-based tidal volume (V(T)) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference V(T) measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and V(T) to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.
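The calibration in this abstract is a linear map from the smartphone chest-signal amplitude to tidal volume, evaluated with a range-normalized RMSE. A minimal sketch with fabricated amplitude/volume pairs (not the study's data) follows.

```python
import numpy as np

def fit_calibration(amplitude, reference_vt):
    """Least-squares line mapping chest-signal amplitude to tidal volume."""
    slope, intercept = np.polyfit(amplitude, reference_vt, 1)
    return slope, intercept

def nrmse_percent(estimated, reference):
    """RMSE normalized by the reference range, expressed as a percentage."""
    err = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))
    ref = np.asarray(reference, dtype=float)
    return 100.0 * rmse / (ref.max() - ref.min())

# Hypothetical data: signal amplitude (a.u.) vs incentive-spirometer volume (L)
amp = np.array([0.8, 1.5, 2.1, 2.9, 3.6])
vt = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
slope, intercept = fit_calibration(amp, vt)
est = slope * amp + intercept
print(round(nrmse_percent(est, vt), 2))
```

The same two functions cover both calibration strategies compared in the paper: only the source of the reference volumes (laboratory spirometer versus incentive spirometer) changes.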
1995-09-01
The Solid Waste Retrieval Facility--Phase 1 (Project W113) will provide the infrastructure and the facility required to retrieve contact-handled (CH) drums and boxes from Trench 04, Burial Ground 4C, at a rate that supports all retrieved TRU waste batching, treatment, storage, and disposal plans. This includes (1) operations-related equipment and facilities, viz., a weather enclosure for the trench and equipment for retrieval, weighing, venting, gas sampling, overpacking, NDE, NDA, and waste shipment, and (2) operations-support facilities, viz., a general office building, a retrieval staff change facility, and infrastructure upgrades such as the supply and routing of water, sewer, electrical power, fire protection, roads, and telecommunications. Title I design for the operations-related equipment and facilities was performed by Raytheon/BNFL, and that for the operations-support facilities, including the infrastructure upgrade, was performed by KEH. These two scopes were combined into an integrated W113 Title II scope performed by Raytheon/BNFL. This volume represents the total estimated costs for the W113 facility. Operating Contractor Management costs have been incorporated as received from WHC. The W113 Facility TEC is $19.7 million. This includes an overall project contingency of 14.4% and escalation of 17.4%. A January 2001 construction contract procurement start date is assumed.
Estimation of thickness of concentration boundary layers by osmotic volume flux determination.
Jasik-Ślęzak, Jolanta S; Olszówka, Kornelia M; Slęzak, Andrzej
2011-06-01
A method for estimating the thicknesses (δ) of the concentration boundary layers in a single-membrane system containing non-electrolytic binary or ternary solutions was devised using the Kedem-Katchalsky formalism. The quadratic equation used in this method contains membrane transport parameters (L(p), σ, ω), solution parameters (D, C), and the volume osmotic flux (J(v)). These values can be determined in a series of independent experiments. The calculated values of δ depend nonlinearly on the concentrations of the investigated solutions and on the membrane system configuration. These nonlinearities are the effect of competition between spontaneously occurring diffusion and natural convection. A mathematical model based on the Kedem-Katchalsky equations and a concentration Rayleigh number (R(C)) was presented. On the basis of this model we introduce a dimensionless parameter, which we call the Katchalsky number (Ka), that modifies R(C) for membrane transport. The critical value of this number describes well the moment of transition from purely diffusive to convective-diffusive membrane transport.
Predicting traffic volumes and estimating the effects of shocks in massive transportation systems
Silva, Ricardo; Kang, Soong Moon; Airoldi, Edoardo M.
2015-01-01
Public transportation systems are an essential component of major cities. The widespread use of smart cards for automated fare collection in these systems offers a unique opportunity to understand passenger behavior at a massive scale. In this study, we use network-wide data obtained from smart cards in the London transport system to predict future traffic volumes, and to estimate the effects of disruptions due to unplanned closures of stations or lines. Disruptions, or shocks, force passengers to make different decisions concerning which stations to enter or exit. We describe how these changes in passenger behavior lead to possible overcrowding and model how stations will be affected by given disruptions. This information can then be used to mitigate the effects of these shocks because transport authorities may prepare in advance alternative solutions such as additional buses near the most affected stations. We describe statistical methods that leverage the large amount of smart-card data collected under the natural state of the system, where no shocks take place, as variables that are indicative of behavior under disruptions. We find that features extracted from the natural regime data can be successfully exploited to describe different disruption regimes, and that our framework can be used as a general tool for any similar complex transportation system. PMID:25902504
Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions
NASA Astrophysics Data System (ADS)
White, Randall; McCausland, Wendy
2016-01-01
We present data on 136 high-frequency earthquakes and swarms, termed volcano-tectonic (VT) seismicity, which preceded 111 eruptions at 83 volcanoes, plus data on VT swarms that preceded intrusions at 21 other volcanoes. We find that VT seismicity is usually the earliest reported seismic precursor for eruptions at volcanoes that have been dormant for decades or more, and precedes eruptions of all magma types from basaltic to rhyolitic and all explosivities from VEI 0 to ultraplinian VEI 6 at such previously long-dormant volcanoes. Because large eruptions occur most commonly during resumption of activity at long-dormant volcanoes, VT seismicity is an important precursor for the Earth's most dangerous eruptions. VT seismicity precedes all explosive eruptions of VEI ≥ 5 and most if not all VEI 4 eruptions in our data set. Surprisingly, we find that the VT seismicity originates at distal locations on tectonic fault structures, at distances of one or two to tens of kilometers laterally from the site of the eventual eruption, and rarely if ever starts beneath the eruption site itself. The distal VT swarms generally occur at depths almost equal to the horizontal distance of the swarm from the summit out to about 15 km distance, beyond which hypocenter depths level out. We summarize several important characteristics of this distal VT seismicity, including its swarm-like nature, onset days to years prior to the beginning of magmatic eruptions, peaking of activity at the time of the initial eruption whether phreatic or magmatic, and a large non-double-couple component to focal mechanisms. Most importantly, we show that the intruded magma volume can be simply estimated from the cumulative seismic moment of the VT seismicity via log10 V = 0.77 log10(ΣMoment) - 5.32, with volume V in cubic meters and seismic moment in newton meters. Because the cumulative seismic moment can be approximated from the size of just the few largest events, and is quite insensitive to precise locations
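The regression quoted in the abstract is straightforward to apply. A sketch (the input moment below is a hypothetical value, not from the study):

```python
import math

def intruded_volume_m3(cumulative_moment_nm):
    """Estimate intruded magma volume (m^3) from cumulative seismic
    moment (N*m) using the regression quoted in the abstract:
    log10(V) = 0.77 * log10(sum(M0)) - 5.32."""
    return 10 ** (0.77 * math.log10(cumulative_moment_nm) - 5.32)

# Hypothetical cumulative moment of 1e15 N*m (a swarm whose sum is
# dominated by a few moderate events) -> volume on the order of 1e6 m^3
v = intruded_volume_m3(1e15)
```

Because the sum is dominated by the largest events, a rough moment total from only those events already yields a usable volume estimate, which is the practical point the abstract makes.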
NASA Technical Reports Server (NTRS)
McCurry, J. B.
1995-01-01
The purpose of the TA-2 contract was to provide advanced launch vehicle concept definition and analysis to assist NASA in the identification of future launch vehicle requirements. Contracted analysis activities included vehicle sizing and performance analysis, subsystem concept definition, propulsion subsystem definition (foreign and domestic), ground operations and facilities analysis, and life cycle cost estimation. The basic period of performance of the TA-2 contract was from May 1992 through May 1993. No-cost extensions were exercised on the contract from June 1993 through July 1995. This document is part of the final report for the TA-2 contract. The final report consists of three volumes: Volume 1 is the Executive Summary, Volume 2 is Technical Results, and Volume 3 is Program Cost Estimates. The document-at-hand, Volume 3, provides a work breakdown structure dictionary, user's guide for the parametric life cycle cost estimation tool, and final report developed by ECON, Inc., under subcontract to Lockheed Martin on TA-2 for the analysis of heavy lift launch vehicle concepts.
Sari, Hasan; Erlandsson, Kjell; Law, Ian; Larsson, Henrik Bw; Ourselin, Sebastien; Arridge, Simon; Atkinson, David; Hutton, Brian F
2017-04-01
Kinetic analysis of (18)F-fluorodeoxyglucose positron emission tomography data requires accurate knowledge of the arterial input function. The gold standard method to measure the arterial input function requires collection of arterial blood samples and is invasive. Measuring an image-derived input function is a non-invasive alternative, but it is challenging due to partial volume effects caused by the limited spatial resolution of positron emission tomography scanners. In this work, a practical image-derived input function extraction method is presented, which only requires segmentation of the carotid arteries from MR images. The simulation study results showed that at least 92% of the true intensity could be recovered after the partial volume correction. Results from 19 subjects showed that the mean cerebral metabolic rates of glucose calculated using arterial samples and the partial-volume-corrected image-derived input function were 26.9 and 25.4 mg/min/100 g, respectively, for the grey matter, and 7.2 and 6.7 mg/min/100 g for the white matter. No significant difference in the estimated cerebral metabolic rate of glucose values was observed between arterial samples and the corrected image-derived input function (p > 0.12 for grey matter and white matter). Hence, the presented image-derived input function extraction method can be a practical alternative for noninvasively analyzing dynamic (18)F-fluorodeoxyglucose data without the need for blood sampling.
Voxel-Based Approach for Estimating Urban Tree Volume from Terrestrial Laser Scanning Data
NASA Astrophysics Data System (ADS)
Vonderach, C.; Voegtle, T.; Adler, P.
2012-07-01
The importance of single trees and the determination of related parameters has been recognized in recent years, e.g. for forest inventories or management. For urban areas, an increasing interest in the data acquisition of trees can be observed concerning aspects like urban climate, CO2 balance, and environmental protection. Urban trees differ significantly from natural systems with regard to site conditions (e.g. technogenic soils, contaminants, lower groundwater level, regular disturbance), climate (increased temperature, reduced humidity), and species composition and arrangement (habitus and health status), and therefore allometric relations cannot be transferred from natural sites to urban areas. To overcome this problem, an extended approach was developed for fast and non-destructive extraction of branch volume, DBH (diameter at breast height), and height of single trees from point clouds of terrestrial laser scanning (TLS). For data acquisition, the trees were scanned at the highest scan resolution from several (up to five) positions located around the tree. The resulting point clouds (20 to 60 million points) are analysed with an algorithm based on a voxel (volume element) structure, leading to an appropriate data reduction. In a first step, two kinds of noise reduction are carried out: the elimination of isolated voxels as well as of voxels with marginal point density. To obtain correct volume estimates, the voxels inside the stem and branches (interior voxels), which contain no laser points, must also be regarded. For this filling process, an easy and robust approach was developed based on a layer-wise (horizontal layers of the voxel structure) intersection of four orthogonal viewing directions. However, this procedure also generates several erroneous "phantom" voxels, which have to be eliminated. For this purpose, the previous approach was extended by a special region-growing algorithm. In a final step the volume is determined layer-wise based on the extracted
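The layer-wise filling step can be sketched in 2D: within a horizontal layer, a cell is marked interior when occupied voxels exist in all four orthogonal directions. This is an illustrative reconstruction of the idea, not the authors' code; `fill_interior` is a hypothetical helper:

```python
import numpy as np

def fill_interior(layer):
    """Mark cells of a 2D boolean layer as filled if occupied cells
    exist to the left, right, above, and below (the four orthogonal
    viewing directions described in the abstract)."""
    occ = layer.astype(bool)
    # Cumulative "has an occupied cell in that direction" masks
    left = np.maximum.accumulate(occ, axis=1)
    right = np.maximum.accumulate(occ[:, ::-1], axis=1)[:, ::-1]
    up = np.maximum.accumulate(occ, axis=0)
    down = np.maximum.accumulate(occ[::-1, :], axis=0)[::-1, :]
    return occ | (left & right & up & down)

# A hollow ring of "surface" voxels gets its interior filled:
ring = np.zeros((5, 5), dtype=bool)
ring[0, :] = ring[-1, :] = ring[:, 0] = ring[:, -1] = True
filled = fill_interior(ring)
```

As the abstract notes, such an intersection can also fill concave gaps that are not truly interior ("phantom" voxels), which is why a subsequent region-growing cleanup step is needed.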
ERIC Educational Resources Information Center
Weiner, Neil S.; And Others
The conceptual framework behind the model of aggregate U.S. Employment Service (ES) productivity is described in this report (and the companion volume of appendixes) along with the illustrative estimates of ES productivity using the model. Chapter 1 introduces the question of productivity measurement in a social purpose. Chapter 2 contains a…
2012-01-01
Background The paper presents a newly researched acoustic system for blood volume measurement for the developed family of Polish ventricular assist devices. Pneumatic heart-supporting devices are still the preferred solution in some cases, and monitoring of their operation, especially the temporary blood volume, is yet to be solved. Methods The prototype of the POLVAD-EXT prosthesis developed by the Foundation of Cardiac Surgery Development, Zabrze, Poland, is equipped with the newly researched acoustic blood volume measurement system based on the principle of Helmholtz's acoustic resonance. The results of static volume measurements acquired using the acoustic sensor were verified by measuring the volume of the liquid filling the prosthesis. Dynamic measurements were conducted on the hybrid model of the human cardiovascular system at the Foundation, with the Transonic T410 ultrasound flow rate sensor (11PLX transducer, 5% uncertainty) used as the reference. Results The statistical analysis of a series of static tests has shown that the sensor solution provides blood volume measurement results with uncertainties (understood as a standard mean deviation) of less than 10%. Dynamic tests show a high correlation between the results of the acoustic system and those obtained by flow rate measurements using an ultrasound transit-time sensor. Conclusions The results show that noninvasive, online temporary blood volume measurement in the POLVAD-EXT prosthesis, making use of the newly developed acoustic system, provides accurate static and dynamic measurement results. The conducted research provides a preliminary view of the possibility of reducing the additional sensor chamber volume in the future. PMID:22998766
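The Helmholtz-resonance principle the sensor relies on relates a cavity volume to its resonance frequency, so a measured frequency can be inverted for volume. A sketch with illustrative geometry (not the POLVAD-EXT design):

```python
import math

# Helmholtz resonator: a gas volume V coupled to a neck of
# cross-section A and effective length L resonates at
#   f = (c / (2*pi)) * sqrt(A / (V * L)),
# so V can be recovered from a measured resonance frequency f.
C_SOUND = 343.0  # speed of sound in air, m/s (assumed)

def volume_from_resonance(f_hz, neck_area_m2, neck_len_m, c=C_SOUND):
    """Invert the Helmholtz relation for the cavity volume (m^3)."""
    return neck_area_m2 * c**2 / (4 * math.pi**2 * f_hz**2 * neck_len_m)

# Round trip: a 100 mL cavity with a 1 cm^2 x 2 cm neck
A, L, V = 1e-4, 2e-2, 1e-4
f = C_SOUND / (2 * math.pi) * math.sqrt(A / (V * L))
v_est = volume_from_resonance(f, A, L)
```

In the prosthesis the "cavity" is the pneumatic chamber, so tracking the resonance frequency over a cycle yields the temporary blood volume without contacting the blood itself.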
Sperduto, Paul W.; Kased, Norbert; Roberge, David; Xu, Zhiyuan; Shanley, Ryan; Luo, Xianghua; Sneed, Penny K.; Chao, Samuel T.; Weil, Robert J.; Suh, John; Bhatt, Amit; Jensen, Ashley W.; Brown, Paul D.; Shih, Helen A.; Kirkpatrick, John; Gaspar, Laurie E.; Fiveash, John B.; Chiang, Veronica; Knisely, Jonathan P.S.; Sperduto, Christina Maria; Lin, Nancy; Mehta, Minesh
2012-01-01
Purpose Our group has previously published the Graded Prognostic Assessment (GPA), a prognostic index for patients with brain metastases. Updates have been published with refinements to create diagnosis-specific Graded Prognostic Assessment indices. The purpose of this report is to present the updated diagnosis-specific GPA indices in a single, unified, user-friendly report to allow ease of access and use by treating physicians. Methods A multi-institutional retrospective (1985 to 2007) database of 3,940 patients with newly diagnosed brain metastases underwent univariate and multivariate analyses of prognostic factors associated with outcomes by primary site and treatment. Significant prognostic factors were used to define the diagnosis-specific GPA prognostic indices. A GPA of 4.0 correlates with the best prognosis, whereas a GPA of 0.0 corresponds with the worst prognosis. Results Significant prognostic factors varied by diagnosis. For lung cancer, prognostic factors were Karnofsky performance score, age, presence of extracranial metastases, and number of brain metastases, confirming the original Lung-GPA. For melanoma and renal cell cancer, prognostic factors were Karnofsky performance score and the number of brain metastases. For breast cancer, prognostic factors were tumor subtype, Karnofsky performance score, and age. For GI cancer, the only prognostic factor was the Karnofsky performance score. The median survival times by GPA score and diagnosis were determined. Conclusion Prognostic factors for patients with brain metastases vary by diagnosis, and for each diagnosis, a robust separation into different GPA scores was discerned, implying considerable heterogeneity in outcome, even within a single tumor type. In summary, these indices and related worksheet provide an accurate and facile diagnosis-specific tool to estimate survival, potentially select appropriate treatment, and stratify clinical trials for patients with brain metastases. PMID:22203767
[Estimation of VOC emission from forests in China based on the volume of tree species].
Zhang, Gang-feng; Xie, Shao-dong
2009-10-15
Applying the volume data of dominant trees from statistics on the national forest resources, volatile organic compound (VOC) emissions of each main tree species in China were estimated based on the light-temperature model put forward by Guenther. China's VOC emission inventory for forests was established, and the space-time and age-class distributions of VOC emissions were analyzed. The results show that the total VOC emissions from forests in China are 8565.76 Gg, of which isoprene accounts for 5689.38 Gg (66.42%), monoterpenes for 1343.95 Gg (15.69%), and other VOCs for 1532.43 Gg (17.89%). VOC emissions show significant species variation. Quercus is the main genus responsible for emission, contributing 45.22% of the total, followed by Picea and Pinus massoniana with 6.34% and 5.22%, respectively. Southwest and Northeast China are the major emission regions. Specifically, Yunnan, Sichuan, Heilongjiang, Jilin, and Shaanxi are the top five provinces producing the most VOC emissions from forests, and their contributions to the total are 15.09%, 12.58%, 10.35%, 7.49%, and 7.37%, respectively; emissions from these five provinces account for more than half (52.88%) of the national emissions. Besides, VOC emissions show remarkable seasonal variation: emissions in summer are the largest, accounting for 56.66% of the annual total. Forests of different ages make different emission contributions; half-mature forests play a key role and contribute 38.84% of the total emission from forests.
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis of these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means of performing three-dimensional analysis of the blasts at several points in time. Accomplishing this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and a 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision, and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching by defining their locations relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. When 3D reconstruction is applied to the hotspot matching, it completes an automated process that has been used to create 168 3D point clouds averaging 31.6 points per reconstruction, with each point having an overall accuracy of 0.62 meters (0.35, 0.24, and 0.34 meters in the x-, y-, and z-directions, respectively). As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models and, in some cases, to improve on the level of uncertainty in the yield calculation.
Hulse, R.A.
1991-08-01
Planning for storage or disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) requires characterization of that waste to estimate volumes, radionuclide activities, and waste forms. Data from existing literature, disposal records, and original research were used to estimate the characteristics and to project volumes and radionuclide activities to the year 2035. GTCC LLW is categorized as nuclear utilities waste, sealed sources waste, DOE-held potential GTCC LLW, and other generator waste. It has been determined that the largest volume of those wastes, approximately 57%, is generated by nuclear power plants. The Other Generator waste category contributes approximately 10% of the total GTCC LLW volume projected to the year 2035. Waste held by the Department of Energy, which is potential GTCC LLW, accounts for nearly 33% of all waste projected to the year 2035; however, no disposal determination has been made for that waste. Sealed sources are less than 0.2% of the total projected volume of GTCC LLW.
NASA Astrophysics Data System (ADS)
Li, Qin; Gavrielides, Marios A.; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas
2015-03-01
This work aimed to compare two different types of volume estimation methods (a model-based and a segmentation-based method) in terms of identifying factors affecting measurement uncertainty. Twenty-nine synthetic nodules with varying size, radiodensity, and shape were placed in an anthropomorphic thoracic phantom and scanned with a 16-detector-row CT scanner. Ten repeat scans were acquired using three exposures and two slice collimations, and were reconstructed with varying slice thicknesses. Nodule volumes were estimated from the reconstructed data using a matched-filter and a segmentation approach. Log-transformed volumes were used to obtain measurement error, with truth obtained through micro-CT. ANOVA and multiple linear regression were applied to the measurement error to identify significant factors affecting volume estimation for each method. Root mean square of measurement errors (RMSE) for meaningful subgroups, repeatability coefficients (RC) for different imaging protocols, and reproducibility coefficients (RDC) for thin- and thick-collimation conditions were evaluated. Results showed that for both methods, nodule size, shape, and slice thickness were significant factors. Collimation was significant for the matched-filter method. RMSEs for matched-filter measurements were in general smaller than for segmentation. To achieve an RMSE on the order of 15% or less for {5, 8, 9, 10} mm nodules, the corresponding maximum allowable slice thicknesses were {3, 5, 5, 5} mm for the matched-filter method and {0.8, 3, 3, 3} mm for the segmentation method. RCs showed similar patterns for both methods, increasing with slice thickness. For 8-10 mm nodules, the measurements were highly repeatable provided the slice thickness was ≤3 mm, regardless of method and across varying acquisition conditions. RDCs were lower for thin-collimation than thick-collimation protocols. While the RDC of matched-filter volume estimation was always lower than that of segmentation, for 8-10 mm nodules with thin
NASA Astrophysics Data System (ADS)
Cook, Geoffrey W.; Wolff, John A.; Self, Stephen
2016-02-01
The 1.60 Ma caldera-forming eruption of the Otowi Member of the Bandelier Tuff produced Plinian and coignimbrite fall deposits, outflow and intracaldera ignimbrite, all of it deposited on land. We present a detailed approach to estimating and reconstructing the original volume of the eroded, partly buried large ignimbrite and distal ash-fall deposits. Dense rock equivalent (DRE) volume estimates for the eruption are 89 + 33/-10 km3 of outflow ignimbrite and 144 ± 72 km3 of intracaldera ignimbrite. Also, there was at least 65 km3 (DRE) of Plinian fall when extrapolated distally, and 107 + 40/-12 km3 of coignimbrite ash was "lost" from the outflow sheet to form an unknown proportion of the distal ash fall. The minimum total volume is 216 km3 and the maximum is 550 km3; hence, the eruption overlaps the low end of the super-eruption spectrum (VEI ˜8.0). Despite an abundance of geological data for the Otowi Member, the errors attached to these estimates do not allow us to constrain the proportions of intracaldera (IC), outflow (O), and distal ash (A) to better than a factor of three. We advocate caution in applying the IC/O/A = 1:1:1 relation of Mason et al. (2004) to scaling up mapped volumes of imperfectly preserved caldera-forming ignimbrites.
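The quoted 216 and 550 km3 totals can be reproduced from the per-deposit bounds. A sketch (our reading: the minimum sums the lower bounds of outflow, intracaldera, and Plinian fall, while the maximum also includes the coignimbrite upper bound; the exclusion of the coignimbrite term from the minimum is our inference, consistent with it overlapping the distal fall):

```python
# Otowi Member DRE volume bounds from the abstract, in km^3
outflow_min, outflow_max = 89 - 10, 89 + 33
intracaldera_min, intracaldera_max = 144 - 72, 144 + 72
plinian = 65                       # "at least 65 km^3"
coignimbrite_max = 107 + 40

vol_min = outflow_min + intracaldera_min + plinian
vol_max = outflow_max + intracaldera_max + plinian + coignimbrite_max
```

The roughly factor-of-2.5 spread between `vol_min` and `vol_max` is exactly the uncertainty the authors cite when cautioning against a fixed intracaldera/outflow/ash scaling ratio.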
Boligon, A A; Silva, J A V; Sesana, R C; Sesana, J C; Junqueira, J B; Albuquerque, L G
2010-04-01
Data from 129,575 Nellore cattle born between 1993 and 2006, belonging to the Jacarezinho cattle-raising farm, were used to estimate genetic parameters for scrotal circumference measured at 9 (SC9), 12 (SC12), and 18 (SC18) mo of age and testicular volume measured at the same ages (TV9, TV12, and TV18) and to determine their correlation with weaning weight (WW) and yearling weight (YW), to provide information for the definition of selection criteria in beef cattle. Estimates of (co)variance components were calculated by the REML method applying an animal model in single- and multiple-trait analysis. The following heritability estimates and their respective SE were obtained for WW, YW, SC9, SC12, SC18, TV9, TV12, and TV18: 0.33 +/- 0.02, 0.37 +/- 0.03, 0.29 +/- 0.03, 0.39 +/- 0.04, 0.42 +/- 0.03, 0.19 +/- 0.04, 0.26 +/- 0.05, and 0.39 +/- 0.04, respectively. The genetic correlation between WW and YW was positive and high (0.80 +/- 0.04), indicating that these traits are mainly determined by the same genes. Genetic correlations between the growth traits and scrotal circumference measures were positive and of low to moderate magnitude, ranging from 0.23 +/- 0.04 to 0.38 +/- 0.04. On the other hand, increased genetic associations were estimated between scrotal circumference and testicular volume at different ages (0.61 +/- 0.04 to 0.86 +/- 0.04). Selection for greater scrotal circumference in males should result in greater WW, YW, and testicular volume. In conclusion, in view of the difficulty in measuring testicular volume, there is no need to change the selection criterion from scrotal circumference to testicular volume in genetic breeding programs of Zebu breeds.
Not Available
1994-09-01
The Department of Energy's (DOE's) planning for the disposal of greater-than-Class C low-level radioactive waste (GTCC LLW) requires characterization of the waste. This report estimates volumes, radionuclide activities, and waste forms of GTCC LLW to the year 2035. It groups the waste into four categories, representative of the type of generator or holder of the waste: Nuclear Utilities, Sealed Sources, DOE-Held, and Other Generator. GTCC LLW includes activated metals (activation hardware from reactor operation and decommissioning), process wastes (i.e., resins, filters, etc.), sealed sources, and other wastes routinely generated by users of radioactive material. Estimates reflect the possible effect that packaging and concentration averaging may have on the total volume of GTCC LLW. Possible GTCC mixed LLW is also addressed. Nuclear utilities will probably generate the largest future volume of GTCC LLW, with 65-83% of the total volume. The other generators will generate 17-23% of the waste volume, while GTCC sealed sources are expected to contribute 1-12%. A legal review of DOE's obligations indicates that the current DOE-Held wastes described in this report will not require management as GTCC LLW because of the contractual circumstances under which they were accepted for storage. This report concludes that the volume of GTCC LLW should not pose a significant management problem from a scientific or technical standpoint. The projected volume is small enough to indicate that a dedicated GTCC LLW disposal facility may not be justified. Instead, co-disposal with other waste types is being considered as an option.
3D ultrasound estimation of the effective volume for popliteal block at the level of division.
Sala-Blanch, X; Franco, J; Bergé, R; Marín, R; López, A M; Agustí, M
2017-03-01
Local anaesthetic injection between the tibial and common peroneal nerves within the connective tissue sheath results in predictable diffusion and allows a reduction in the volume needed to achieve a consistent sciatic popliteal block. Using 3D ultrasound volumetric acquisition, we quantified the visible volume in contact with the nerve along a 5 cm segment.
Rickbeil, Gregory J M; Hermosilla, Txomin; Coops, Nicholas C; White, Joanne C; Wulder, Michael A
2017-01-01
Lichens form a critical portion of barren ground caribou (Rangifer tarandus groenlandicus) diets, especially during winter months. Here, we assess lichen mat volume across five herd ranges in the Northwest Territories and Nunavut, Canada, using newly developed composite Landsat imagery. The lichen volume estimator (LVE) was adapted for use across 700 000 km2 of barren ground caribou habitat annually from 1984-2012. We subsequently assessed how LVE changed temporally throughout the time series for each pixel using Theil-Sen's slopes, and spatially by assessing whether slope values were centered in local clusters of similar values. Additionally, we assessed how LVE estimates resulted in changes in barren ground caribou movement rates using an extensive telemetry data set from 2006-2011. The Ahiak/Beverly herd had the largest overall increase in LVE (median = 0.033), while the more western herds had the least (median slopes below zero in all cases). LVE slope pixels were arranged in significant clusters across the study area, with the Cape Bathurst, Bathurst, and Bluenose East herds having the most significant clusters of negative slopes (more than 20% of vegetated land in each case). The Ahiak/Beverly and Bluenose West had the most significant positive clusters (16.3% and 18.5% of vegetated land respectively). Barren ground caribou displayed complex reactions to changing lichen conditions depending on season; the majority of detected associations with movement data agreed with current understanding of barren ground caribou foraging behavior (the exception was an increase in movement velocity at high lichen volume estimates in Fall). The temporal assessment of LVE identified areas where shifts in ecological conditions may have resulted in changing lichen mat conditions, while assessing the slope estimates for clustering identified zones beyond the pixel scale where forage conditions may be changing. Lichen volume estimates associated with barren ground caribou
NASA Technical Reports Server (NTRS)
1990-01-01
Cost estimates for phase C/D of the laser atmospheric wind sounder (LAWS) program are presented. This information provides a framework for cost, budget, and program planning estimates for LAWS. Volume 3 is divided into three sections. Section 1 details the approach taken to produce the cost figures, including the assumptions regarding the schedule for phase C/D and the methodology and rationale for costing the various work breakdown structure (WBS) elements. Section 2 shows a breakdown of the cost by WBS element, with the cost divided into non-recurring and recurring expenditures. Note that throughout this volume the cost is given in 1990 dollars, with bottom-line totals also expressed in 1988 dollars ($1.00 in 1988 = $0.931 in 1990). Section 3 shows a breakdown of the cost by year. The WBS and WBS dictionary are included as an attachment to this report.
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...
2017-06-01
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
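As a rough illustration of the excluded-area idea applied to a binary disk mixture, the sketch below estimates a critical number density by requiring the composition-averaged excluded-area overlap count to reach an approximately shape-invariant value B_c ≈ 4.5. Both the functional form and the constant are assumptions chosen for illustration, not the paper's expression:

```python
import math

def critical_density_binary_disks(r1, r2, x1, b_c=4.51):
    """Estimate the percolation-critical number density of a binary disk
    mixture from an excluded-area invariant (illustrative sketch; b_c is
    an approximate critical mean-overlap number for 2D disks, not a value
    taken from the paper)."""
    x2 = 1.0 - x1
    # composition-averaged pairwise excluded area, A_ex(i, j) = pi*(r_i + r_j)^2
    a_ex = (x1 * x1 * math.pi * (2 * r1) ** 2
            + 2 * x1 * x2 * math.pi * (r1 + r2) ** 2
            + x2 * x2 * math.pi * (2 * r2) ** 2)
    return b_c / a_ex

# Monodisperse limit (r1 == r2) should recover n_c = b_c / (4*pi*r^2)
n_c = critical_density_binary_disks(0.5, 0.5, 0.5)
```

In the monodisperse limit the expression collapses to the familiar single-species excluded-area result, which is one sanity check such an extension should pass.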
NASA Astrophysics Data System (ADS)
Wu, N. L.; Campbell, S. W.; Douglas, T. A.; Osterberg, E. C.
2013-12-01
Jarvis Glacier is an important water source for Fort Greely and Delta Junction, Alaska. Yet with warming summer temperatures caused by climate change, the glacier is melting rapidly. Growing concern over a dwindling water supply has motivated significant research efforts toward determining future water resources from the spring melt and glacier runoff that feed these communities each year. The main objective of this project was to determine the total volume of the Jarvis Glacier. In April 2012, a centerline profile of the Jarvis Glacier and 15 km of 100 MHz ground-penetrating radar (GPR) profiles were collected in cross sections to provide ice depth measurements. These depth measurements were combined with an interpreted glacier boundary (depth = 0 m) from recently collected high resolution WorldView satellite imagery to estimate total ice volume. Ice volume was calculated at 0.62 km³ over a surface area of 8.82 km². However, it is likely that more glacier ice exists within the Jarvis Glacier watershed, because the value calculated from the GPR profiles accounts for only the glacier ice within the valley and not for the valley side wall ice. The GLIMS glacier area database suggests that the valley accounts for approximately 50% of the total ice covered watershed. Hence, we are currently working to improve total ice volume estimates by incorporating the surrounding valley walls. Results from this project will be used in conjunction with climate change estimates and hydrological properties downstream of the glacier to estimate future water resources available to Fort Greely and Delta Junction.
NASA Astrophysics Data System (ADS)
Berx, B.; Hansen, B.; Østerhus, S.; Larsen, K. M.; Sherwin, T.; Jochumsen, K.
2013-01-01
From 1994 to 2011, instruments measuring ocean currents (ADCPs) have been moored on a section crossing the Faroe-Shetland Channel. Together with CTD (Conductivity Temperature Depth) measurements from regular research vessel occupations, they describe the flow field and water mass structure in the channel. Here, we use these data to calculate the average volume transport and properties of the flow of warm water through the channel from the Atlantic towards the Arctic, termed the Atlantic inflow. We find the average volume transport of this flow to be 2.7 ± 0.5 Sv (1 Sv = 10⁶ m³ s⁻¹) between the shelf edge on the Faroe side and the 150 m isobath on the Shetland side. The average heat transport (relative to 0 °C) was estimated to be 107 ± 21 TW and the average salt import to be 98 ± 20 × 10⁶ kg s⁻¹. Transport values for individual months, based on the ADCP data, show a large degree of variability, but can be used to calibrate sea level height data from satellite altimetry. In this way, a time series of volume transport has been generated back to the beginning of satellite altimetry in December 1992. The Atlantic inflow has a seasonal variation in volume transport that peaks around the turn of the year and has an amplitude of 0.7 Sv. The Atlantic inflow has become warmer and more saline since 1994, but no equivalent trend in volume transport was observed.
NASA Astrophysics Data System (ADS)
Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas
2013-03-01
Using a Chebyshev power series approach, accurate descriptions of the first higher-order (LP₁₁) mode of graded-index fibers with three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that while approximations of the LP₁₁ mode using two and three Chebyshev points give fairly accurate results, values calculated using four Chebyshev points match the available exact numerical results excellently.
NASA Astrophysics Data System (ADS)
Rebello, N. Sanjay
2012-02-01
Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on each exam. Students were given extra credit worth 1% of the exam points for estimating their own score to within 2% of the actual score, and another 1% extra credit for estimating the class mean score to within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance as well as trends over the semester.
Båth, Magnus Svalkvist, Angelica; Söderman, Christina
2014-10-15
Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. Although the difference was not normally distributed, the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.
NASA Astrophysics Data System (ADS)
Wang, Ting-Shiuan; Yu, Teng-To; Lee, Shing-Tsz; Peng, Wen-Fei; Lin, Wei-Ling; Li, Pei-Ling
2014-09-01
Information regarding the scale of a hazard is crucial for the evaluation of its associated impact. Quantitative analysis of landslide volume immediately following the event can offer better understanding and control of contributory factors and their relative importance. Such information cannot be gathered for each landslide event, owing to limitations in obtaining useable raw data and the necessary procedures of each applied technology. Empirical rules are often used to predict volume change, but the resulting accuracy is very low. Traditional methods use photogrammetry or light detection and ranging (LiDAR) to produce a post-event digital terrain model (DTM). These methods are both costly and time-intensive. This study presents a technique to estimate terrain change volumes quickly and easily, not only reducing waiting time but also offering results with less than 25% error. A genetic algorithm (GA) programmed in MATLAB is used to intelligently predict the elevation change for each pixel of an image. Each such perturbation of the pre-event DTM becomes a candidate for the post-event DTM. Each candidate DTM is then converted into a shaded-relief image and compared with a single post-event remotely sensed image for similarity ranking. The candidates ranked in the top two thirds are retained as parent chromosomes to produce offspring in the next generation according to the rules of GAs. When the highest similarity index reaches 0.75, the DTM corresponding to that shaded-relief image is taken as the calculated post-event DTM. As an example, a pit with known volume is removed from a flat, inclined plane to demonstrate the theoretical capability of the code. The method is able to rapidly estimate the volume of terrain change within an error of 25%, without the delays involved in obtaining stereo image pairs, or the need for ground control points (GCPs) or professional photogrammetry software.
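The evolutionary loop described above (rank candidates by image similarity, keep the top two thirds, breed the rest) can be sketched in miniature. Everything here is schematic: a short vector stands in for the DTM, and an inverse mean-absolute-deviation score stands in for the hillshade similarity index:

```python
import random

random.seed(1)

TARGET = [3.0, -1.0, 2.0, 0.5]  # stands in for the true post-event DTM

def similarity(candidate):
    """Inverse mean absolute deviation (a stand-in for image comparison)."""
    mad = sum(abs(c - t) for c, t in zip(candidate, TARGET)) / len(TARGET)
    return 1.0 / (1.0 + mad)

def evolve(pop_size=30, generations=60):
    pop = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=similarity, reverse=True)
        parents = pop[: 2 * pop_size // 3]  # retain the top two thirds
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            # per-coordinate crossover plus small Gaussian mutation
            child = [random.choice(pair) + random.gauss(0.0, 0.1)
                     for pair in zip(a, b)]
            children.append(child)
        pop = parents + children
    return max(pop, key=similarity)

best = evolve()
```

The real method compares shaded-relief renderings of candidate DTMs against a post-event image; only the selection/crossover skeleton is reproduced here.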
Constantin, Julian Gelman; Schneider, Matthias; Corti, Horacio R
2016-06-09
The glass transition temperature of trehalose, sucrose, glucose, and fructose aqueous solutions has been predicted as a function of the water content by using the free volume/percolation model (FVPM). This model only requires the molar volume of water in the liquid and supercooled regimes, the molar volumes of the hypothetical pure liquid sugars at temperatures below their pure glass transition temperatures, and the molar volumes of the mixtures at the glass transition temperature. The model is simplified by assuming that the excess thermal expansion coefficient is negligible for saccharide-water mixtures, and this ideal FVPM becomes identical to the Gordon-Taylor model. It was found that the behavior of the water molar volume in trehalose-water mixtures at low temperatures can be obtained by assuming that the FVPM holds for this mixture. The temperature dependence of the water molar volume in the supercooled region of interest seems to be compatible with the recent hypothesis on the existence of two structures of liquid water, with high-density liquid water being the state of water in the sugar solutions. The idealized FVPM describes the measured glass transition temperature of sucrose, glucose, and fructose aqueous solutions with much better accuracy than both the Gordon-Taylor model based on an empirical kGT constant dependent on the saccharide glass transition temperature and the Couchman-Karasz model using experimental heat capacity changes of the components at the glass transition temperature. Thus, FVPM seems to be an excellent tool to predict the glass transition temperature of other aqueous saccharide and polyol solutions by resorting to volumetric information that is easily available.
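The ideal-FVPM limit mentioned above coincides with the Gordon-Taylor form, which (in standard notation, not quoted from this paper) predicts the mixture glass transition temperature from the component values:

```latex
T_g = \frac{w_1 T_{g,1} + k\, w_2 T_{g,2}}{w_1 + k\, w_2}
```

where $w_1$ and $w_2$ are the mass fractions of water and saccharide, $T_{g,1}$ and $T_{g,2}$ are the pure-component glass transition temperatures, and $k$ is the Gordon-Taylor constant. In the idealized FVPM, $k$ follows from volumetric quantities rather than being fitted empirically, which is the distinction the abstract draws.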
NASA Astrophysics Data System (ADS)
Verkaik, A. C.; Beulen, B. W. A. M. M.; Bogaerds, A. C. B.; Rutten, M. C. M.; van de Vosse, F. N.
2009-02-01
To monitor biomechanical parameters related to cardiovascular disease, it is necessary to perform correct volume flow estimations of blood flow in arteries based on local blood velocity measurements. In clinical practice, estimates of flow are currently made using a straight-tube assumption, which may lead to inaccuracies since most arteries are curved. Therefore, this study will focus on the effect of curvature on the axial velocity profile for flow in a curved tube in order to find a new volume flow estimation method. The study is restricted to steady flow, enabling the use of analytical methods. First, analytical approximation methods for steady flow in curved tubes at low Dean numbers (Dn) and low curvature ratios (δ) are investigated. From the results a novel volume flow estimation method, the cos θ-method, is derived. Simulations for curved tube flow in the physiological range (1≤Dn≤1000 and 0.01≤δ≤0.16) are performed with a computational fluid dynamics (CFD) model. The asymmetric axial velocity profiles of the analytical approximation methods are compared with the velocity profiles of the CFD model. Next, the cos θ-method is validated and compared with the currently used Poiseuille method by using the CFD results as input. Comparison of the axial velocity profiles of the CFD model with the approximations derived by Topakoglu [J. Math. Mech. 16, 1321 (1967)] and Siggers and Waters [Phys. Fluids 17, 077102 (2005)] shows that the derived velocity profiles agree very well for Dn≤50 and are fair for 50
IUS/TUG orbital operations and mission support study. Volume 5: Cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
The costing approach, methodology, and rationale utilized for generating cost data for composite IUS and space tug orbital operations are discussed. Summary cost estimates are given along with cost data initially derived for the IUS program and space tug program individually, and cost estimates for each work breakdown structure element.
NASA Astrophysics Data System (ADS)
Batty, Christopher
2017-02-01
This paper introduces a two-dimensional cell-centred finite volume discretization of the Poisson problem on adaptive Cartesian quadtree grids which exhibits second order accuracy in both the solution and its gradients, and requires no grading condition between adjacent cells. At T-junction configurations, which occur wherever resolution differs between neighboring cells, use of the standard centred difference gradient stencil requires that ghost values be constructed by interpolation. To properly recover second order accuracy in the resulting numerical gradients, prior work addressing block-structured grids and graded trees has shown that quadratic, rather than linear, interpolation is required; the gradients otherwise exhibit only first order convergence, which limits potential applications such as fluid flow. However, previous schemes fail or lose accuracy in the presence of the more complex T-junction geometries arising in the case of general non-graded quadtrees, which place no restrictions on the resolution of neighboring cells. We therefore propose novel quadratic interpolant constructions for this case that enable second order convergence by relying on stencils oriented diagonally and applied recursively as needed. The method handles complex tree topologies and large resolution jumps between neighboring cells, even along the domain boundary, and both Dirichlet and Neumann boundary conditions are supported. Numerical experiments confirm the overall second order accuracy of the method in the L∞ norm.
A new, effective and low-cost three-dimensional approach for the estimation of upper-limb volume.
Buffa, Roberto; Mereu, Elena; Lussu, Paolo; Succa, Valeria; Pisanu, Tonino; Buffa, Franco; Marini, Elisabetta
2015-05-26
The aim of this research was to validate a new procedure (SkanLab) for the three-dimensional estimation of total arm volume. SkanLab is based on a single structured-light Kinect sensor (Microsoft, Redmond, WA, USA) and on Skanect (Occipital, San Francisco, CA, USA) and MeshLab (Visual Computing Lab, Pisa, Italy) software. The volume of twelve plastic cylinders was measured using geometry, as the reference, water displacement and SkanLab techniques (two raters and repetitions). The right total arm volume of thirty adults was measured by water displacement (reference) and SkanLab (two raters and repetitions). The bias and limits of agreement (LOA) between techniques were determined using the Bland-Altman method. Intra- and inter-rater reliability was assessed using the intraclass correlation coefficient (ICC) and the standard error of measurement. The bias of SkanLab in measuring the cylinders' volume was -21.9 mL (-5.7%) (LOA: -62.0 to 18.2 mL; -18.1% to 6.7%) and in measuring the arm volumes was -9.9 mL (-0.6%) (LOA: -49.6 to 29.8 mL; -2.6% to 1.4%). SkanLab's intra- and inter-rater reliabilities were very high (ICC >0.99). In conclusion, SkanLab is a fast, safe and low-cost method for assessing total arm volume, with high levels of accuracy and reliability. SkanLab represents a promising tool in clinical applications.
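The Bland-Altman comparison used in this validation can be reproduced in a few lines. The volumes below are hypothetical, and the 1.96·SD limits assume approximately normally distributed differences:

```python
from statistics import mean, stdev

def bland_altman(reference, test):
    """Bias and 95% limits of agreement between two measurement methods
    (illustrative implementation of the analysis named in the abstract)."""
    diffs = [t - r for r, t in zip(reference, test)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical arm volumes (mL): water displacement vs. a 3D scanner
ref = [2510.0, 2600.0, 2450.0, 2700.0, 2550.0]
scan = [2500.0, 2595.0, 2430.0, 2690.0, 2545.0]
bias, lo, hi = bland_altman(ref, scan)
```

Reporting bias together with the limits of agreement, rather than a correlation coefficient alone, is exactly what lets studies like this one state both a systematic offset and its spread.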
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
NASA Astrophysics Data System (ADS)
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.
Xie, Wen-Jia; Wu, Xiao; Xue, Ren-Liang; Lin, Xiang-Ying; Kidd, Elizabeth A.; Yan, Shu-Mei; Zhang, Yao-Hong; Zhai, Tian-Tian; Lu, Jia-Yang; Wu, Li-Li; Zhang, Hao; Huang, Hai-Hua; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
Purpose: To more accurately define clinical target volume for cervical cancer radiation treatment planning by evaluating tumor microscopic extension toward the uterus body (METU) in International Federation of Gynecology and Obstetrics stage Ib-IIa squamous cell carcinoma of the cervix (SCCC). Patients and Methods: In this multicenter study, surgical resection specimens from 318 cases of stage Ib-IIa SCCC that underwent radical hysterectomy were included. Patients who had undergone preoperative chemotherapy, radiation, or both were excluded from this study. Microscopic extension of primary tumor toward the uterus body was measured. The association between other pathologic factors and METU was analyzed. Results: Microscopic extension toward the uterus body was not common, with only 12.3% of patients (39 of 318) demonstrating METU. The mean (±SD) distance of METU was 0.32 ± 1.079 mm (range, 0-10 mm). Lymphovascular space invasion was associated with METU distance and occurrence rate. A margin of 5 mm added to gross tumor would adequately cover 99.4% and 99% of the METU in the whole group and in patients with lymphovascular space invasion, respectively. Conclusion: According to our analysis of 318 SCCC specimens for METU, using a 5-mm gross tumor volume to clinical target volume margin in the direction of the uterus should be adequate for International Federation of Gynecology and Obstetrics stage Ib-IIa SCCC. Considering the discrepancy between imaging and pathologic methods in determining gross tumor volume extent, we recommend a safer 10-mm margin in the uterine direction as the standard for clinical practice when using MRI for contouring tumor volume.
NASA Technical Reports Server (NTRS)
Chin, M. M.; Goad, C. C.; Martin, T. V.
1972-01-01
A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.
A knowledge-based approach to arterial stiffness estimation using the digital volume pulse.
Jang, Dae-Geun; Farooq, Umar; Park, Seung-Hun; Goh, Choong-Won; Hahn, Minsoo
2012-08-01
We have developed a knowledge-based approach for arterial stiffness estimation. The proposed approach reliably estimates arterial stiffness based on the analysis of the age- and heart-rate-normalized reflected wave arrival time. Compared to recently researched noninvasive arterial stiffness estimators, it reduces cost, space requirements, required technical expertise, specialized equipment, and complexity, and increases usability. The proposed method consists of two main stages: pulse feature extraction and linear regression analysis. The approach extracts the pulse features and establishes a linear prediction equation. Evaluated against pulse wave velocity (PWV) based arterial stiffness estimators, the proposed methodology yielded error rates of 8.36% for men and 9.52% for women. With such low error rates and increased benefits, the proposed approach could be usefully applied as a low-cost and effective solution for ubiquitous and home healthcare environments.
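The second stage, fitting a linear prediction equation to a normalized pulse feature, reduces to ordinary least squares. The feature values and reference PWV numbers below are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept: the linear-regression
    stage of such an estimator (hypothetical data and coefficients)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical: normalized reflected-wave arrival time vs. reference PWV (m/s)
t_norm = [0.18, 0.20, 0.22, 0.25, 0.28]
pwv = [11.0, 10.2, 9.5, 8.4, 7.3]
slope, intercept = fit_line(t_norm, pwv)
predicted = slope * 0.24 + intercept  # stiffness estimate for a new subject
```

A stiffer artery reflects the pulse wave sooner, so the fitted slope is negative: shorter normalized arrival times map to higher stiffness estimates.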
NASA Astrophysics Data System (ADS)
Levy, J. S.; Head, J. W.; Fassett, C. I.; Fountain, A. G.
2010-03-01
The morphological properties of two martian depressions suggest ice-cauldron formation. We conduct volumetric and calorimetric estimates showing that up to a cubic km of ice may have been removed in these depressions (melted and/or vaporized).
A method for estimating both the solubility parameters and molar volumes of liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1974-01-01
Development of an indirect method of estimating the solubility parameter of high molecular weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
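Group-additivity schemes of this kind compute the solubility parameter, and the molar volume, from the structural formula alone. The sketch below shows the arithmetic; the group-contribution values are indicative only and should be checked against published tables before use:

```python
import math

# Group contributions (cohesive energy J/mol, molar volume cm^3/mol).
# Values are indicative of Fedors-style tables and quoted for illustration only.
GROUPS = {
    "CH3": (4710.0, 33.5),
    "CH2": (4940.0, 16.1),
    "OH":  (29800.0, 10.0),
}

def solubility_parameter(formula):
    """delta = sqrt(sum(e_i) / sum(v_i)); also returns the estimated molar
    volume, the second quantity such a method provides."""
    e = sum(GROUPS[g][0] * n for g, n in formula.items())
    v = sum(GROUPS[g][1] * n for g, n in formula.items())
    return math.sqrt(e / v), v  # delta in (J/cm^3)^0.5 = MPa^0.5, v in cm^3/mol

# 1-butanol: CH3-CH2-CH2-CH2-OH
delta, vol = solubility_parameter({"CH3": 1, "CH2": 3, "OH": 1})
```

With these illustrative constants the estimate for 1-butanol lands near 23 MPa^0.5 with a molar volume near 92 cm³/mol, which is the right neighborhood, but the point of the sketch is the additive bookkeeping, not the specific numbers.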
Engwell, S L; Aspinall, W P; Sparks, R S J
Characterization of explosive volcanic eruptive processes from the interpretation of deposits is key to assessing volcanic hazard and risk, particularly for infrequent large explosive eruptions and those whose deposits are transient in the geological record. While eruption size (determined by measurement and interpretation of tephra fall deposits) is of particular importance, uncertainties for such measurements and volume estimates are rarely presented. Here, tephra volume estimates are derived from isopach maps produced by modeling raw thickness data as cubic B-spline curves under tension. Isopachs are objectively determined in relation to the original data and enable limitations in volume estimates from published maps to be investigated. The eruption volumes derived using spline isopachs differ from selected published estimates by 15-40 %, reflecting uncertainties in the volume estimation process. The formalized analysis enables identification of sources of uncertainty; eruptive volume uncertainties (>30 %) are much greater than thickness measurement uncertainties (~10 %). The number of measurements is a key factor in volume estimate uncertainty, regardless of the method utilized for isopach production. Deposits processed using the cubic B-spline method are well described by 60 measurements distributed across each deposit; however, this figure is deposit and distribution dependent, increasing for geometrically complex deposits, such as those exhibiting bilobate dispersion.
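Once isopachs are drawn (by spline fitting or otherwise), the volume estimate itself is an integration of enclosed area over thickness. A minimal trapezoidal version with hypothetical isopach data:

```python
def tephra_volume_km3(isopachs):
    """Trapezoidal integration of isopach-enclosed area A(T) over thickness T.
    `isopachs`: (thickness_m, enclosed_area_km2) pairs. This is a generic
    integration step, not the cubic B-spline fitting itself."""
    pts = sorted(isopachs)  # thinnest isopach first
    vol = 0.0
    for (t1, a1), (t2, a2) in zip(pts, pts[1:]):
        vol += 0.5 * (a1 + a2) * (t2 - t1) / 1000.0  # convert m to km
    return vol

# Hypothetical deposit: enclosed area shrinks as thickness increases
v = tephra_volume_km3([(1.0, 10.0), (2.0, 6.0), (3.0, 2.0)])
```

Real workflows also extrapolate beyond the thickest and thinnest measured isopachs, which is one of the uncertainty sources the abstract's formalized analysis quantifies.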
NASA Astrophysics Data System (ADS)
Berx, B.; Hansen, B.; Østerhus, S.; Larsen, K. M.; Sherwin, T.; Jochumsen, K.
2013-07-01
From 1994 to 2011, instruments measuring ocean currents (Acoustic Doppler Current Profilers; ADCPs) have been moored on a section crossing the Faroe-Shetland Channel. Together with CTD (Conductivity Temperature Depth) measurements from regular research vessel occupations, they describe the flow field and water mass structure in the channel. Here, we use these data to calculate the average volume transport and properties of the flow of warm water through the channel from the Atlantic towards the Arctic, termed the Atlantic inflow. We find the average volume transport of this flow to be 2.7 ± 0.5 Sv (1 Sv = 10⁶ m³ s⁻¹) between the shelf edge on the Faroe side and the 150 m isobath on the Shetland side. The average heat transport (relative to 0 °C) was estimated to be 107 ± 21 TW (1 TW = 10¹² W) and the average salt import to be 98 ± 20 × 10⁶ kg s⁻¹. Transport values for individual months, based on the ADCP data, show a large degree of variability, but can be used to calibrate sea level height data from satellite altimetry. In this way, a time series of volume transport has been generated back to the beginning of satellite altimetry in December 1992. The Atlantic inflow has a seasonal variation in volume transport that peaks around the turn of the year and has an amplitude of 0.7 Sv. The Atlantic inflow has become warmer and more saline since 1994, but no equivalent trend in volume transport was observed.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
1985-06-01
Reproduction of this document in whole or in part is prohibited except with the permission of the Commander, Letterman Army Institute of Research...(30 to 70 ml) of porcine blood was placed in a beaker along with an accurately measured dose of I-bovine albumin or Na 2SO . The blood and isotope were...centrifuged at 12,000 g for 5 minutes. The remaining blood was centrifuged at 29? g for 10 minutes; plasma was collected and the activity of I-bovine
ERIC Educational Resources Information Center
Warne, Russell T.
2016-01-01
Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…
Scott, P K; Finley, B L; Sung, H M; Schulze, R H; Turner, D B
1997-07-01
The primary health concern associated with chromite ore processing residues (COPR) at sites in Hudson County, NJ, is the inhalation of Cr(VI) suspended from surface soils. Since health-based soil standards for Cr(VI) will be derived using the inhalation pathway, soil suspension modeling will be necessary to estimate site-specific, health-based soil cleanup levels (HBSCLs). The purpose of this study was to identify the most appropriate particulate emission and air dispersion models for estimating soil suspension at these sites based on their theoretical underpinnings, scientific acceptability, and past performance. The identified modeling approach, the AP-42 particulate emission model and the fugitive dust model (FDM), was used to calculate concentrations of airborne Cr(VI) and TSP at two COPR sites. These estimated concentrations were then compared to concentrations measured at each site. The TSP concentrations calculated using the AP-42/FDM soil suspension modeling approach were all within a factor of 3 of the measured concentrations. The majority of the estimated air concentrations were greater than the measured, indicating that the AP-42/FDM approach tends to overestimate on-site concentrations. The site-specific Cr(VI) HBSCLs for these two sites calculated using this conservative soil suspension modeling approach ranged from 190 to 420 mg/kg.
Tug fleet and ground operations schedules and controls. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
Cost data for the tug DDT&E and operations phases are presented. Option 6 is the recommended option selected from seven options considered and was used as the basis for ground processing estimates. Option 6 provides for processing the tug in a factory clean environment in the low bay area of VAB with subsequent cleaning to visibly clean. The basis and results of the trade study to select Option 6 processing plan is included. Cost estimating methodology, a work breakdown structure, and a dictionary of WBS definitions is also provided.
Piper, Rory J; Yoong, Michael M; Pujar, Suresh; Chin, Richard F
2014-01-01
Background Correcting volumetric measurements of brain structures for intracranial volume (ICV) is important in comparing volumes across subjects with different ICV. The aim of this study was to investigate whether intracranial area (ICA) reliably predicts actual ICV in a healthy pediatric cohort and in children with convulsive status epilepticus (CSE). Methods T1-weighted volumetric MRI was performed on 20 healthy children (control group), 10 with CSE with structurally normal MRI (CSE/MR-), and 12 with CSE with structurally abnormal MRI (CSE/MR+). ICA, using a mid-sagittal slice, and the actual ICV were measured. Results A high Spearman correlation was found between the ICA and ICV measurements in the control (r = 0.96; P < 0.0001), CSE/MR− (r = 0.93; P = 0.0003), and CSE/MR+ (r = 0.94; P < 0.0001) groups. On comparison of predicted and actual ICV, there was no significant difference in the CSE/MR− group (P = 0.77). However, the comparison between predicted and actual ICV was significantly different in the CSE/MR+ (P = 0.001) group. Our Bland–Altman plot showed that the ICA method consistently overestimated ICV in children in the CSE/MR+ group, especially in those with small ICV or widespread structural abnormalities. Conclusions After further validation, ICA measurement may be a reliable alternative to measuring actual ICV when correcting volume measurements for ICV, even in children with localized MRI abnormalities. Caution should be applied when the method is used in children with small ICV and those with multilobar brain pathology. PMID:25365798
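The prediction-then-Bland-Altman comparison used in this study can be sketched as follows, with hypothetical ICA/ICV pairs standing in for the MRI measurements:

```python
# Predict ICV from ICA by least squares, then assess agreement with
# Bland-Altman statistics (bias and 95% limits of agreement).
import numpy as np

ica = np.array([130.0, 140.0, 150.0, 160.0, 170.0])       # intracranial area, cm^2
icv = np.array([1190.0, 1300.0, 1420.0, 1515.0, 1630.0])  # actual ICV, cm^3

slope, intercept = np.polyfit(ica, icv, 1)  # least-squares prediction line
pred = slope * ica + intercept

diff = pred - icv                            # predicted minus actual
bias = diff.mean()                           # systematic over/underestimation
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
```

A consistent overestimation, as reported for the CSE/MR+ group, would show up as a bias well away from zero.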
Programmatic methods for addressing contaminated volume uncertainties
Rieman, C.R.; Spector, H.L.; Durham, L.A.; Johnson, R.L.
2007-07-01
Accurate estimates of the volumes of contaminated soils or sediments are critical to effective program planning and to successfully designing and implementing remedial actions. Unfortunately, data available to support the pre-remedial design are often sparse and insufficient for accurately estimating contaminated soil volumes, resulting in significant uncertainty associated with these volume estimates. The uncertainty in the soil volume estimates significantly contributes to the uncertainty in the overall project cost estimates, especially since excavation and off-site disposal are the primary cost items in soil remedial action projects. The U.S. Army Corps of Engineers Buffalo District's experience has been that historical contaminated soil volume estimates developed under the Formerly Utilized Sites Remedial Action Program (FUSRAP) often underestimated the actual volume of subsurface contaminated soils requiring excavation during the course of a remedial activity. In response, the Buffalo District has adopted a variety of programmatic methods for addressing contaminated volume uncertainties. These include developing final status survey protocols prior to remedial design, explicitly estimating the uncertainty associated with volume estimates, investing in pre-design data collection to reduce volume uncertainties, and incorporating dynamic work strategies and real-time analytics in pre-design characterization and remediation activities. This paper describes some of these experiences in greater detail, drawing from the knowledge gained at Ashland 1, Ashland 2, Linde, and Rattlesnake Creek. In the case of Rattlesnake Creek, these approaches provided the Buffalo District with an accurate pre-design contaminated volume estimate and resulted in one of the first successful FUSRAP fixed-price remediation contracts for the Buffalo District. (authors)
White matter atlas of the human spinal cord with estimation of partial volume effect.
Lévy, S; Benhamou, M; Naaman, C; Rainville, P; Callot, V; Cohen-Adad, J
2015-10-01
Template-based analysis has proven to be an efficient, objective and reproducible way of extracting relevant information from multi-parametric MRI data. Using common atlases, it is possible to quantify MRI metrics within specific regions without the need for manual segmentation. This method is therefore free from user-bias and amenable to group studies. While template-based analysis is common procedure for the brain, there is currently no atlas of the white matter (WM) spinal pathways. The goals of this study were: (i) to create an atlas of the white matter tracts compatible with the MNI-Poly-AMU template and (ii) to propose methods to quantify metrics within the atlas that account for partial volume effect. The WM atlas was generated by: (i) digitalizing an existing WM atlas from a well-known source (Gray's Anatomy), (ii) registering this atlas to the MNI-Poly-AMU template at the corresponding slice (C4 vertebral level), (iii) propagating the atlas throughout all slices of the template (C1 to T6) using regularized diffeomorphic transformations and (iv) computing partial volume values for each voxel and each tract. Several approaches were implemented and validated to quantify metrics within the atlas, including weighted-average and Gaussian mixture models. Proof-of-concept application was done in five subjects for quantifying magnetization transfer ratio (MTR) in each tract of the atlas. The resulting WM atlas showed consistent topological organization and smooth transitions along the rostro-caudal axis. The median MTR across tracts was 26.2. Significant differences were detected across tracts, vertebral levels and subjects, but not across laterality (right-left). Among the different tested approaches to extract metrics, the maximum a posteriori showed highest performance with respect to noise, inter-tract variability, tract size and partial volume effect. This new WM atlas of the human spinal cord overcomes the biases associated with manual delineation and partial
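The weighted-average extraction tested in the study amounts to weighting each voxel's metric by the tract's partial-volume fraction; a minimal sketch with made-up values:

```python
# Partial-volume-weighted average of a metric (e.g., MTR) within an atlas tract:
# each voxel contributes in proportion to the tract's partial-volume fraction.
import numpy as np

mtr = np.array([25.0, 27.0, 26.0, 30.0])  # metric values in four voxels
pv = np.array([1.0, 0.8, 0.5, 0.1])       # tract partial-volume fractions

weighted_mtr = float((mtr * pv).sum() / pv.sum())
```

The maximum a posteriori approach favoured in the study generalizes this by modelling each voxel as a mixture of tract signals rather than a single weighted sum.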
Glacier Volume Change Estimation Using Time Series of Improved Aster Dems
NASA Astrophysics Data System (ADS)
Girod, Luc; Nuth, Christopher; Kääb, Andreas
2016-06-01
Volume change data is critical to the understanding of glacier response to climate change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system embarked on the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available to date, the potential of ASTER data lies in its long consistent time series that is unrivaled, though not fully exploited for change analysis due to lack of data accuracy and precision. Here, we developed an improved method for ASTER DEM generation and implemented it in the open source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficients (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 km and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. Our sensor modeling does not require ground control points and allows thus potentially for the automatic processing of large data volumes. As a proof of concept, we chose a set of glaciers with reference DEMs available to assess the quality of our measurements. We use time series of ASTER scenes from which we extracted DEMs with a ground sampling distance of 15m. Our method directly measures and accounts for the cross-track component of jitter so that the resulting DEMs are not contaminated by this process. Since the along-track component of jitter has the same direction as the stereo parallaxes, the two cannot be separated and the elevations extracted are thus contaminated by along-track jitter. Initial tests reveal no clear relation between the cross-track and along-track components so that the latter seems not to be
Kjelstrom, L.C.; Berenbrock, C.
1996-12-31
Estimates of the 100-year peak flows and flow volumes that could enter the INEL area from the Big Lost River and Birch Creek are needed as input data for models that will be used to delineate the extent of the 100-year flood plain at the INEL. The purpose of this report is to provide those estimates and to describe the methods, procedures, and assumptions used to derive them.
ERIC Educational Resources Information Center
Blazer, Christie; Froman, Terry; Romanik, Dale
2007-01-01
Research Services calculates enrollment projections on an annual basis. These projections are presented each year at the district's Pupil Population Estimating Conference. For this year's projections, two years of trend data (2006-07 and 2007-08) were used to project student enrollment for 2008-09. Projections are provided by individual grade…
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. A user oriented description of the program input requirements, program output, deck setup, and operating instructions is presented.
NASA Technical Reports Server (NTRS)
Daly, J. K.
1974-01-01
The programming techniques used to implement the equations and mathematical techniques of the Houston Operations Predictor/Estimator (HOPE) orbit determination program on the UNIVAC 1108 computer are described. Detailed descriptions are given of the program structure, the internal program tables and program COMMON, modification and maintenance techniques, and individual subroutine documentation.
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes the engine performance data is described. The method estimates the installed performance of aircraft gas turbine engines. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag.
Sherwood, J.M.
1993-01-01
Methods are presented to estimate peak-frequency relations, flood hydrographs, and volume-duration-frequency relations of urban streams in Ohio with drainage areas less than 6.5 square miles. The methods were developed to assist planners in the design of hydraulic structures for which hydrograph routing is required or where the temporary storage of water is an important element of the design criteria. Examples of how to use the methods also are presented. The data base for the analyses consisted of 5-minute rainfall-runoff data collected for a period of 5 to 8 years at 62 small drainage basins distributed throughout Ohio. The U.S. Geological Survey rainfall-runoff model A634 was used and was calibrated for each site. The calibrated models were used in conjunction with long-term (66-87 years) rainfall and evaporation records to synthesize a long-term series of flood-hydrograph records at each site. A method was developed and used to increase the variance of the synthetic flood characteristics in order to make them more representative of observed flood characteristics. Multiple-regression equations were developed to estimate peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The explanatory variables in the peak-discharge equations are drainage area, average annual precipitation, and basin development factor. Average standard errors of prediction for the peak-frequency equations range from ±34 to ±40 percent. A method is presented to estimate flood hydrographs by applying a specific peak discharge and basin lagtime to a dimensionless hydrograph. An equation was developed to estimate basin lagtime in which main-channel length divided by the square root of the main-channel slope (L/SL) and basin-development factor are the explanatory variables and the average standard error of prediction is ±53 percent. A dimensionless hydrograph originally developed by the U.S. Geological Survey for use in Georgia was verified for use in urban areas of
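Applying a peak discharge and basin lagtime to a dimensionless hydrograph, as described above, is a simple scaling; the ordinates below are hypothetical (the USGS report tabulates the real ones):

```python
# Scale a dimensionless hydrograph (t/lagtime, Q/Qpeak) into real units.
peak_q = 120.0   # estimated peak discharge, ft^3/s (illustrative)
lagtime = 2.0    # estimated basin lagtime, hours (illustrative)

dimensionless = [(0.25, 0.10), (0.50, 0.45), (0.75, 0.80), (1.00, 1.00),
                 (1.25, 0.75), (1.50, 0.40), (2.00, 0.10)]  # hypothetical ordinates

# Each ordinate becomes (time in hours, discharge in ft^3/s).
hydrograph = [(t_ratio * lagtime, q_ratio * peak_q)
              for t_ratio, q_ratio in dimensionless]
```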
Estimating the volume and age of water stored in global lakes using a geo-statistical approach
NASA Astrophysics Data System (ADS)
Messager, Mathis Loïc; Lehner, Bernhard; Grill, Günther; Nedeva, Irena; Schmitt, Oliver
2016-12-01
Lakes are key components of biogeochemical and ecological processes, thus knowledge about their distribution, volume and residence time is crucial in understanding their properties and interactions within the Earth system. However, global information is scarce and inconsistent across spatial scales and regions. Here we develop a geo-statistical model to estimate the volume of global lakes with a surface area of at least 10 ha based on the surrounding terrain information. Our spatially resolved database shows 1.42 million individual polygons of natural lakes with a total surface area of 2.67 × 10⁶ km² (1.8% of global land area), a total shoreline length of 7.2 × 10⁶ km (about four times longer than the world's ocean coastline) and a total volume of 181.9 × 10³ km³ (0.8% of total global non-frozen terrestrial water stocks). We also compute mean and median hydraulic residence times for all lakes to be 1,834 days and 456 days, respectively.
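A toy version of such a geo-statistical model can be sketched as a log-log regression of volume on surface area and a terrain predictor; the numbers below are invented, and the actual model uses richer terrain information:

```python
# Toy geo-statistical lake-volume model: ln(V) ~ a + b*ln(area) + c*ln(slope).
import numpy as np

area = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])     # lake surface area, km^2
slope = np.array([2.0, 3.0, 5.0, 4.0, 6.0])          # mean shore terrain slope, deg
volume = np.array([0.0004, 0.006, 0.09, 0.8, 12.0])  # lake volume, km^3

X = np.column_stack([np.ones_like(area), np.log(area), np.log(slope)])
coef, *_ = np.linalg.lstsq(X, np.log(volume), rcond=None)

def predict_volume(a_km2, s_deg):
    """Predicted volume (km^3) for a lake of given area and shore slope."""
    return float(np.exp(coef @ [1.0, np.log(a_km2), np.log(s_deg)]))
```

Summing such predictions over every lake polygon in a global database yields the kind of total-volume estimate the abstract reports.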
Xu, Ming; Lei, Zhipeng; Yang, James
2015-01-01
N95 filtering facepiece respirator (FFR) dead space is an important factor for respirator design. The dead space refers to the cavity between the internal surface of the FFR and the wearer's facial surface. This article presents a novel method to estimate the dead space volume of FFRs and experimental validation. In this study, six FFRs and five headforms (small, medium, large, long/narrow, and short/wide) are used for various FFR and headform combinations. Microsoft Kinect Sensors (Microsoft Corporation, Redmond, WA) are used to scan the headforms without respirators and then scan the headforms with the FFRs donned. The FFR dead space is formed through geometric modeling software, and finally the volume is obtained through LS-DYNA (Livermore Software Technology Corporation, Livermore, CA). In the experimental validation, water is used to measure the dead space. The simulation and experimental dead space volumes are 107.5-167.5 mL and 98.4-165.7 mL, respectively. Linear regression analysis is conducted to correlate the results from Kinect and water, and R² = 0.85.
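The Kinect-versus-water comparison is an ordinary linear regression with an R² statistic; a sketch with hypothetical paired readings:

```python
# R^2 between two volume-measurement methods via least-squares regression.
import numpy as np

kinect = np.array([107.5, 120.0, 135.0, 150.0, 167.5])  # simulated dead space, mL
water = np.array([98.4, 118.0, 130.0, 152.0, 165.7])    # water measurement, mL

slope, intercept = np.polyfit(kinect, water, 1)
pred = slope * kinect + intercept

ss_res = ((water - pred) ** 2).sum()          # residual sum of squares
ss_tot = ((water - water.mean()) ** 2).sum()  # total sum of squares
r2 = 1.0 - ss_res / ss_tot
```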
NASA Astrophysics Data System (ADS)
Gusyev, Maksym; Yamazaki, Yusuke; Morgenstern, Uwe; Stewart, Mike; Kashiwaya, Kazuhisa; Hirai, Yasuyuki; Kuribayashi, Daisuke; Sawano, Hisaya
2015-04-01
The goal of this study is to estimate subsurface water transit times and volumes in headwater catchments of Hokkaido, Japan, using the New Zealand high-accuracy tritium analysis technique. Transit time provides insights into the subsurface water storage and therefore provides a robust and quick approach to quantifying the subsurface groundwater volume. Our method is based on tritium measurements in river water. Tritium is a component of meteoric water, decays with a half-life of 12.32 years, and is inert in the subsurface after the water enters the groundwater system. Therefore, tritium is ideally suited for characterization of the catchment's responses and can provide information on mean water transit times up to 200 years. Only in recent years has it become possible to use tritium for dating of stream and river water, due to the fading impact of the bomb-tritium from thermo-nuclear weapons testing, and due to improved measurement accuracy for the extremely low natural tritium concentrations. Transit time of the water discharge is one of the most crucial parameters for understanding the response of catchments and estimating subsurface water volume. While many tritium transit time studies have been conducted in New Zealand, only a limited number of tritium studies have been conducted in Japan. In addition, the meteorological, orographic and geological conditions of Hokkaido Island are similar to those in parts of New Zealand, allowing for comparison between these regions. In 2014, three field trips were conducted in Hokkaido in June, July and October to sample river water at river gauging stations operated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). These stations have altitudes between 36 m and 860 m MSL and drainage areas between 45 and 377 km2. Each sampled point is located upstream of MLIT dams, with hourly measurements of precipitation and river water levels enabling us to distinguish between the snow melt and baseflow contributions
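Tritium dating rests on radioactive decay with a 12.32-year half-life; a minimal sketch of the decay relation and a piston-flow age (a simplification of the lumped-parameter models actually used in such studies):

```python
# Tritium decay and a simple piston-flow transit-time estimate.
import math

HALF_LIFE = 12.32                  # tritium half-life, years
LAM = math.log(2) / HALF_LIFE      # decay constant, 1/years

def remaining_fraction(t_years):
    """Fraction of tritium remaining after t years of decay."""
    return math.exp(-LAM * t_years)

def piston_flow_age(measured_tu, input_tu):
    """Transit time (years) assuming pure piston flow, no mixing."""
    return -math.log(measured_tu / input_tu) / LAM
```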
NASA Astrophysics Data System (ADS)
Valori, Gherardo; Pariat, Etienne; Anfinogentov, Sergey; Chen, Feng; Georgoulis, Manolis K.; Guo, Yang; Liu, Yang; Moraitis, Kostas; Thalmann, Julia K.; Yang, Shangbin
2016-11-01
Magnetic helicity is a conserved quantity of ideal magneto-hydrodynamics characterized by an inverse turbulent cascade. Accordingly, it is often invoked as one of the basic physical quantities driving the generation and structuring of magnetic fields in a variety of astrophysical and laboratory plasmas. We provide here the first systematic comparison of six existing methods for the estimation of the helicity of magnetic fields known in a finite volume. All such methods are reviewed, benchmarked, and compared with each other, and specifically tested for accuracy and sensitivity to errors. To that purpose, we consider four groups of numerical tests, ranging from solutions of the three-dimensional, force-free equilibrium, to magneto-hydrodynamical numerical simulations. Almost all methods are found to produce the same value of magnetic helicity within a few percent in all tests. In the more solar-relevant and realistic of the tests employed here, the simulation of an eruptive flux rope, the spread in the computed values obtained by all but one method is only 3%, indicating the reliability and mutual consistency of such methods in appropriate parameter ranges. However, methods show differences in the sensitivity to numerical resolution and to errors in the solenoidal property of the input fields. In addition to finite volume methods, we also briefly discuss a method that estimates helicity from the field lines' twist, and one that exploits the field's value at one boundary and a coronal minimal connectivity instead of a pre-defined three-dimensional magnetic-field solution.
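A finite-volume helicity estimate is, at its simplest, the discretized integral H = ∫ A·B dV; the sketch below uses a Beltrami (ABC-type) field for which a vector potential equals the field itself, so the integrand reduces to |B|² (real estimation methods must construct A and fix a gauge):

```python
# Discretized helicity H = sum(A . B) * dV for an ABC field with A = B.
import numpy as np

n, L = 16, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)  # periodic grid, no duplicated endpoint
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (L / n) ** 3

# ABC field with A = B = C = 1; curl B = B, so A = B is a valid vector potential.
Bx = np.sin(Z) + np.cos(Y)
By = np.sin(X) + np.cos(Z)
Bz = np.sin(Y) + np.cos(X)

H = ((Bx**2 + By**2 + Bz**2) * dV).sum()  # A . B = |B|^2 for this field
```

For this field the analytic value is H = 3(2π)³ = 24π³, which the discrete sum matches.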
Saha, K; Shahida, S M; Chowdhury, N I; Mostafa, G; Saha, S K; Jahan, S
2014-10-01
Low birth weight (LBW) baby predisposes to long term renal disease, adult hypertension and related cardiovascular disease. This could be due to reduced nephron number in early life. From different studies, it is becoming increasingly clear that nephron number, indirectly reflected in renal volume, may be related with normal or retarded foetal growth. This prospective study was undertaken in the department of Obstetrics and Gynaecology in Bangabandhu Sheikh Mujib Medical University, Dhaka, Bangladesh. One hundred pregnant women were included in this study and divided into two groups (IUGR and normally growing foetuses). Forty-one foetuses weighed less than 2.5 kg and fifty-nine foetuses weighed 2.5 kg or more. Kidney dimensions and estimated foetal weight were measured by USG by the same ultrasonologist. There were no significant differences between the two groups regarding age, height, weight, and parity. The subjects with intrauterine growth retardation had smaller head circumference, abdominal circumferences, biparietal diameters, femur length, estimated foetal weight and lower amniotic fluid indices than did the subjects with non-intrauterine growth retardation (IUGR). All biometric data showed significant differences except head circumference (HC). Intrauterine growth retardation (IUGR) foetuses had significantly lower kidney volume than normally growing foetuses.
NASA Astrophysics Data System (ADS)
Pandey, Apoorva; Venkataraman, Chandra
2014-12-01
Urbanization and rising household incomes in India have led to growing transport demand, particularly during 1990-2010. Emissions from transportation have been implicated in air quality and climate effects. In this work, emissions of particulate matter (PM2.5, or mass concentration of particles smaller than 2.5 μm diameter), black carbon (BC) and organic carbon (OC), were estimated from the transport sector in India, using detailed technology divisions and regionally measured emission factors. Modes of transport addressed in this work include road transport, railways, shipping and aviation, but exclude off-road equipment like diesel machinery and tractors. For road transport, a vehicle fleet model was used, with parameters derived from vehicle sales, registration data, and surveyed age-profile. The fraction of extremely high emitting vehicles, or superemitters, which is highly uncertain, was assumed as 20%. Annual vehicle utilization estimates were based on regional surveys and user population. For railways, shipping and aviation, a top-down approach was applied, using nationally reported fuel consumption. Fuel use and emissions from on-road vehicles were disaggregated at the state level, with separate estimates for 30 cities in India. The on-road fleet was dominated by two-wheelers, followed by four- and three-wheelers, with new vehicles comprising the majority of the fleet for each vehicle type. A total of 276 (-156, 270) Gg/y PM2.5, 144 (-99, 207) Gg/y BC, and 95 (-64, 130) Gg/y OC emissions were estimated, with over 97% contribution from on-road transport. Largest emitters were identified as heavy duty diesel vehicles for PM2.5 and BC, but two-stroke vehicles and superemitters for OC. Old vehicles (pre-2005) contributed significantly more (∼70%) emissions, while their share in the vehicle fleet was smaller (∼45%). Emission estimates were sensitive to assumed superemitter fraction. Improvement of emission estimates requires on-road emission factor measurements
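The bottom-up inventory logic reduces to emissions = vehicle count × annual kilometres × emission factor, summed over technology classes; the fleet numbers and emission factors below are hypothetical, not the study's:

```python
# Bottom-up fleet emission inventory: sum over technology classes.
fleet = {
    # vehicle class: (count, annual km per vehicle, PM2.5 emission factor g/km)
    "two_wheeler": (10_000_000, 8_000, 0.05),
    "car": (2_000_000, 12_000, 0.03),
    "heavy_duty_truck": (500_000, 60_000, 0.50),
}

# Total PM2.5 in Gg/y (1 Gg = 10^9 g).
pm25_Gg = sum(n * km * ef for n, km, ef in fleet.values()) / 1e9
```

Even in this toy fleet, the small heavy-duty class dominates the total, mirroring the study's finding that heavy duty diesel vehicles are the largest PM2.5 emitters.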
NASA Technical Reports Server (NTRS)
Martin, T. V.; Mullins, N. E.
1972-01-01
The operating and set-up procedures for the multi-satellite, multi-arc GEODYN Orbit Determination program are described. All system output is analyzed. The GEODYN Program is the nucleus of the entire GEODYN system. It is a definitive orbit and geodetic parameter estimation program capable of simultaneously processing observations from multiple arcs of multiple satellites. GEODYN has two modes of operation: (1) the data reduction mode and (2) the orbit generation mode.
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
NASA Astrophysics Data System (ADS)
McCook, G. P.; Guinan, E. F.; Saumon, D.; Kang, Y. W.
1997-05-01
CM Draconis (Gl 630.1; Vmax = +12.93) is an important eclipsing binary consisting of two dM4.5e stars with an orbital period of 1.2684 days. This binary is a high velocity star (s= 164 km/s) and the brighter member of a common proper motion pair with a cool faint white dwarf companion (LP 101-16). CM Dra and its white dwarf companion were once considered by Zwicky to belong to a class of "pygmy stars", but they turned out to be ordinary old, cool white dwarfs or faint red dwarfs. Lacy (ApJ 218,444L) determined the first orbital and physical properties of CM Dra from the analysis of his light and radial velocity curves. In addition to providing directly measured masses, radii, and luminosities for low mass stars, CM Dra was also recognized by Lacy and later by Paczynski and Sienkiewicz (ApJ 286,332) as an important laboratory for cosmology, as a possible old Pop II object where it may be possible to determine the primordial helium abundance. Recently, Metcalfe et al.(ApJ 456,356) obtained accurate RV measures for CM Dra and recomputed refined elements along with its helium abundance. Starting in 1995, we have been carrying out intensive RI photoelectric photometry of CM Dra to obtain well defined, accurate light curves so that its fundamental properties can be improved, and at the same time, to search for evidence of planets around the binary from planetary transit eclipses. During 1996 and 1997 well defined light curves were secured and these were combined with the RV measures of Metcalfe et al. (1996) to determine the orbital and physical parameters of the system, including a refined orbital period. A recent version of the Wilson-Devinney program was used to analyze the data. New radii, masses, mean densities, Teff, and luminosities were found as well as a re-determination of the helium abundance (Y). The results of the recent analyses of the light and RV curves will be presented and modelling results discussed. This research is supported by NSF grants AST-9315365
Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M
2016-08-01
Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS, values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and times to equilibrium for a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, confirming the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) and in ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which bracket the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurement of KPDMS-Air.
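The time-to-equilibrium figures quoted above can be illustrated with the standard first-order uptake model for passive samplers. This is a hedged sketch: the first-order form and the tau values below are illustrative assumptions, not taken from the study.

```python
import math

# First-order passive-sampler uptake: the fraction of equilibrium
# capacity reached after time t is f(t) = 1 - exp(-t / tau), where tau
# is the sampler's characteristic response time (it grows with the
# partition coefficient and the film thickness). Inverting for t gives
# the time needed to reach a target fraction.

def time_to_fraction(tau_days: float, fraction: float) -> float:
    """Days needed to reach `fraction` of equilibrium capacity."""
    return -tau_days * math.log(1.0 - fraction)

# A volatile compound (small tau, value assumed here) reaches 25%
# capacity within days...
print(time_to_fraction(tau_days=3.5, fraction=0.25))
# ...while a very low-volatility compound (large tau, also assumed)
# takes orders of magnitude longer, as in the abstract's ~500-year case.
print(time_to_fraction(tau_days=6.0e5, fraction=0.25))
```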
Delaney, P.T.; McTigue, D.F.
1994-01-01
An elastic point source model proposed by Mogi for magma chamber inflation and deflation has been applied to geodetic data collected at many volcanoes. The volume of ground surface uplift or subsidence estimated from this model is closely related to the volume of magma injected into or withdrawn from the reservoir below. The analytical expressions for these volumes are reviewed for a spherical chamber, and it is shown that they differ by the factor 2(1-v), where v is Poisson's ratio of the host rock. For the common estimate v=0.25, as used by Mogi and subsequent workers, the uplift volume is 3/2 the injection volume. For highly fractured rocks, v can be even less, and the uplift volume can approach twice the injection volume. Unfortunately, there is no single relation between the inflation of magma reservoirs and the dilation or contraction of host rocks. The inflation of sill-like bodies, for instance, generates no overall change in host rock volume. Inflation of dike-like bodies generates contraction such that, in contrast with Mogi's result, the uplift volume is generally less than the injection volume; for v=0.25, the former is only 3/4 of the latter. Estimates of volumes of magma injection or withdrawal are therefore greatly dependent on the magma reservoir configuration. Ground surface tilt data collected during the 1960 collapse of Kilauea crater, one of the first events interpreted with Mogi's model and one of the largest collapses measured at Kilauea, do not uniquely favor any one of a variety of deformation models. These models, however, predict substantially different volumes of both magma withdrawal and ground surface subsidence. © 1994 Springer-Verlag.
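The spherical-chamber arithmetic above can be sketched directly. The factor 2(1-v) is taken from the abstract; the function below merely evaluates it for a few values of Poisson's ratio.

```python
# Ratio of ground-surface uplift volume to magma injection volume for a
# spherical (Mogi) point source, as stated in the abstract:
#   dV_uplift / dV_inject = 2 * (1 - nu)
# where nu is Poisson's ratio of the host rock.

def mogi_uplift_ratio(nu: float) -> float:
    """Uplift/injection volume ratio for a spherical magma chamber."""
    return 2.0 * (1.0 - nu)

# For the common estimate nu = 0.25, uplift is 3/2 the injection volume.
print(mogi_uplift_ratio(0.25))  # 1.5
# For highly fractured rock (nu approaching 0), the ratio approaches 2.
print(mogi_uplift_ratio(0.0))   # 2.0
```

Note that, per the abstract, this ratio applies only to spherical chambers: a sill-like body produces no net host-rock volume change, and a dike-like body gives an uplift volume smaller than the injection volume (3/4 at nu = 0.25).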
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method that utilizes engine performance data to estimate the installed performance of aircraft gas turbine engines is presented. The installation effects accounted for include engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies is described, and an example of the 1000 series of engine data tables is presented.
Orbital Spacecraft Consumables Resupply System (OSCRS). Volume 3: Program Cost Estimate
NASA Technical Reports Server (NTRS)
Perry, D. L.
1986-01-01
A cost analysis for the design, development, qualification, and production of the monopropellant and bipropellant Orbital Spacecraft Consumables Resupply System (OSCRS) tankers, their associated avionics located in the Orbiter payload bay, and the unique ground support equipment (GSE) and airborne support equipment (ASE) required to support operations is presented. Monopropellant resupply for the Gamma Ray Observatory (GRO) in calendar year 1991 is the first defined resupply mission, with bipropellant resupply missions expected in the early to mid 1990s. The monopropellant program estimate also includes contractor costs associated with operations support through the first GRO resupply mission.
Budget estimates: Fiscal year 1994. Volume 3: Research and program management
NASA Technical Reports Server (NTRS)
1994-01-01
The research and program management (R&PM) appropriation provides the salaries, other personnel and related costs, and travel support for NASA's civil service workforce. This FY 1994 budget funds costs associated with 23,623 full-time equivalent (FTE) work years. Budget estimates are provided for all NASA centers by categories such as space station and new technology investments, space flight programs, space science, life and microgravity sciences, advanced concepts and technology, center management and operations support, launch services, mission to planet earth, tracking and data programs, aeronautical research and technology, and safety, reliability, and quality assurance.
NASA Astrophysics Data System (ADS)
Ghosh, S.; Bergquist, B. A.; Schauble, E. A.; Blum, J. D.
2009-05-01
Mercury is a globally distributed pollutant; its toxicity and biomagnification in aquatic food chains, even in remote areas, make it a serious worldwide problem. As with other stable isotope systems, the isotopic composition of environmental Hg is potentially a new tool for better understanding the biogeochemical cycling, fluxes, and anthropogenic impacts of Hg. The promise of Hg isotopes is even more exciting with the recent discovery of large mass independent fractionation (MIF) displayed by the odd Hg isotopes (199Hg and 201Hg). Based on current theory, MIF of Hg isotopes can arise either from the non-linear scaling of nuclear volume with mass for heavy isotopes (Nuclear Volume Effect, NVE) or from the magnetic isotope effect (MIE), which is due to the non-zero nuclear spin and nuclear magnetic moments of odd-mass isotopes. In order to interpret and use Hg MIF signatures in nature, both experimental and theoretical work is needed to better understand the controls and expression of MIF along with its underlying mechanisms. The goal of the current study was to design an experiment that would express the NVE in order to confirm theoretical predictions of the isotopic signature of the NVE for Hg. Unfortunately, both NVE and MIE predict MIF for the odd isotopes only. However, since MIE is a kinetic phenomenon only, MIF observed in equilibrium reactions should be attributable to the NVE alone; thus it should be possible to isolate NVE-driven MIF from MIE-driven MIF. A laboratory experiment was designed on equilibrium octanol-water partitioning of different Hg chloride species. Octanol-water partitioning of Hg depends on the hydrophobicity of the Hg species: non-polar lipophilic species partition into the octanol phase while polar species remain in the water phase. At 25 degrees Celsius, a Cl- concentration of 1 molar, and pH <2, the dominant aqueous species is HgCl42-, while non-polar HgCl2 will partition into the octanol phase. Since HgCl42- has a stronger ionic
Forman, Michele R; Zhu, Yeyi; Hernandez, Ladia M; Himes, John H; Dong, Yongquan; Danish, Robert K; James, Kyla E; Caulfield, Laura E; Kerver, Jean M; Arab, Lenore; Voss, Paula; Hale, Daniel E; Kanafani, Nadim; Hirschfeld, Steven
2014-09-01
Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the values predicted by ULC, ULR, ULG, and arm span was examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R(2) = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R(2) = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R(2) = 0.87), ULR (R(2) = 0.85), and ULG (R(2) = 0.88) was less comparable with arm span (R(2) = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children.
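The Bland-Altman agreement analysis used above (and in the liver-volume study earlier) reduces to computing the mean difference between paired measurements and its 95% limits of agreement. A minimal sketch, with synthetic paired values invented purely for illustration:

```python
import statistics

# Synthetic paired data (cm): measured length vs. a surrogate prediction.
# These numbers are illustrative only, not from the study.
measured  = [72.1, 80.4, 65.3, 90.2, 75.8]
predicted = [71.5, 81.0, 66.1, 89.5, 76.4]

# Bland-Altman statistics: the bias is the mean of the paired
# differences; the 95% limits of agreement are bias +/- 1.96 * SD.
diffs = [p - m for p, m in zip(predicted, measured)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"bias = {bias:.2f} cm, "
      f"limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) cm")
```

A full Bland-Altman plot would additionally chart each difference against the pair's mean, with horizontal lines at the bias and at the two limits of agreement.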
NASA Astrophysics Data System (ADS)
Ashwin, T. R.; McGordon, A.; Widanage, W. D.; Jennings, P. A.
2017-02-01
Despite its superior accuracy, the Pseudo Two Dimensional (P2D) porous electrode model is less preferred for real-time calculations because of its high computational expense and the complexity of obtaining the wide range of electrochemical parameters it requires. This paper presents a finite-volume-based method for re-parametrising the P2D model for any cell chemistry with uncertainty in determining precise electrochemical parameters. The re-parametrisation is achieved by solving a quadratic form of the Butler-Volmer equation and modifying the anode open circuit voltage based on experimental values. Thus the only experimental result needed to re-parametrise the cell reduces to the measurement of discharge voltage at any C-rate. The proposed method is validated against 1C discharge data and an actual drive cycle of an NCR18650BD battery with NCA chemistry when driving in an urban environment with frequent accelerations and regenerative braking events. The error limit of the present model is compared with the electrochemical prediction of a LiyCoO2 battery and found to be superior to the accuracy of the model presented in the literature.
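One common reading of the "quadratic form" of the Butler-Volmer equation, offered here as an assumption rather than the paper's confirmed method: with symmetric charge transfer (alpha = 0.5), substituting x = exp(F*eta / (2RT)) turns the current density j = j0*(x - 1/x) into the quadratic j0*x**2 - j*x - j0 = 0, whose positive root yields the overpotential eta in closed form. Parameter values below are illustrative.

```python
import math

# Physical constants: Faraday constant (C/mol), gas constant
# (J/(mol*K)), and an assumed temperature (K).
F, R, T = 96485.0, 8.314, 298.15

def overpotential(j: float, j0: float) -> float:
    """Overpotential (V) from the positive root of the quadratic
    j0*x**2 - j*x - j0 = 0, with x = exp(F*eta / (2*R*T))."""
    x = (j + math.sqrt(j * j + 4.0 * j0 * j0)) / (2.0 * j0)
    return 2.0 * R * T / F * math.log(x)

def overpotential_asinh(j: float, j0: float) -> float:
    """Equivalent closed form via asinh, as a sanity check:
    j = 2*j0*sinh(F*eta / (2*R*T))."""
    return 2.0 * R * T / F * math.asinh(j / (2.0 * j0))

# Both routes agree for an illustrative current density and exchange
# current density (A/m^2 scale values, assumed).
print(overpotential(1.0, 0.5), overpotential_asinh(1.0, 0.5))
```

Having eta in closed form avoids an inner Newton iteration at every finite-volume cell, which is one reason a quadratic (invertible) form is attractive for fast re-parametrisation.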